
Different Types of Computers

Based on their operational principle, computers are broadly categorized as analog, digital, or hybrid machines.

Analog Computers: These are almost extinct today. They differ from digital computers in that an analog computer can perform several mathematical operations simultaneously. It uses continuous variables for mathematical operations and utilizes mechanical or electrical energy.

Hybrid Computers: These computers combine features of both digital and analog computers. In this type of computer, the digital segments perform process control by converting analog signals to digital ones.

Following are some of the other important types of computers.

Mainframe Computers: Large organizations use mainframes for highly critical applications such as bulk data processing and ERP. Most mainframe computers can host multiple operating systems and operate as a number of virtual machines, and can thus substitute for several small servers.

Microcomputers: A computer with a microprocessor as its central processing unit is known as a microcomputer. Microcomputers do not occupy as much space as mainframes. A monitor, a keyboard and other similar input and output devices, computer memory in the form of RAM, and a power supply unit come packaged in a microcomputer. These computers can fit on desks or tables and serve as the best choice for single-user tasks. When supplemented with a keyboard and a mouse, microcomputers can be called personal computers. Personal computers come in a variety of forms such as desktops, laptops and personal digital assistants (PDAs). Let us look at each of these types of computers.

Desktops: A desktop is intended to be used at a single location. The spare parts of a desktop computer are readily available at relatively lower costs. Power consumption is not as critical as it is in laptops. Desktops are widely popular for daily use in workplaces and households.

Laptops: Similar in operation to desktops, laptop computers are miniaturized and optimized for mobile use. Laptops run on a single battery or an external adapter that charges the computer's batteries. They are equipped with an inbuilt keyboard, a touch pad acting as a mouse, and a liquid crystal display. Their portability and capacity to operate on battery power have served as a boon for mobile users.

Personal Digital Assistants (PDAs): A PDA is a handheld computer, popularly known as a palmtop. It has a touch screen and a memory card for storage of data. PDAs can also be effectively used as portable audio players, web browsers and smartphones. Most of them can access the Internet by means of Bluetooth or Wi-Fi communication.

Minicomputers: In terms of size and processing capacity, minicomputers lie in between mainframes and microcomputers. Minicomputers are also called mid-range systems or workstations. The term began to be popularly used in the 1960s to refer to relatively smaller third-generation computers. They took up the space that would be needed for a refrigerator or two and used transistor and core memory technologies. The 12-bit PDP-8 minicomputer of the Digital Equipment Corporation was the first successful minicomputer.

Supercomputers: Highly calculation-intensive tasks can be effectively performed by means of supercomputers. Quantum physics, mechanics, weather forecasting and molecular theory are best studied by means of supercomputers. Their ability to perform parallel processing and their well-designed memory hierarchy give supercomputers their enormous processing power.

Wearable Computers: A record-setting step in the evolution of computers was the creation of wearable computers. These computers can be worn on the body and are often used in the study of behavior modeling and human health. Military and health professionals have incorporated wearable computers into their daily routine as a part of such studies. When the user's hands and sensory organs are engaged in other activities, wearable computers are of great help in tracking human actions. Wearable computers are consistently in operation, as they do not have to be turned on and off, and are constantly interacting with the user.

DIGITAL COMPUTER GENERATIONS


In the electronic computer world, we measure technological advancement by generations. A specific system is said to belong to a specific "generation." Each generation indicates a significant change in computer design. The UNIVAC I represents the first generation. Currently we are moving toward the fourth generation.

FIRST GENERATION

The computers of the first generation (1951-1958) were physically very large machines characterized by the vacuum tube (fig. 1-6). Because they used vacuum tubes, they were very unreliable, required a lot of power to run, and produced so much heat that adequate air conditioning was critical to protect the computer parts. Compared to today's computers, they had slow input and output devices, were slow in processing, and had small storage capacities. Many of the internal processing functions were measured in thousandths of a second (milliseconds). The software (computer program) used on first generation computers was unsophisticated and machine-oriented. This meant that the programmers had to code all computer instructions and data in actual machine language. They also had to keep track of where instructions and data were stored in memory. Using such a machine language (see chapter 3) was efficient for the computer but difficult for the programmer.

Figure 1-6. - First generation computers used vacuum tubes.

SECOND GENERATION

The computers of the second generation (1959-1963) were characterized by transistors (fig. 1-7) instead of vacuum tubes. Transistors were smaller, less expensive, generated almost no heat, and required very little power. Thus second generation computers were smaller, required less power, and produced a lot less heat. The use of small, long-lasting transistors also increased processing speeds and reliability. Cost performance also improved. The storage capacity was greatly increased with the introduction of magnetic disk storage and the use of magnetic cores for main storage. High-speed card readers, printers, and magnetic tape units were also introduced. Internal processing speeds increased; functions were measured in millionths of a second (microseconds). Like the first generation, a particular computer of the second generation was designed to process either scientific or business-oriented problems, but not both. The software was also improved. Symbolic machine languages, or assembly languages, were used instead of actual machine languages. This allowed the programmer to use mnemonic operation codes for instruction operations and symbolic names for storage locations or stored variables. Compiler languages were also developed for second generation computers.

Figure 1-7. - Second generation computers used transistors.

THIRD GENERATION

The computers of this generation (1964-1970), many of which are still in use, are characterized by miniaturized circuits. This reduces the physical size of computers even more and increases their durability and internal processing speeds. One design employs solid-state logic microcircuits (fig. 1-8) in which conductors, resistors, diodes, and transistors have been miniaturized and combined on half-inch ceramic squares. Another, smaller design uses silicon wafers on which the circuit and its components are etched. The smaller circuits allow for faster internal processing speeds, resulting in faster execution of instructions. Internal processing speeds are measured in billionths of a second (nanoseconds). The faster computers make it possible to run jobs that were considered impractical or impossible on first or second generation equipment. Because the miniature components are more reliable, maintenance is reduced. New mass storage, such as the data cell, was introduced during this generation, giving a storage capacity of over 100 million characters. Drum and disk capacities and speeds have been increased, the portable disk pack has been developed, and faster, higher-density magnetic tapes have come into use. Considerable improvements were made to card readers and printers, while the overall cost has been greatly reduced. Applications using online processing, real-time processing, time sharing, multiprogramming, multiprocessing, and teleprocessing have become widely accepted. More on this in later chapters.

Figure 1-8. - Third generation computers used microcircuits.

Manufacturers of third generation computers are producing a series of similar and compatible computers. This allows programs written for one computer model to run on most larger models of the same series. Most third generation systems are designed to handle both scientific and business data processing applications. Improved program and operating software has been designed to provide better control, resulting in faster processing. These enhancements are of significant importance to the computer operator. They simplify system initialization (booting) and minimize the need for inputs to the program from a keyboard (console intervention) by the operator.

FOURTH GENERATION AND BEYOND

The computers of the fourth generation are not easily distinguished from earlier generations, yet there are some striking and important differences. The manufacturing of integrated circuits has advanced to the point where thousands of circuits (active components) can be placed on a silicon wafer only a fraction of an inch in size (the computer on a chip). This has led to what is called large scale integration (LSI) and very large scale integration (VLSI). As a result of this technology, computers are significantly smaller in physical size and lower in cost, yet they have retained large memory capacities and are ultra fast. Large mainframe computers are increasingly complex. Medium sized computers can perform the same tasks as large third generation computers. An entirely new breed of computers called microcomputers (fig. 1-9) and minicomputers are small and inexpensive, and yet they provide a large amount of computing power.

Figure 1-9. - Fourth generation desktop (personal) computer.

What is in store for the future? The computer industry still has a long way to go in the field of miniaturization. You can expect to see the power of large mainframe computers on a single super chip. Massive databases, such as the Navy's supply system, may be written into read-only memory (ROM) on a piece of equipment no bigger than a desktop calculator (more about ROM in chapter 2). The future challenge will not be in increasing the storage or the computer's power, but rather in properly and effectively using the computing power available. This is where software (programs such as assemblers, report generators, subroutine libraries, compilers, operating systems, and applications programs) will come into play (see chapter 3). Some believe developments in software, and in learning how to use the extraordinarily powerful machines we already possess, will be far more important than further developments in hardware over the next 10 to 20 years. As a result, the next 20 years (during your career) may be even more interesting and surprising than the last 20 years.

MAIN MEMORY
The main memory in a computer is called Random Access Memory (RAM). This is the part of the computer that stores the operating system, software applications and other information so that the central processing unit (CPU) has fast, direct access to them when needed to perform tasks. It is called "random access" because the CPU can go directly to any section of main memory and does not have to go through it in sequential order. RAM is one of the faster types of memory and allows data to be both read and written. When the computer is shut down, all of the content held in RAM is purged. Main memory is available in two types: Dynamic Random Access Memory (DRAM) and Static Random Access Memory (SRAM).

Process

The central processing unit is one of the most important components in the computer. It is where various tasks are performed and an output is generated. When the microprocessor completes the execution of a set of instructions and is ready to carry out the next task, it retrieves the information it needs from RAM. Typically, the directions include the address where the information to be read is located. The CPU transmits the address to the RAM's controller, which goes through the process of locating the address and reading the data.

DRAM

Dynamic Random Access Memory (DRAM) is the most common kind of main memory in a computer. It is a prevalent memory source in PCs as well as workstations. Dynamic random access memory constantly restores whatever information is being held in memory; it refreshes the data by sending millions of pulses per second to the memory storage cells.

SRAM

Static Random Access Memory (SRAM) is the second type of main memory in a computer. It is commonly used as a source of memory in embedded devices. Data held in SRAM does not have to be continually refreshed; information in this main memory remains as a "static image" until it is overwritten or is deleted when the power is switched off. SRAM is less dense but faster and more power-efficient when it is not in use, so it is a better choice than DRAM for certain uses such as the memory caches located in CPUs. Conversely, DRAM's density makes it a better choice for main memory.

Adequate RAM

The CPU is often considered the most important element in the performance of a personal computer; RAM probably comes in a close second. Having an adequate amount of RAM has a direct effect on the speed of the computer. A system that lacks enough main memory to run its applications must rely on the operating system to create additional memory resources from the hard drive by "swapping" data in and out. When the CPU must retrieve data from the disk instead of RAM, the performance of the computer slows down. Many games, video-editing and graphics programs require a significant amount of memory to function at an optimal level.
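The "random access" idea can be pictured with a toy model. The sketch below is illustrative only (real memory controllers, buses and timing are far more involved); it simply treats main memory as an array indexed by address, so any cell can be reached directly without stepping through the cells before it.

```python
# Minimal sketch of address-based ("random") access: main memory behaves like
# an array indexed by address, so any cell is reachable in constant time.

class RAM:
    def __init__(self, size_in_words):
        self.cells = [0] * size_in_words  # every cell starts zeroed

    def write(self, address, value):
        self.cells[address] = value       # direct store at the given address

    def read(self, address):
        return self.cells[address]        # direct load from the given address

ram = RAM(1024)
ram.write(42, 0xCAFE)
print(hex(ram.read(42)))                  # -> 0xcafe
```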

System Requirements

Having adequate main memory in a computer starts with meeting the recommended amount of memory for the operating system. Windows Vista Home Basic requires a minimum of 512MB of RAM; many computer experts suggest at least 1GB. The minimum requirement for Windows Vista Home Premium, Business and Ultimate is 1GB. Mac OS X 10.5 has a minimum requirement of 1GB of main memory.

Input device
An input device is any peripheral (piece of computer hardware equipment) used to provide data and control signals to an information processing system (such as a computer). Input and output devices make up the hardware interface between a computer and a device such as a scanner or 6DOF controller. Many input devices can be classified according to:

- modality of input (e.g. mechanical motion, audio, visual, etc.)
- whether the input is discrete (e.g. key presses) or continuous (e.g. a mouse's position, which, though digitized into a discrete quantity, changes fast enough to be considered continuous)
- the number of degrees of freedom involved (e.g. two-dimensional traditional mice, or three-dimensional navigators designed for CAD applications)

Pointing devices, which are input devices used to specify a position in space, can further be classified according to:

- Whether the input is direct or indirect. With direct input, the input space coincides with the display space, i.e. pointing is done in the space where visual feedback or the cursor appears. Touchscreens and light pens involve direct input; examples involving indirect input include the mouse and trackball.
- Whether the positional information is absolute (e.g. on a touch screen) or relative (e.g. with a mouse that can be lifted and repositioned).

Note that direct input is almost necessarily absolute, but indirect input may be either absolute or relative. For example, digitizing graphics tablets that do not have an embedded screen involve indirect input and sense absolute positions, and are often run in an absolute input mode, but they may also be set up to simulate a relative input mode where the stylus or puck can be lifted and repositioned.
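To make the absolute/relative distinction concrete, here is a small illustrative sketch. The screen size, delta values and function names are invented for the example and do not correspond to any real device driver API.

```python
# Illustrative only: an absolute device reports a position directly, while a
# relative device reports movement deltas that the system accumulates into a
# cursor position, clamped to the display.

WIDTH, HEIGHT = 1920, 1080                 # assumed display resolution

def absolute_to_cursor(x, y):
    """Absolute mode (touch screen, tablet): position maps directly."""
    return (min(max(x, 0), WIDTH - 1), min(max(y, 0), HEIGHT - 1))

def relative_to_cursor(cursor, dx, dy):
    """Relative mode (mouse): only movement deltas are reported."""
    x, y = cursor
    return absolute_to_cursor(x + dx, y + dy)

cursor = (960, 540)
for delta in [(15, -4), (120, 30), (-3000, 0)]:   # lifting the mouse sends no delta
    cursor = relative_to_cursor(cursor, *delta)
print(cursor)   # clamped to the screen edge after the large negative delta
```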

Types of Input Devices


Mouse

The mouse is a basic input device of the computer. Mice were an enhancement to the keyboard, which is the main input device of a computer. Mice allow the user to click on a button or interact with the computer without using both hands for typing. It is also an integral part of a Windows machine. Although most Windows forms have alternate key functions, the use of a mouse is more intuitive for end-users.

Joystick

The joystick is used mainly for playing games. Joysticks were first used in console gaming, such as on the Atari and Commodore 64. When computers began to be a part of every home, video games flooded the market. To improve gamers' experiences, hardware manufacturers developed joysticks that connected to computers. Joysticks give video games commands, so they are a part of the input device category.

Keyboard

The keyboard is the main input device for computers. For instance, boot up a computer without a keyboard and it stops, warning the user that no keyboard is attached. The keyboard is the only tool available at the command prompt, so it is a necessity for a computer. It is also used in almost every application, such as spreadsheets, email, word processing documents and coding.

Scanner

Scanners are devices that receive images from documents, such as photographs or printed pages. The scanner runs over the colors and writing on the paper, and the device sends the result to the computer. Users normally have custom software that is included with the scanner. This software displays the images that were taken from the scanner's surface.

Cameras

Two types of cameras are used for input on a computer. The digital camera is a device that takes digital images and saves them to memory. The user then connects the camera to the computer, where the images are uploaded and saved. Web cams are the other type of camera that connects to the computer. Web cams are a way for people to take images from the computer and communicate visually with other users on the Internet.

OUTPUT DEVICE
An output device is any piece of computer hardware equipment used to communicate the results of data processing carried out by an information processing system (such as a computer) to the outside world. In computing, input/output, or I/O, refers to the communication between an information processing system (such as a computer), and the outside world. Inputs are the signals or data sent to the system, and outputs are the signals or data sent by the system to the outside. Examples of output devices:

- Speakers
- Headphones
- Screen (Monitor)
- Printer

PAGING MEMORY
In computer operating systems, paging is one of the memory-management schemes by which a computer can store and retrieve data from secondary storage for use in main memory. In the paging memory-management scheme, the operating system retrieves data from secondary storage in same-size blocks called pages. The main advantage of paging is that it allows the physical address space of a process to be noncontiguous. Before paging was used, systems had to fit whole programs into storage contiguously, which caused various storage and fragmentation problems.[1] Paging is an important part of virtual memory implementation in most contemporary general-purpose operating systems, allowing them to use disk storage for data that does not fit into physical random-access memory (RAM).

The main functions of paging are performed when a program tries to access pages that are not currently mapped to physical memory (RAM). This situation is known as a page fault. The operating system must then take control and handle the page fault, in a manner invisible to the program. Therefore, the operating system must:

1. Determine the location of the data in auxiliary storage.
2. Obtain an empty page frame in RAM to use as a container for the data.
3. Load the requested data into the available page frame.
4. Update the page table to show the new data.
5. Return control to the program, transparently retrying the instruction that caused the page fault.

Because RAM is faster than auxiliary storage, paging is avoided until there is not enough RAM to store all the data needed. When this occurs, a page in RAM is moved to auxiliary storage, freeing up space in RAM for use. Thereafter, whenever a page residing in secondary storage is needed, a page in RAM is saved to auxiliary storage so that the requested page can be loaded into the space left behind by the old page. Efficient paging systems must determine the page to swap out by choosing one that is least likely to be needed within a short time. There are various page replacement algorithms that try to do this. Most operating systems use some approximation of the least recently used (LRU) page replacement algorithm (LRU itself cannot be implemented exactly on current hardware) or a working-set-based algorithm. If a page in RAM is modified (i.e. if the page becomes dirty) and then chosen to be swapped, it must either be written to auxiliary storage or simply discarded. To further increase responsiveness, paging systems may employ various strategies to predict which pages will be needed soon so that they can be preemptively loaded.
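The page-replacement idea can be sketched in a few lines of code. The simulation below is a minimal, illustrative model of demand paging with exact LRU replacement (which, as noted above, real hardware can only approximate); the frame count and reference string are made-up values.

```python
# A minimal sketch of demand paging with least-recently-used (LRU) replacement.

from collections import OrderedDict

def simulate_paging(reference_string, num_frames):
    frames = OrderedDict()          # page -> None, ordered by recency of use
    page_faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)        # page hit: mark as most recently used
        else:
            page_faults += 1                # page fault: page must be loaded from disk
            if len(frames) >= num_frames:
                frames.popitem(last=False)  # evict the least recently used page
                # a real OS would write the victim back only if it is dirty
            frames[page] = None
    return page_faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(simulate_paging(refs, num_frames=3))   # number of page faults with 3 frames
```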

FLOPPY DISK
A floppy disk is a data storage medium that is composed of a disk of thin, flexible ("floppy") magnetic storage medium sealed in a square or rectangular plastic carrier lined with fabric that removes dust particles. Floppy disks are read and written by a floppy disk drive, or FDD.[1] Invented by the American information technology company IBM, floppy disks in 8-inch, 5¼-inch and 3½-inch forms enjoyed nearly three decades as a popular and ubiquitous form of data storage and exchange, from the mid-1970s to the late 1990s. While floppy disk drives still have some limited uses, especially with legacy industrial computer equipment, they have now been superseded by USB flash drives, external hard disk drives, optical discs, memory cards and computer networks.

A small motor in the drive rotates the diskette at a regulated speed, while a second motor-operated mechanism moves the magnetic read/write head (or heads, in a double-sided drive) along the surface of the disk. Both read and write operations require physically contacting the read/write head to the disk media, an action accomplished by a "disk load" solenoid.[2] To write data onto the disk, current is sent through a coil in the head. The magnetic field of the coil magnetizes spots on the disk as it rotates; the change in magnetization encodes the digital data. To read data, the tiny voltages induced in the head coil by the magnetization on the disk are detected, amplified by the disk drive electronics, and sent to the floppy disk controller. The controller separates the data from the stream of pulses coming from the drive, decodes the data, tests for errors, and sends the data on to the host computer system.

A blank diskette has a uniform, featureless coating of magnetic oxide on it. A pattern of magnetized tracks, each broken up into sectors, is initially written to the diskette so that the diskette controller can find data on the disk. The tracks are concentric rings around the diskette, with spaces between the tracks where no data is written. Other gaps, where no user data is written, are provided between the sectors and at the end of the track to allow for slight speed variations in the disk drive. These gaps are filled with padding bytes that are discarded by the diskette controller. Each sector of data has a header that identifies the sector location on the disk. An error-checking cyclic redundancy check (CRC) is written into the sector headers and at the end of the user data so that the diskette controller can detect errors when reading the data. Some errors (soft errors) can be handled by retrying the read operation. Other errors are permanent, and the disk controller will signal failure to the operating system if multiple tries cannot recover the data.

Formatting a blank diskette is usually done by a utility program supplied by the computer operating system manufacturer. Generally the disk formatting utility will also set up an empty file storage directory system on the diskette, as well as initializing the sectors and tracks. Areas of the diskette that cannot be used for storage due to some flaw can be locked out so that the operating system does not attempt to use the "bad sectors". Because this could be quite time consuming, many environments had a "quick format" option which would skip the error checking process. During the heyday of diskette usage, diskettes preformatted for popular computers were sold.
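The error-checking idea mentioned above can be illustrated with a short sketch. The code below computes a CRC-16-CCITT checksum; the polynomial and initial value are assumptions chosen for illustration rather than a faithful model of any particular floppy controller, but the principle is the same: a recomputed CRC that disagrees with the stored one reveals a corrupted sector.

```python
# Illustrative cyclic redundancy check, bit-by-bit CRC-16-CCITT (poly 0x1021).

def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

sector = bytes(512)                       # a blank 512-byte sector
stored_crc = crc16_ccitt(sector)          # written after the data at write time
assert crc16_ccitt(sector) == stored_crc  # a clean read recomputes the same CRC

corrupted = bytes([1]) + sector[1:]       # flip data to simulate a read error
print(crc16_ccitt(corrupted) == stored_crc)   # -> False: error detected
```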
The flexible magnetic disk, commonly called the floppy disk,[3] revolutionized computer disk storage for small systems and became ubiquitous in the 1980s and 1990s in its use with personal computers and home computers to distribute software, transfer data, and create backups. Before hard disks became affordable, floppy disks were often also used to store a computer's operating system (OS), in addition to application software and data. Most home computers had a primary OS (and often BASIC) stored permanently in on-board ROM, with the option of loading a more advanced disk operating system from a floppy, whether it be a proprietary system, CP/M, or later, DOS.

By the early 1990s, the increasing size of software meant that many programs demanded multiple diskettes; a large package like Windows or Adobe Photoshop could use a dozen disks or more. By 1996, there were an estimated five billion floppy disks in use.[4] Throughout the 1990s, distribution of larger packages gradually switched to CD-ROM (or online distribution for smaller programs). Mechanically incompatible higher-density media were introduced (e.g. the Iomega Zip disk) and were briefly popular, but adoption was limited by the competition between proprietary formats and the need to buy expensive drives for computers where the media would be used. In some cases, such as with the Zip drive, the failure in market penetration was exacerbated by the release of newer, higher-capacity versions of the drive and media that were not backward compatible with the original drives, thus fragmenting the user base between new users and early adopters who were unwilling to pay for an upgrade so soon. A chicken-or-egg scenario ensued, with consumers wary of making costly investments in unproven and rapidly changing technologies, with the result that none of the technologies were able to prove themselves and stabilize their market presence. Soon, inexpensive recordable CDs with even greater capacity, which were also compatible with the existing infrastructure of CD-ROM drives, made the new floppy technologies redundant. The last advantage of floppy disks, reusability, was diminished by the extremely low cost of CD-R media, and finally countered by rewritable CDs. Later, pervasive networking, as well as advancements in flash-based devices and the widespread adoption of the USB interface, provided alternatives that, in turn, made even optical storage obsolete for some purposes.

OPERATING SYSTEM
An operating system (OS) is software, consisting of programs and data, that runs on computers, manages computer hardware resources,[1] and provides common services for efficient execution of various application software. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between application programs and the computer hardware,[2][3] although the application code is usually executed directly by the hardware and frequently calls the OS or is interrupted by it. Operating systems are found on almost any device that contains a computer, from cellular phones and video game consoles to supercomputers and web servers. Examples of popular modern operating systems for personal computers are Microsoft Windows, Mac OS X, and GNU/Linux.[4]

Operating system types


As computers have progressed and developed, so have their operating systems. Below is a basic list of the different categories of operating systems, with a few examples of operating systems that fall into each category. Many computer operating systems fall into more than one of the categories below.

GUI - Short for Graphical User Interface, a GUI operating system contains graphics and icons and is commonly navigated using a computer mouse. Some examples of GUI operating systems:

System 7.x, Windows 98, Windows CE

Multi-user - A multi-user operating system allows multiple users to use the same computer at the same time and/or at different times. Some examples of multi-user operating systems: Linux, Unix, Windows 2000

Multiprocessing - An operating system capable of supporting and utilizing more than one computer processor. Some examples of multiprocessing operating systems: Linux, Unix, Windows 2000

Multitasking - An operating system that is capable of allowing multiple software processes to run at the same time. Some examples of multitasking operating systems: Unix, Windows 2000

Multithreading - Operating systems that allow different parts of a software program to run concurrently (a minimal code sketch follows this list). Some examples of multithreading operating systems: Linux, Unix, Windows 2000
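As a minimal sketch of the multithreading idea, the snippet below starts two parts of one program that run concurrently while the operating system schedules them; the thread names and delays are arbitrary illustration values, and Python's threading module is used only because it is built in.

```python
# Two threads of the same program running concurrently.

import threading
import time

def worker(name, delay):
    for step in range(3):
        time.sleep(delay)                 # pretend to do some work
        print(f"{name} finished step {step}")

threads = [threading.Thread(target=worker, args=(f"thread-{i}", 0.1)) for i in range(2)]
for t in threads:
    t.start()                             # both threads now run concurrently
for t in threads:
    t.join()                              # wait for both to finish
```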

COMPILER
A compiler is a computer program (or set of programs) that transforms source code written in a programming language (the source language) into another computer language (the target language, often having a binary form known as object code). The most common reason for wanting to transform source code is to create an executable program. The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a lower-level language (e.g., assembly language or machine code). If the compiled program can run on a computer whose CPU or operating system is different from the one on which the compiler runs, the compiler is known as a cross-compiler. A program that translates from a low-level language to a higher-level one is a decompiler. A program that translates between high-level languages is usually called a language translator, source-to-source translator, or language converter. A language rewriter is usually a program that translates the form of expressions without a change of language. A compiler is likely to perform many or all of the following operations: lexical analysis, preprocessing, parsing, semantic analysis (syntax-directed translation), code generation, and code optimization. Program faults caused by incorrect compiler behavior can be very difficult to track down and work around; therefore, compiler implementors invest a lot of time ensuring the correctness of their software. The term compiler-compiler is sometimes used to refer to a parser generator, a tool often used to help create the lexer and parser.
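Three of the phases listed above (lexical analysis, parsing, and code generation) can be seen in miniature in the toy sketch below. It is not any real compiler: the grammar, the instruction names, and the stack-machine target are all invented for illustration.

```python
# Toy "compiler" for arithmetic expressions, targeting an imaginary stack machine.

import re

def lex(source):                                   # lexical analysis: text -> tokens
    return re.findall(r"\d+|[+\-*/()]", source)

def parse(tokens):                                 # parsing: tokens -> syntax tree
    pos = 0
    def expr():
        nonlocal pos
        node = term()
        while pos < len(tokens) and tokens[pos] in "+-":
            op = tokens[pos]; pos += 1
            node = (op, node, term())
        return node
    def term():
        nonlocal pos
        node = factor()
        while pos < len(tokens) and tokens[pos] in "*/":
            op = tokens[pos]; pos += 1
            node = (op, node, factor())
        return node
    def factor():
        nonlocal pos
        tok = tokens[pos]; pos += 1
        if tok == "(":
            node = expr(); pos += 1                # skip the closing ")"
            return node
        return ("num", int(tok))
    return expr()

def codegen(node, code=None):                      # code generation: tree -> instructions
    if code is None:
        code = []
    if node[0] == "num":
        code.append(f"PUSH {node[1]}")
    else:
        codegen(node[1], code)
        codegen(node[2], code)
        code.append({"+": "ADD", "-": "SUB", "*": "MUL", "/": "DIV"}[node[0]])
    return code

for instruction in codegen(parse(lex("2 + 3 * (4 - 1)"))):
    print(instruction)    # PUSH 2, PUSH 3, PUSH 4, PUSH 1, SUB, MUL, ADD
```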

SECONDARY MEMORY
Secondary storage (also known as external memory or auxiliary storage) differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfers the desired data using an intermediate area in primary storage. Secondary storage does not lose the data when the device is powered down; it is non-volatile. Per unit, it is typically also two orders of magnitude less expensive than primary storage. Consequently, modern computer systems typically have two orders of magnitude more secondary storage than primary storage, and data is kept there for a longer time. In modern computers, hard disk drives are usually used as secondary storage. The time taken to access a given byte of information stored on a hard disk is typically a few thousandths of a second, or milliseconds. By contrast, the time taken to access a given byte of information stored in random access memory is measured in billionths of a second, or nanoseconds. This illustrates the significant access-time difference which distinguishes solid-state memory from rotating magnetic storage devices: hard disks are typically about a million times slower than memory. Rotating optical storage devices, such as CD and DVD drives, have even longer access times. With disk drives, once the disk read/write head reaches the proper placement and the data of interest rotates under it, subsequent data on the track are very fast to access. As a result, in order to hide the initial seek time and rotational latency, data are transferred to and from disks in large contiguous blocks.

When data reside on disk, block access to hide latency offers a ray of hope in designing efficient external memory algorithms. Sequential or block access on disks is orders of magnitude faster than random access, and many sophisticated paradigms have been developed to design efficient algorithms based upon sequential and block access. Another way to reduce the I/O bottleneck is to use multiple disks in parallel in order to increase the bandwidth between primary and secondary memory.[3] Some other examples of secondary storage technologies are flash memory (e.g. USB flash drives or keys), floppy disks, magnetic tape, paper tape, punched cards, standalone RAM disks, and Iomega Zip drives. Secondary storage is often formatted according to a file system format, which provides the abstraction necessary to organize data into files and directories, while also providing additional information (called metadata) describing the owner of a certain file, the access time, the access permissions, and other information. Most computer operating systems use the concept of virtual memory, allowing utilization of more primary storage capacity than is physically available in the system. As primary memory fills up, the system moves the least-used chunks (pages) to secondary storage devices (to a swap file or page file), retrieving them later when they are needed. The more of these retrievals from slower secondary storage that are necessary, the more overall system performance is degraded.
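A rough, back-of-the-envelope sketch shows why transfers happen in large contiguous blocks. The drive parameters below are assumptions chosen to be typical of an older hard disk, not measurements of any particular device.

```python
# Why block access hides latency: each random access pays seek + rotational
# latency, while the transfer itself is comparatively cheap.

avg_seek_ms       = 9.0        # time to move the head to the right track (assumed)
avg_rotational_ms = 4.2        # roughly half a rotation at 7200 RPM (assumed)
transfer_mb_per_s = 100.0      # sustained sequential transfer rate (assumed)

def time_to_read_ms(bytes_requested, accesses):
    """Total time when the data is split across `accesses` random requests."""
    latency = accesses * (avg_seek_ms + avg_rotational_ms)
    transfer = (bytes_requested / (transfer_mb_per_s * 1e6)) * 1000
    return latency + transfer

one_mb = 1_000_000
print(time_to_read_ms(one_mb, accesses=1))     # one large block: about 23 ms
print(time_to_read_ms(one_mb, accesses=256))   # the same data in 256 random reads: about 3389 ms
```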

ALGORITHM & FLOW CHART


In mathematics and computer science, an algorithm is an effective method expressed as a finite list[1] of well-defined instructions[2] for calculating a function[3]. Algorithms are used for calculation, data processing, and automated reasoning. Starting from an initial state and initial input (perhaps null)[4], the instructions describe a computation that, when executed, will proceed through a finite[5] number of well-defined successive states, eventually producing "output"[6] and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input[7]. A partial formalization of the concept began with attempts to solve the Entscheidungsproblem (the "decision problem") posed by David Hilbert in 1928. Subsequent formalizations were framed as attempts to define "effective calculability"[8] or "effective method";[9] those formalizations included the Gödel-Herbrand-Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's "Formulation 1" of 1936, and Alan Turing's Turing machines of 1936-37 and 1939.

Flow chart of an algorithm (Euclid's algorithm) for calculating the greatest common divisor (g.c.d.) of two numbers a and b in locations named A and B. The algorithm proceeds by successive subtractions in two loops: IF the test B ≥ A yields "yes" (or true) (more accurately, the number b in location B is greater than or equal to the number a in location A) THEN the algorithm specifies B ← B - A (meaning the number b - a replaces the old b). Similarly, IF A > B, THEN A ← A - B. The process terminates when (the contents of) B is 0, yielding the g.c.d. in A. (Algorithm derived from Scott 2009:13; symbols and drawing style from Tausworthe 1977.)
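The flow chart translates directly into a few lines of code. The sketch below is the subtraction form of Euclid's algorithm described in the caption; it assumes positive integer inputs.

```python
# Subtraction-based Euclid's algorithm, following the flow chart: reduce B
# while B >= A, reduce A while A > B, and stop when B reaches 0.

def gcd_by_subtraction(a, b):
    A, B = a, b
    while B != 0:
        if B >= A:
            B = B - A          # B <- B - A
        else:                  # A > B
            A = A - B          # A <- A - B
    return A                   # the g.c.d. remains in location A

print(gcd_by_subtraction(1599, 650))   # -> 13
```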

DATABASE MANAGEMENT SYSTEM

A Database Management System (DBMS) is a set of computer programs that controls the creation, maintenance, and use of a database. It allows organizations to place control of database development in the hands of database administrators (DBAs) and other specialists. A DBMS is a system software package that manages the use of an integrated collection of data records and files known as a database. It allows different user application programs to easily access the same database. DBMSs may use any of a variety of database models, such as the network model or relational model. In large systems, a DBMS allows users and other software to store and retrieve data in a structured way. Instead of having to write computer programs to extract information, users can ask simple questions in a query language. Thus, many DBMS packages provide fourth-generation programming languages (4GLs) and other application development features. A DBMS helps to specify the logical organization for a database and to access and use the information within it. It provides facilities for controlling data access, enforcing data integrity, managing concurrency, and restoring the database from backups. A DBMS also provides the ability to logically present database information to users.
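To make the "ask simple questions in a query language" point concrete, here is a minimal sketch using Python's built-in sqlite3 module. The table, column names, and rows are invented for illustration, and a production DBMS deployment involves much more (users, permissions, backups, concurrency control).

```python
# A declarative query replaces hand-written record-scanning code.

import sqlite3

conn = sqlite3.connect(":memory:")          # an in-memory database for the example
cur = conn.cursor()
cur.execute("CREATE TABLE employees (name TEXT, department TEXT, salary REAL)")
cur.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Ana", "IT", 5200.0), ("Budi", "HR", 4100.0), ("Citra", "IT", 6100.0)],
)
conn.commit()

for row in cur.execute(
    "SELECT name, salary FROM employees WHERE department = ? ORDER BY salary DESC",
    ("IT",),
):
    print(row)                              # ('Citra', 6100.0) then ('Ana', 5200.0)
conn.close()
```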

Components

DBMS Engine - accepts logical requests from the various other DBMS subsystems, converts them into physical equivalents, and actually accesses the database and data dictionary as they exist on a storage device.

Data Definition Subsystem - helps the user create and maintain the data dictionary and define the structure of the files in a database.

Data Manipulation Subsystem - helps the user to add, change, and delete information in a database and query it for valuable information. Software tools within the data manipulation subsystem are most often the primary interface between the user and the information contained in a database. It allows the user to specify their logical information requirements.

Application Generation Subsystem - contains facilities to help users develop transaction-intensive applications. It usually requires that the user perform a detailed series of tasks to process a transaction. It facilitates easy-to-use data entry screens, programming languages, and interfaces.

Data Administration Subsystem - helps users manage the overall database environment by providing facilities for backup and recovery, security management, query optimization, concurrency control, and change management.

NUMBER SYSTEMS
There are infinitely many ways to represent a number. The four representations commonly associated with modern computers and digital electronics are decimal, binary, octal, and hexadecimal. Decimal (base 10) is the way most human beings represent numbers. Decimal is sometimes abbreviated as dec.

Decimal counting goes: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, and so on.

Binary (base 2) is the natural way most digital circuits represent and manipulate numbers. (Common misspellings are "bianary", "bienary", or "binery".) Binary numbers are sometimes represented by preceding the value with '0b', as in 0b1011. Binary is sometimes abbreviated as bin. Binary counting goes: 0, 1, 10, 11, 100, 101, 110, 111, 1000, 1001, 1010, 1011, 1100, 1101, 1110, 1111, 10000, 10001, and so on.

Octal (base 8) was previously a popular choice for representing digital circuit numbers in a form that is more compact than binary. Octal is sometimes abbreviated as oct. Octal counting goes: 0, 1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 14, 15, 16, 17, 20, 21, and so on.

Hexadecimal (base 16) is currently the most popular choice for representing digital circuit numbers in a form that is more compact than binary. (Common misspellings are "hexdecimal", "hexidecimal", "hexedecimal", or "hexodecimal".) Hexadecimal numbers are sometimes represented by preceding the value with '0x', as in 0x1B84. Hexadecimal is sometimes abbreviated as hex. Hexadecimal counting goes: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F, 10, 11, and so on.

All four number systems are equally capable of representing any number. Furthermore, a number can be perfectly converted between the various number systems without any loss of numeric value. At first blush, it seems like using any number system other than human-centric decimal is complicated and unnecessary. However, since the job of electrical and software engineers is to work with digital circuits, engineers require number systems that can best transfer information between the human world and the digital circuit world. It turns out that the way in which a number is represented can make it easier for the engineer to perceive the meaning of the number as it applies to a digital circuit. In other words, the appropriate number system can actually make things less complicated.
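A quick, lossless round trip between the four systems can be demonstrated with Python's built-in conversion functions; the value 0x1B84 is borrowed from the text above.

```python
# One number, four representations, no loss of value in conversion.

value = 0x1B84                  # hexadecimal literal; the number itself is base-independent
print(value)                    # decimal      -> 7044
print(bin(value))               # binary       -> 0b1101110000100
print(oct(value))               # octal        -> 0o15604
print(hex(value))               # hexadecimal  -> 0x1b84

# Parsing strings in any base back to the same integer shows nothing was lost.
assert int("0b1101110000100", 2) == int("15604", 8) == int("1B84", 16) == 7044
```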

CENTRAL PROCESSING UNIT (CPU)


The central processing unit (CPU) is the portion of a computer system that carries out the instructions of a computer program, and is the primary element carrying out the computer's functions. The central processing unit carries out each instruction of the program in sequence, to perform the basic arithmetical, logical, and input/output operations of the system. This term has been in use in the computer industry at least since the early 1960s.[1] The form, design and implementation of CPUs have changed dramatically since the earliest examples, but their fundamental operation remains much the same. Early CPUs were custom-designed as a part of a larger, sometimes one-of-a-kind, computer. However, this costly method of designing custom CPUs for a particular application has largely given way to the
development of mass-produced processors that are made for one or many purposes. This standardization trend generally began in the era of discrete transistor mainframes and minicomputers and has rapidly accelerated with the popularization of the integrated circuit (IC). The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of these digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in everything from automobiles to cell phones and children's toys. The fundamental operation of most CPUs, regardless of the physical form they take, is to execute a sequence of stored instructions called a program. The program is represented by a series of numbers that are kept in some kind of computer memory. There are four steps that nearly all CPUs use in their operation: fetch, decode, execute, and writeback. The first step, fetch, involves retrieving an instruction (which is represented by a number or sequence of numbers) from program memory. The location in program memory is determined by a program counter (PC), which stores a number that identifies the current position in the program. In other words, the program counter keeps track of the CPU's place in the program. After an instruction is fetched, the PC is incremented by the length of the instruction word in terms of memory units.[5] Often the instruction to be fetched must be retrieved from relatively slow memory, causing the CPU to stall while waiting for the instruction to be returned. This issue is largely addressed in modern processors by caches and pipeline architectures (see below). The instruction that the CPU fetches from memory is used to determine what the CPU is to do. In the decode step, the instruction is broken up into parts that have significance to other portions of the CPU. The way in which the numerical instruction value is interpreted is defined by the CPU's instruction set architecture (ISA).[6] Often, one group of numbers in the instruction, called the opcode, indicates which operation to perform. The remaining parts of the number usually provide information required for that instruction, such as operands for an addition operation. Such operands may be given as a constant value (called an immediate value), or as a place to locate a value: a register or a memory address, as determined by some addressing mode. In older designs the portions of the CPU responsible for instruction decoding were unchangeable hardware devices. However, in more abstract and complicated CPUs and ISAs, a microprogram is often used to assist in translating instructions into various configuration signals for the CPU. This microprogram is sometimes rewritable so that it can be modified to change the way the CPU decodes instructions even after it has been manufactured. After the fetch and decode steps, the execute step is performed. During this step, various portions of the CPU are connected so they can perform the desired operation. If, for instance, an addition operation was requested, an arithmetic logic unit (ALU) will be connected to a set of inputs and a set of outputs. The inputs provide the numbers to be added, and the outputs will contain the final sum. The ALU contains the circuitry to perform simple arithmetic and logical operations on the inputs (like addition and bitwise operations). 
If the addition operation produces a result too large for the CPU to handle, an arithmetic overflow flag in a flags register may also be set. The final step, writeback, simply "writes back" the results of the execute step to some form of memory. Very often the results are written to some internal CPU register for quick access by subsequent instructions. In other cases results may be written to slower, but cheaper and larger, main memory. Some types of instructions manipulate the program counter rather than directly produce result data. These are generally called "jumps" and facilitate behavior like loops, conditional program execution (through the use of a conditional jump), and functions in programs.[7] Many instructions will also change the state of digits in a "flags" register. These flags can be used to influence how a program behaves, since they often indicate the
outcome of various operations. For example, one type of "compare" instruction considers two values and sets a number in the flags register according to which one is greater. This flag could then be used by a later jump instruction to determine program flow. After the execution of the instruction and writeback of the resulting data, the entire process repeats, with the next instruction cycle normally fetching the next-in-sequence instruction because of the incremented value in the program counter. If the completed instruction was a jump, the program counter will be modified to contain the address of the instruction that was jumped to, and program execution continues normally. In more complex CPUs than the one described here, multiple instructions can be fetched, decoded, and executed simultaneously. This section describes what is generally referred to as the "classic RISC pipeline", which in fact is quite common among the simple CPUs used in many electronic devices (often called microcontrollers). It largely ignores the important role of CPU cache, and therefore the access stage of the pipeline.
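The four-step cycle described above can be modeled in a few dozen lines. The sketch below is a toy, not a model of any real instruction set architecture: the tuple-based instruction format, the opcodes, and the sample program are all invented for illustration.

```python
# Toy fetch-decode-execute-writeback loop with a program counter, registers,
# a flags "register", and a conditional jump.

def run(program, registers):
    pc = 0                                        # program counter
    flags = {"greater": False}
    while pc < len(program):
        instr = program[pc]                       # FETCH the next instruction
        pc += 1                                   # increment the program counter
        op, a, b = instr                          # DECODE opcode and operands
        if op == "LOAD":                          # EXECUTE the operation ...
            registers[a] = b                      # ... and WRITE BACK to a register
        elif op == "ADD":
            registers[a] = registers[a] + registers[b]
        elif op == "CMP":                         # a "compare": only sets a flag
            flags["greater"] = registers[a] > registers[b]
        elif op == "JUMP_IF_GREATER":             # conditional jump: modify the PC
            if flags["greater"]:
                pc = a
    return registers

program = [
    ("LOAD", "r0", 0),               # r0: running total
    ("LOAD", "r1", 0),               # r1: loop counter
    ("LOAD", "r2", 5),               # r2: limit
    ("LOAD", "r3", 1),               # r3: the constant 1
    ("ADD",  "r1", "r3"),            # instruction 4: counter += 1
    ("ADD",  "r0", "r1"),            # total += counter
    ("CMP",  "r2", "r1"),            # is the limit still greater than the counter?
    ("JUMP_IF_GREATER", 4, None),    # if so, loop back to instruction 4
]
print(run(program, {}))              # r0 ends up holding 1+2+3+4+5 = 15
```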
