
Management Information Systems By Rahul Bhatia ASIA PACIFIC INSTITUTE OF MANAGEMENT

The Five Generations of Computers

First Generation (1940-1956) Vacuum Tubes


The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often enormous, taking up entire rooms. They were very expensive to operate, and in addition to using a great deal of electricity, they generated a lot of heat, which was often the cause of malfunctions. First-generation computers relied on machine language, the lowest-level programming language understood by computers, to perform operations, and they could only solve one problem at a time. Input was based on punched cards and paper tape, and output was displayed on printouts. The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC was the first commercially produced computer; the first unit was delivered to the U.S. Census Bureau in 1951.

Second Generation (1956-1963) Transistors


Transistors replaced vacuum tubes and ushered in the second generation of computers. The transistor was invented in 1947 but did not see widespread use in computers until the late 1950s. The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient and more reliable than their first-generation predecessors. Though the transistor still generated a great deal of heat that subjected the computer to damage, it was a vast improvement over the vacuum tube. Second-generation computers still relied on punched cards for input and printouts for output. Second-generation computers moved from cryptic binary machine language to symbolic, or assembly, languages, which allowed programmers to specify instructions in words. High-level programming languages were also being developed at this time, such as early versions of COBOL and FORTRAN. These were also the first computers that stored their instructions in their memory, which moved from a magnetic drum to magnetic core technology. The first computers of this generation were developed for the atomic energy industry.

Third Generation (1964-1975) Integrated Circuits


The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically increased the speed and efficiency of computers. Instead of punched cards and printouts, users interacted with third generation computers through keyboards and monitors and interfaced with an operating system, which allowed the device to run many different applications at one time with a central program that monitored the memory. Computers for the first time became accessible to a mass audience because they were smaller and cheaper than their predecessors.

Fourth Generation (1971-1989) Microprocessors


The microprocessor brought the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip. What in the first generation filled an entire room could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer, from the central processing unit and memory to input/output controls, on a single chip. In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas of life as more and more everyday products began to use microprocessors. As these small computers became more powerful, they could be linked together to form networks, which eventually led to the development of the Internet. Fourth generation computers also saw the development of GUIs, the mouse and handheld devices.

Fifth Generation (Present and Beyond) Artificial Intelligence


Fifth generation computing devices, based on artificial intelligence, are still in development, though there are some applications, such as voice recognition, that are being used today. The use of parallel processing and superconductors is helping to make artificial intelligence a reality. Quantum computation and molecular and nanotechnology will radically change the face of computers in years to come. The goal of fifth-generation computing is to develop devices that respond to natural language input and are capable of learning and self-organization.

Characteristics of computer
1. Speed: The computer is a very high-speed electronic device. The operations on the data inside the computer are performed through electronic circuits according to the given instructions. The data and instructions flow along these circuits at high speed, close to the speed of light. A computer can perform millions or billions of operations on the data in one second. Because the computer generates signals during the operation process, its speed is usually measured in megahertz (MHz) or gigahertz (GHz); hertz is the unit of frequency, and one megahertz is one million cycles per second. Different computers have different speeds.

2. Arithmetical and Logical Operations: A computer can perform arithmetical and logical operations. In arithmetic operations, it performs addition, subtraction, multiplication and division on numeric data. In logical operations, it compares numerical data as well as alphabetical data.

3. Accuracy: In addition to being very fast, the computer is also a very accurate device. It gives accurate output results provided that the correct input data and set of instructions are given to it. This means that the output depends entirely on the given instructions and input data. If the input data is incorrect, the resulting output will be incorrect. In computer terminology this is known as garbage-in, garbage-out.
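As a minimal sketch (not part of the original slides), the snippet below illustrates points 2 and 3: arithmetic and logical (comparison) operations on numeric and alphabetical data, and garbage-in garbage-out. All values are hypothetical.

```python
# Arithmetic operations on numeric data (point 2)
a, b = 12, 5
print(a + b, a - b, a * b, a / b)   # 17 7 60 2.4

# Logical (comparison) operations on numeric and alphabetical data
print(a > b)                # True
print("apple" < "banana")   # True (lexicographic comparison)

# Garbage-in, garbage-out (point 3): the machine computes accurately,
# but a wrong input still produces a wrong result.
price_entered = 1000        # suppose the correct price was 100
total = price_entered * 3
print(total)                # 3000 -- accurate arithmetic on inaccurate data
```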

4. Reliability: The electronic components in a modern computer have a very low failure rate. A modern computer can perform very complicated calculations without creating any problem and produces consistent (reliable) results. In general, computers are very reliable; many personal computers have never needed a service call. Communications are also very reliable and generally available whenever needed.

5. Storage: A computer has internal storage (memory) as well as external or secondary storage. In secondary storage, a large amount of data and programs (sets of instructions) can be stored for future use. The stored data and programs are available at any time for processing. Similarly, information downloaded from the internet can be saved on the storage media.

6. Retrieving data and programs: The data and programs stored on the storage media can be retrieved very quickly for further processing. This is also a very important feature of a computer.

7. Automation: A computer can perform operations automatically, without interfering with the user during the operations. It automatically controls the different devices attached to it and executes the program instructions one by one.

8. Versatility: Versatile means flexible. A modern computer can perform different kinds of tasks one by one or simultaneously. This is one of the most important features of a computer. At one moment you are playing a game on the computer; the next moment you are composing and sending emails. In colleges and universities, computers are used to deliver lectures to students. The talent of a computer depends on its software.

9. Communications: Today the computer is widely used to exchange messages or data through computer networks all over the world. For example, information can be sent or received through the internet with the help of a computer. This is one of the most important features of modern information technology.

10. Diligence: A computer can work continually for hours without making any error. It does not get tired; after hours of work it performs the operations with the same accuracy and speed as the first one.

11. No Feelings: The computer is an electronic machine. It has no feelings. It detects objects on the basis of the instructions given to it. Based on our feelings, taste, knowledge and experience, we can make certain decisions and judgments in our daily life. Computers, on the other hand, cannot make such judgments on their own; their judgments are entirely based on the instructions given to them.

12. Consistency: People often have difficulty repeating their instructions again and again. For example, a lecturer finds it difficult to deliver the same lecture in a classroom again and again. A computer can repeat actions consistently (again and again) without losing its concentration, for example:
To run a spell checker (built into a word processor) for checking spellings in a document.
To play multimedia animations for training purposes.
To deliver a lecture through a computer in a classroom.
A computer will carry out the activity in the same way every time. You can listen to a lecture or perform an action again and again.

13. Precision: Computers are not only fast and consistent; they also perform operations very accurately and precisely. In manual calculation, for example, rounding fractional values (values with a decimal point) can change the actual result. On a computer, however, you can keep the accuracy and precision up to whatever level you desire, so lengthy calculations remain accurate.
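To make the rounding point concrete, here is a small sketch (mine, not from the slides) using Python's standard decimal module, which lets you choose the precision you desire:

```python
from decimal import Decimal, getcontext

# Manual-style rounding partway through a calculation changes the result.
rounded = round(1 / 3, 2) * 3      # round 1/3 to 0.33, then multiply
print(rounded)                      # 0.99 -- the rounding error persists

# A computer can keep precision up to the level you desire.
getcontext().prec = 28              # keep 28 significant digits
exact = Decimal(1) / Decimal(3) * 3
print(exact)                        # 0.9999... to 28 digits, far closer to 1
```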

Random access memory


Static RAM: no refreshing needed; 6 to 8 MOS transistors are required to form one memory cell; information is stored as a voltage level in a flip-flop.
Dynamic RAM: refreshed periodically; 3 to 4 transistors are required to form one memory cell; information is stored as a charge on the gate-to-substrate capacitance.

Read only memory


Programmable ROM: A programmable read-only memory (PROM), field-programmable read-only memory (FPROM), or one-time programmable non-volatile memory (OTP NVM) is a form of digital memory where the setting of each bit is locked by a fuse or antifuse. Such PROMs are used to store programs permanently. The key difference from a strict ROM is that the programming is applied after the device is constructed. These types of memories are frequently seen in video game consoles, mobile phones, radio-frequency identification (RFID) tags, implantable medical devices, high-definition multimedia interface (HDMI) equipment and many other consumer and automotive electronics products.
UVEPROM (ultraviolet erasable programmable ROM)
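As an illustrative analogy (mine, not from the source), one-time programmability can be modeled as a bit array in which each fuse can be blown exactly once and never restored:

```python
# Hypothetical model of one-time programmable (OTP) memory. Each bit
# starts at 1 (fuse intact); programming can only blow a fuse (1 -> 0),
# mirroring how a PROM bit is locked permanently.

class PromChip:
    def __init__(self, size_bits: int):
        self.bits = [1] * size_bits     # all fuses intact at manufacture

    def program(self, index: int, value: int) -> None:
        if value == 1 and self.bits[index] == 0:
            raise ValueError("cannot restore a blown fuse")  # irreversible
        self.bits[index] &= value       # 1 -> 0 allowed, 0 -> 1 impossible

rom = PromChip(8)
rom.program(3, 0)              # blow the fuse at bit 3
print(rom.bits)                # [1, 1, 1, 0, 1, 1, 1, 1]
try:
    rom.program(3, 1)          # attempting to restore the bit
except ValueError as e:
    print(e)                   # cannot restore a blown fuse
```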

Read only memory


Electronically Erasable and Programmable ROM
EEPROM (also written E2PROM and pronounced "e-eprom," "double-e prom" or simply "e-squared") stands for Electrically Erasable Programmable Read-Only Memory and is a type of non-volatile memory used in computers and other electronic devices to store small amounts of data that must be saved when power is removed, e.g., calibration tables or device configuration. When larger amounts of static data are to be stored (such as in USB flash drives) a specific type of EEPROM such as flash memory is more economical than traditional EEPROM devices. EEPROMs are realized as arrays of floating-gate transistors.

EEPROM is user-modifiable read-only memory (ROM) that can be erased and reprogrammed (written to) repeatedly through the application of higher than normal electrical voltage generated externally or internally in the case of modern EEPROMs. Unlike EPROM chips, EEPROMs do not need to be removed from the computer to be modified. However, an EEPROM chip has to be erased and reprogrammed in its entirety, not selectively. It also has a limited life - that is, the number of times it can be reprogrammed is limited to tens or hundreds of thousands of times. In an EEPROM that is frequently reprogrammed while the computer is in use, the life of the EEPROM can be an important design consideration.

Flash memory

Local area network



Definition: A local area network (LAN) supplies networking capability to a group of computers in close proximity to each other, such as in an office building, a school, or a home. A LAN is useful for sharing resources like files, printers, games or other applications. A LAN in turn often connects to other LANs, and to the Internet or other WANs. Most local area networks are built with relatively inexpensive hardware such as Ethernet cables, network adapters, and hubs. Wireless LAN and other more advanced LAN hardware options also exist. Specialized operating system software may be used to configure a local area network. For example, most flavors of Microsoft Windows provide a software package called Internet Connection Sharing (ICS) that supports controlled access to LAN resources.



A large PC or a minicomputer serves as the hub of the LAN.
A high-capacity hard disk attached to the hub serves as a file server.
PCs stationed in the various offices are attached to the network through communication cards and by cable running from the card to the network cable.

Advantages of LAN
Workstations can share peripheral devices like printers. This is cheaper than buying a printer for every workstation.
Workstations do not necessarily need their own hard disk or CD-ROM drives, which makes them cheaper to buy than stand-alone PCs.
Users can save their work centrally on the network's file server. This means that they can retrieve their work from any workstation on the network; they don't need to go back to the same workstation all the time.
Users can communicate with each other and transfer data between workstations very easily.
One copy of each application package, such as a word processor or spreadsheet, can be loaded onto the file server and shared by all users. When a new version comes out, it only has to be loaded onto the server instead of onto every workstation.

Disadvantages of connecting computers in a LAN
Special security measures are needed to stop users from using programs and data that they should not have access to.
Networks are difficult to set up and need to be maintained by skilled technicians.
If the file server develops a serious fault, all the users are affected, rather than just one user in the case of a stand-alone machine.

10BASE2
10BASE2 (also known as cheapernet, thin Ethernet, thinnet, and thinwire) is a variant of Ethernet that uses thin coaxial cable (or similar, as opposed to the thicker cable used in 10BASE5 networks), terminated with BNC connectors. During the mid to late 1980s this was the dominant 10 Mbit/s Ethernet standard.

Network design

10BASE2 coax cables had a maximum length of 185 meters (607 ft). The maximum practical number of nodes that can be connected to a 10BASE2 segment is limited to 30. In a 10BASE2 network, each segment of cable is connected to the transceiver (which is usually built into the network adaptor) using a BNC T-connector, with one segment connected to each female connector of the T.

As was the case with most other high-speed buses, Ethernet segments had to be terminated with a resistor at each end. Each end of the cable had a 50 ohm (Ω) resistor attached. Typically this resistor was built into a male BNC connector and attached to the last device on the bus, most commonly directly to the T-connector on a workstation, though it does not technically have to be. A few devices, such as Digital's DEMPR and DESPR, had a built-in terminator and so could only be used at one physical end of the cable run. If termination was missing, or if there was a break in the cable, the AC signal on the bus was reflected, rather than dissipated, when it reached the end. This reflected signal was indistinguishable from a collision, and so no communication could take place.

Comparisons to 10BASE-T
10BASE2 networks cannot generally be extended without breaking service temporarily for existing users, and the presence of many joints in the cable also makes them very vulnerable to accidental or malicious disruption. There were proprietary wallport/cable systems that claimed to avoid these problems (e.g. SaferTap), but these never became widespread, possibly due to a lack of standardization.

10BASE2 systems do have a number of advantages over 10BASE-T. They do not need the 10BASE-T hub, so the hardware cost is very low, and wiring can be particularly easy since only a single wire run is needed, which can be sourced from the nearest computer. These characteristics mean that 10BASE2 is ideal for a small network of two or three machines, perhaps in a home where easily concealed wiring may be an advantage. For a larger complex office network, the difficulties of tracing poor connections make it impractical. Unfortunately for 10BASE2, by the time multiple home computer networks became common, the format had already been practically superseded. It is now very difficult to find 10BASE2-compatible network cards as distinct pieces of equipment, and integrated LAN controllers on motherboards don't have the connector, although the underlying logic may still be present.

BNC connector

Thin Ethernet (10B2 / IEEE 802.3a)

A summary of the properties of this type of cabling is given below:
Segment length < 185 m and > 0.5 m
Up to 30 attached nodes
Cable flexible and cheap
Integrated or external transceiver connected via a BNC 'T' connector
Used mainly for workgroups
Difficult to manage (i.e. breaks in the cable are difficult to locate)
Speed 10 Mbit/s, baseband, coaxial cable

10BASE5
10BASE5 (also known as thick Ethernet or thicknet) is the original "full spec" variant of Ethernet cable, using cable similar to RG-8/U coaxial cable but with extra braided shielding. 10BASE5 has been superseded due to the immense demand for high-speed networking, the low cost of Category 5 Ethernet cable, and the popularity of 802.11 wireless networks. Both 10BASE2 and 10BASE5 have become obsolete.

Name origination
The name 10BASE5 is derived from several characteristics of the physical medium. The 10 refers to its transmission speed of 10 Mbit/s. The BASE is short for baseband signalling as opposed to broadband, and the 5 stands for the maximum segment length of 500 metres (1,600 ft).

Network design
10BASE5 coax cables had a maximum length of 500 meters (1,640 ft). The maximum number of nodes that can be connected to a 10BASE5 segment is 100. Transceivers may be installed only at precise 2.5-metre intervals. This distance was chosen so as not to correspond to the wavelength of the signal; this ensures that the reflections from multiple taps are not in phase. These suitable points are marked on the cable with black bands. The cable must be one linear run; T-connections are not allowed.

As is the case with most other high-speed buses, segments must be terminated with a resistor at each end. For coaxial-cable-based Ethernet, each end of the cable has a 50 ohm (Ω) resistor attached. Typically this resistor is built into a male N connector and attached to the end of the cable just past the last device. If termination is missing, or if there is a break in the cable, the AC signal on the bus is reflected, rather than dissipated, when it reaches the end. This reflected signal is indistinguishable from a collision, and so no communication is possible.

Transceivers can be connected to cable segments with N connectors, or via a vampire tap, which allows new nodes to be added while existing connections are live. A vampire tap clamps onto the cable, forcing a spike to pierce through the outer shielding to contact the inner conductor while other spikes bite into the outer braided shield. Care must be taken to keep the outer shield from touching the spike; installation kits include a "coring tool" to drill through the outer layers and a "braid pick" to clear stray pieces of the outer shield.
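The 2.5-metre tap-spacing rule lends itself to a quick check. The helper below is a hypothetical sketch (function and constant names are mine) that validates a proposed transceiver position against the constraints just described:

```python
# Hypothetical validator for 10BASE5 vampire-tap placement, based on the
# rules above: taps only on 2.5 m marks, on a segment of at most 500 m.

SEGMENT_MAX_M = 500.0
TAP_INTERVAL_M = 2.5

def tap_position_ok(position_m: float) -> bool:
    """Return True if a tap at position_m satisfies both rules."""
    if not 0 <= position_m <= SEGMENT_MAX_M:
        return False
    # The position must fall on a 2.5 m mark (the black bands on the cable).
    return position_m % TAP_INTERVAL_M == 0

print(tap_position_ok(247.5))   # True  -- on a 2.5 m mark
print(tap_position_ok(246.0))   # False -- between marks
print(tap_position_ok(512.5))   # False -- beyond the 500 m segment
```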

AUI cable

10BASE-T (Ethernet over twisted pair)


Ethernet over twisted pair refers to the use of cables that contain insulated copper wires twisted together in pairs for the physical layer of an Ethernet network, that is, a network in which the Ethernet protocol provides the data link layer. Other Ethernet cable standards use coaxial cable or optical fiber. There are several different standards for this copper-based physical medium. The most widely used are 10BASE-T, 100BASE-TX, and 1000BASE-T, running at 10 Mbit/s (also written Mbps), 100 Mbit/s, and 1000 Mbit/s (1 Gbit/s), respectively. These three standards all use the same connectors. Higher-speed implementations nearly always support the lower speeds as well, so that in most cases different generations of equipment can be freely mixed. They use 8-position modular connectors, usually called RJ45 in the context of Ethernet over twisted pair. The cables usually used are four-pair twisted pair cable (though 10BASE-T and 100BASE-TX only actually require two of the pairs). Each of the three standards supports both full-duplex and half-duplex communication. According to the standards, they all operate over distances of up to 100 meters.

The common names for the standards derive from aspects of the physical media. The number refers to the theoretical maximum transmission speed in megabits per second (Mbit/s). The BASE is short for baseband, meaning that there is no frequency-division multiplexing (FDM) or other frequency-shifting modulation in use; each signal has full control of the wire, on a single frequency. The T designates twisted pair cable, where the pair of wires for each signal is twisted together to reduce radio frequency interference and crosstalk between pairs (FEXT and NEXT). Where there are several standards for the same transmission speed, they are distinguished by a letter or digit following the T, such as TX.
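Since the naming convention is regular (number = speed, BASE = baseband, letters = medium), a tiny decoder makes it concrete. This is an illustrative sketch of the rule just described, not a standards-complete parser:

```python
import re

# Illustrative decoder for Ethernet standard names of the form
# <speed>BASE-<medium>, following the naming rule described above.

def decode_ethernet_name(name: str) -> dict:
    m = re.fullmatch(r"(\d+)BASE-?([A-Z]+\d*)", name.upper())
    if m is None:
        raise ValueError(f"not a recognised name: {name}")
    speed, medium = m.groups()
    media = {"T": "twisted pair", "TX": "twisted pair (TX variant)"}
    return {
        "speed_mbit_s": int(speed),           # number = speed in Mbit/s
        "signalling": "baseband",             # BASE = baseband, no FDM
        "medium": media.get(medium, medium),  # T = twisted pair, etc.
    }

for name in ("10BASE-T", "100BASE-TX", "1000BASE-T"):
    print(name, decode_ethernet_name(name))
```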

Bits And Bytes Conversion Tables


1 Bit = 1 binary digit
8 Bits = 1 Byte
1024 Bytes = 1 Kilobyte
1024 Kilobytes = 1 Megabyte
1024 Megabytes = 1 Gigabyte
1024 Gigabytes = 1 Terabyte
1024 Terabytes = 1 Petabyte
1024 Petabytes = 1 Exabyte
1024 Exabytes = 1 Zettabyte
1024 Zettabytes = 1 Yottabyte
1024 Yottabytes = 1 Brontobyte
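A short helper (mine, assuming the binary 1024-based units from the table above) converts a raw byte count into a human-readable figure:

```python
# Convert a raw byte count into the binary (1024-based) units from the
# table above. Illustrative helper; the function name is my own.

UNITS = ["Bytes", "Kilobytes", "Megabytes", "Gigabytes", "Terabytes",
         "Petabytes", "Exabytes", "Zettabytes", "Yottabytes", "Brontobytes"]

def human_readable(num_bytes: float) -> str:
    for unit in UNITS:
        if num_bytes < 1024:
            return f"{num_bytes:.2f} {unit}"
        num_bytes /= 1024          # each step up the table divides by 1024
    return f"{num_bytes:.2f} {UNITS[-1]}"

print(human_readable(8))              # 8.00 Bytes
print(human_readable(5 * 1024**3))    # 5.00 Gigabytes
```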

Network topologies

10BASE2 BUS TOPOLOGY

BUS TOPOLOGY
1. Bus networks use a common backbone to connect all devices. A single cable, the backbone, functions as a shared communication medium that devices attach or tap into with an interface connector.
2. A device wanting to communicate with another device on the network sends a broadcast message onto the wire that all other devices see, but only the intended recipient actually accepts and processes the message.
3. Ethernet bus topologies are relatively easy to install and don't require much cabling compared to the alternatives. 10BASE2 ("ThinNet") and 10BASE5 ("ThickNet") were both popular Ethernet cabling options many years ago for bus topologies.
4. However, bus networks work best with a limited number of devices. If more than a few dozen computers are added to a network bus, performance problems will likely result. In addition, if the backbone cable fails, the entire network effectively becomes unusable.
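A toy simulation (mine, not from the slides; station names are hypothetical) captures point 2 above: every station on the shared backbone sees each frame, but only the addressed station keeps it.

```python
# Toy simulation of a bus topology: a frame placed on the shared
# backbone reaches every station, but only the intended recipient
# accepts it; everyone else silently discards it.

class BusNetwork:
    def __init__(self):
        self.stations = {}

    def attach(self, name):
        self.stations[name] = []    # each station's inbox

    def broadcast(self, sender, recipient, message):
        # Every attached device "sees" the frame on the wire...
        for name, inbox in self.stations.items():
            if name == recipient:   # ...but only the recipient keeps it.
                inbox.append((sender, message))

bus = BusNetwork()
for station in ("A", "B", "C", "D"):
    bus.attach(station)

bus.broadcast("A", "C", "hello")
print(bus.stations["C"])   # [('A', 'hello')]
print(bus.stations["B"])   # [] -- saw the frame but discarded it
```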

Advantages of Bus Topology


1. It is easy to handle and implement.
2. It is best suited for small networks.
3. Short cable length compared to star topology.
4. Resilient architecture, owing to its inherent simplicity.

Disadvantages of Bus Topology


1. The cable length is limited. This limits the number of stations that can be connected.
2. This network topology can perform well only for a limited number of nodes.
3. Fault diagnosis is difficult.
4. Reconfiguration with repeaters is difficult.

Ring topology
In a ring network, every device has exactly two neighbors for communication purposes. All messages travel through a ring in the same direction (either "clockwise" or "counterclockwise"). A failure in any cable or device breaks the loop and can take down the entire network. To implement a ring network, one typically uses FDDI, SONET, or Token Ring technology. Ring topologies are found in some office buildings or school campuses.

RING TOPOLOGY
Also known as a ring network, the ring topology is a type of computer network configuration where each network computer and device is connected to two others, forming a large circle (or similar shape). Each packet is sent around the ring until it reaches its final destination. Today, the ring topology is seldom used.

Token ring
At the start, a free token circulates on the ring; this is a data frame which, to all intents and purposes, is an empty vessel for transporting data. To use the network, a machine first has to capture the free token and replace the data with its own message.

For example, suppose machine 1 wants to send some data to machine 4 on a six-machine ring. Machine 1 first captures the free token, then writes its data and the recipient's address onto it. The packet of data is then sent to machine 2, which reads the address, realizes it is not its own, and passes it on to machine 3. Machine 3 does the same and passes the token on to machine 4. This time it is the correct address, and so machine 4 reads the message. It cannot, however, release a free token onto the ring; it must first send the message back to machine 1 with an acknowledgement to say that it has received the data. The receipt is then sent to machine 5, which checks the address, realizes that it is not its own, and forwards it on to the next machine in the ring, machine 6. Machine 6 does the same and forwards the data to machine 1, which sent the original message. Machine 1 recognizes the address, reads the acknowledgement from machine 4, and then releases the free token back onto the ring, ready for the next machine to use.

That is the basics of Token Ring, and it shows how data is sent, received and acknowledged; but Token Ring also has a built-in management and recovery system which makes it very fault tolerant. A brief outline of that self-maintenance system follows the sketch below.
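First, the send/acknowledge walkthrough above maps onto a short simulation (an illustrative sketch of mine; machine numbering follows the example):

```python
# Sketch of the token-passing sequence described above: a frame travels
# one direction around the ring; the addressee reads it and marks an
# acknowledgement; the original sender removes it and frees the token.

RING = [1, 2, 3, 4, 5, 6]              # machines in ring order

def send(src: int, dst: int, data: str) -> None:
    frame = {"dst": dst, "data": data, "ack": False}
    pos = RING.index(src)
    while True:
        pos = (pos + 1) % len(RING)    # pass the frame to the next machine
        machine = RING[pos]
        if machine == dst and not frame["ack"]:
            print(f"machine {dst} reads: {frame['data']}")
            frame["ack"] = True        # acknowledgement travels onward
        elif machine == src:
            print(f"machine {src} sees ack={frame['ack']}, frees the token")
            return                     # token released for the next sender

send(1, 4, "some data")
# machine 4 reads: some data
# machine 1 sees ack=True, frees the token
```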

Token Ring Self Maintenance


When a Token Ring network starts up, the machines all take part in a negotiation to decide who will control the ring, or become the 'Active Monitor' to give it its proper title. This contention is won by the machine with the highest MAC address that is participating in the procedure, and all other machines become 'Standby Monitors'. The job of the Active Monitor is to make sure that none of the machines are causing problems on the network, and to re-establish the ring after a break or an error has occurred. The Active Monitor performs ring polling every seven seconds and ring purges when there appears to be a problem. Ring polling allows all machines on the network to find out who is participating in the ring and to learn the address of their Nearest Active Upstream Neighbour (NAUN). Ring purges reset the ring after an interruption or loss of data is reported.

Each machine knows the address of its Nearest Active Upstream Neighbour. This is an important function in a Token Ring, as it provides the information required to re-establish the ring when machines enter or leave it. When a machine enters the ring, it performs a lobe test to verify that its own connection is working properly; if it passes, it sends a voltage to the hub, which operates a relay to insert it into the ring. If a problem occurs anywhere on the ring, the machine immediately after the fault will cease to receive signals. If this situation continues for a short period of time, it initiates a recovery procedure which assumes that its NAUN is at fault; the outcome of this procedure either removes its neighbour from the ring or removes the machine itself.
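The monitor contention step can be sketched in a few lines: among the participating machines, the highest MAC address wins and the rest become Standby Monitors. The station names and MAC addresses below are made up for illustration.

```python
# Sketch of Active Monitor contention as described above. The MAC
# addresses are hypothetical; comparison is numeric on the 48-bit value.

ring_stations = {
    "ws1": "00:1A:2B:3C:4D:5E",
    "ws2": "00:1A:2B:3C:4D:61",
    "ws3": "00:0F:11:22:33:44",
}

def elect_active_monitor(stations: dict) -> str:
    # Strip the colon separators and compare the MACs as integers.
    return max(stations, key=lambda s: int(stations[s].replace(":", ""), 16))

winner = elect_active_monitor(ring_stations)
print(f"Active Monitor: {winner}")                 # ws2 (highest MAC)
print([s for s in ring_stations if s != winner])   # the Standby Monitors
```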

Star topology: 10BASE-T (Ethernet over twisted pair)
Many home networks use the star topology. A star network features a central connection point called a "hub" that may be a hub, switch or router. Devices typically connect to the hub with Unshielded Twisted Pair (UTP) Ethernet.

Compared to the bus topology, a star network generally requires more cable, but a failure in any star network cable will only take down one computer's network access and not the entire LAN. (If the hub fails, however, the entire network also fails.)

STAR topology
APPLICATION: the best way to integrate data transfer among several terminals.
COMPLEXITY: very complex.
PERFORMANCE: a direct function of the capacity of the central node.
VULNERABILITY: extremely vulnerable if the switch malfunctions.
EXPANDABILITY: severely restricted if all the ports are occupied.

Advantages of Star Topology


1. Due to its centralized nature, the topology offers simplicity of operation.
2. It also achieves isolation of each device in the network.
3. No contention for access; theoretically, there are no collisions between two systems.
4. A dedicated connection exists at the time a data transfer takes place.

Disadvantage of Star Topology


1. The network operation depends on the functioning of the central hub; failure of the central hub leads to the failure of the entire network.
2. Very long cable lengths.
3. A lot of heat is generated when all the wires run through wiring hubs.
4. Extremely expensive.
5. Difficult to expand under circumstances of congestion.

A co-axial cable

Maximum length: 500 meters (10BASE5 thick coax)

Maximum length: 185 meters (10BASE2 thin coax)

Maximum length: 100 meters (10BASE-T twisted pair)

LAN switch

8 port switch

16 port switch

UTP (unshielded twisted pair)

Ethernet

Brief history of the Internet


The Internet was the result of some visionary thinking by people in the early 1960s who saw great potential value in allowing computers to share information on research and development in scientific and military fields. J.C.R. Licklider of MIT first proposed a global network of computers in 1962, and moved over to the Defense Advanced Research Projects Agency (DARPA) in late 1962 to head the work to develop it. Leonard Kleinrock of MIT, and later UCLA, developed the theory of packet switching, which was to form the basis of Internet connections. Lawrence Roberts of MIT connected a Massachusetts computer with a California computer in 1965 over dial-up telephone lines. It showed the feasibility of wide area networking, but also showed that the telephone line's circuit switching was inadequate; Kleinrock's packet switching theory was confirmed. Roberts moved over to DARPA in 1966 and developed his plan for ARPANET. These visionaries, and many more left unnamed here, are the real founders of the Internet.

The Internet, then known as ARPANET, was brought online in 1969 under a contract let by the renamed Advanced Research Projects Agency (ARPA) which initially connected four major computers at universities in the southwestern US (UCLA, Stanford Research Institute, UCSB, and the University of Utah). The contract was carried out by BBN of Cambridge, MA under Bob Kahn and went online in December 1969. By June 1970, MIT, Harvard, BBN, and Systems Development Corp (SDC) in Santa Monica, Cal. were added. By January 1971, Stanford, MIT's Lincoln Labs, Carnegie-Mellon, and Case-Western Reserve U were added. In months to come, NASA/Ames, Mitre, Burroughs, RAND, and the U of Illinois plugged in. After that, there were far too many to keep listing here.

The Internet was designed in part to provide a communications network that would work even if some of the sites were destroyed by nuclear attack. If the most direct route was not available, routers would direct traffic around the network via alternate routes. The early Internet was used by computer experts, engineers, scientists, and librarians. There was nothing friendly about it. There were no home or office personal computers in those days, and anyone who used it, whether a computer professional or an engineer or scientist or librarian, had to learn to use a very complex system.

E-mail was adapted for ARPANET by Ray Tomlinson of BBN in 1972. He picked the @ symbol from the available symbols on his teletype to link the username and address. The telnet protocol, enabling logging on to a remote computer, was published as a Request for Comments (RFC) in 1972. RFCs are a means of sharing developmental work throughout the community. The ftp protocol, enabling file transfers between Internet sites, was published as an RFC in 1973, and from then on RFCs were available electronically to anyone who had use of the ftp protocol.

Libraries began automating and networking their catalogs in the late 1960s, independently of ARPA. The visionary Frederick G. Kilgour of the Ohio College Library Center (now OCLC, Inc.) led networking of Ohio libraries during the '60s and '70s. In the mid-1970s more regional consortia from New England, the Southwest states, the Middle Atlantic states, etc., joined with Ohio to form a national, later international, network. Automated catalogs, not very user-friendly at first, became available to the world, first through telnet or the awkward IBM variant TN3270, and only many years later through the web. See The History of OCLC: http://www.walthowe.com/navnet/history.html

Internet network map

A router is a device that connects two networks, frequently over large distances. It understands one or more network protocols, such as IP or IPX. A router accepts packets on at least two network interfaces, and forwards packets from one interface to another. Routers may be programmed to filter out some packets, and to dynamically change the route by which packets are sent. Routers often use different media on each interface; for instance, a router might have one Ethernet port and one ISDN port.
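A minimal sketch of the forwarding decision, using Python's standard ipaddress module (the routes and interface names are invented for illustration):

```python
import ipaddress

# Minimal sketch of a router's forwarding decision: pick the most
# specific (longest-prefix) route whose network contains the destination.

ROUTES = [
    (ipaddress.ip_network("192.168.1.0/24"), "eth0"),
    (ipaddress.ip_network("10.0.0.0/8"),     "isdn0"),
    (ipaddress.ip_network("0.0.0.0/0"),      "eth1"),   # default route
]

def forward(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [(net, iface) for net, iface in ROUTES if addr in net]
    # Longest prefix wins: the most specific matching network.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(forward("192.168.1.77"))   # eth0
print(forward("10.20.30.40"))    # isdn0
print(forward("8.8.8.8"))        # eth1 (default)
```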

In information technology, a protocol (from the Greek protocollon, which was a leaf of paper glued to a manuscript volume, describing its contents) is the special set of rules that end points in a telecommunication connection use when they communicate. Protocols exist at several levels in a telecommunication connection. For example, there are protocols for data interchange at the hardware device level and protocols for data interchange at the application program level. In the standard model known as Open Systems Interconnection (OSI), there are one or more protocols at each layer in the telecommunication exchange that both ends of the exchange must recognize and observe. Protocols are often described in an industry or international standard.

On the Internet, there are the TCP/IP protocols, consisting of:
Transmission Control Protocol (TCP), which uses a set of rules to exchange messages with other Internet points at the information packet level.
Internet Protocol (IP), which uses a set of rules to send and receive messages at the Internet address level.
Additional protocols, including the Hypertext Transfer Protocol (HTTP) and File Transfer Protocol (FTP), each with defined sets of rules to use with corresponding programs elsewhere on the Internet.
There are many other Internet protocols, such as the Border Gateway Protocol (BGP) and the Dynamic Host Configuration Protocol (DHCP).
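To see the layering in practice, here is a small sketch: TCP carries the bytes (via Python's standard socket library) while HTTP supplies the application-level rules. It assumes network access; example.com is a placeholder host.

```python
import socket

# Sketch of protocol layering: TCP (via the socket API) moves the bytes,
# while HTTP defines the rules for what those bytes mean.

HOST = "example.com"   # placeholder host; assumes network access

# TCP layer: open a reliable byte stream to port 80.
with socket.create_connection((HOST, 80), timeout=5) as conn:
    # HTTP layer: a request formatted according to the HTTP/1.1 rules.
    request = (f"GET / HTTP/1.1\r\n"
               f"Host: {HOST}\r\n"
               f"Connection: close\r\n\r\n")
    conn.sendall(request.encode("ascii"))
    response = b""
    while chunk := conn.recv(4096):    # read until the server closes
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())   # e.g. HTTP/1.1 200 OK
```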

How does the internet work
