

Secondary storage is a category of computer storage used to hold data that is not in active use. It is usually slower but of higher capacity than primary storage, and is almost always non-volatile. Storage devices in this category include:

- CD, CD-R, CD-RW
- DVD
- Floppy disk
- Hard disk
- Magnetic tape
- Paper tape
- Punch card
- Flash memory

1. CD
A compact disc (or CD) is an optical disc used to store digital data, originally developed for storing digital audio. A standard compact disc, often known as an audio CD to differentiate it from later variants, stores audio data in a format compliant with the Red Book standard. An audio CD consists of several tracks stored using 16-bit PCM coding at a sampling rate of 44.1 kHz. Most compact discs are 120 mm in diameter and can store up to 74 minutes of audio. Compact disc technology was later adapted for use as a data storage device, known as a CD-ROM.

An image of a compact disc - Pencil included for scale

History
The compact disc was developed in 1979 by Philips and Sony. Philips developed the general manufacturing process, based on its earlier Laserdisc technology, while Sony contributed the error-correction method. Early compact disc prototypes produced by Philips were 115 mm in diameter, with 14-bit resolution and a 60-minute capacity. Sony insisted on 16-bit resolution and a 74-minute capacity, which increased the size of the disc to 120 mm. The capacity is rumored to have been increased so that the disc could hold even the slowest recordings of Beethoven's 9th Symphony. Compact discs were first mass-produced in 1982, in Langenhagen near Hanover, Germany.

Physical details
Compact discs are made from a 1.2 mm thick disc of polycarbonate plastic coated with a much thinner aluminium layer (originally gold, which is sometimes still used for its data longevity), which is protected by a film of lacquer. The lacquer can be printed with a label; common printing methods for compact discs are silkscreening and offset printing. CDs are available in a range of sizes, but by far the most common is 120 mm in diameter, with a 74-minute audio capacity and 650 MB of data capacity (see Storage capacity below).

The information on a standard CD is encoded as a spiral track of pits moulded into the top of the polycarbonate layer; the areas between pits are known as lands. Each pit is approximately 125 nm deep by 500 nm wide, and varies from 850 nm to 3.5 μm in length. The spacing between the tracks is 1.6 μm. To grasp the scale of the pits and lands, if the disc were enlarged to the size of a stadium, a pit would be approximately the size of a grain of sand. The spiral begins at the center of the disc and proceeds outwards to the edge, which allows the different size formats available.

A CD is read by focusing a 780 nm wavelength semiconductor laser through the bottom of the polycarbonate layer. The difference in height between pits and lands is one quarter of the wavelength of the laser light, leading to a half-wavelength phase difference between the light reflected from a pit and from its surrounding land. The resulting destructive interference reduces the intensity of the reflected light compared to when the laser is focused on a land alone. By measuring this intensity with a photodiode, the drive is able to read the data from the disc.

The pits and lands themselves do not directly represent the zeroes and ones of binary data. Instead, a change from pit to land or land to pit indicates a one, while no change indicates a zero. This in turn is decoded by reversing the Eight-to-Fourteen Modulation (EFM) used in mastering the disc, finally revealing the raw data stored on the disc.
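The transition rule described above (a pit-to-land or land-to-pit change reads as a one, no change as a zero) can be sketched in a few lines of Python. The helper name and the 'P'/'L' surface representation are invented for illustration, and reversing the actual EFM code is omitted:

```python
def decode_transitions(surface):
    """Decode a pit/land sequence: a transition (pit->land or
    land->pit) reads as a 1, no transition reads as a 0.
    `surface` is a string of 'P' (pit) and 'L' (land) samples."""
    bits = []
    for prev, cur in zip(surface, surface[1:]):
        bits.append(1 if prev != cur else 0)
    return bits

# Transitions occur at positions 1->2 and 4->5:
print(decode_transitions("LLPPPL"))  # [0, 1, 0, 0, 1]
```

A real drive would then run these channel bits through the inverse EFM table to recover the stored bytes.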

Audio format
The data format of the disc, known as the Red Book standard, was laid out by the Dutch electronics company Philips, which owns the rights to the licensing of the CDDA logo that appears on discs. In broad terms, the format is two-channel stereo 16-bit PCM encoding at a 44.1 kHz sampling rate. Reed-Solomon error correction allows a CD to be scratched to a certain degree and still play back correctly.

The unusual sampling rate of 44.1 kHz is inherited from a method of converting digital audio into a video signal for storage on video tape, which was the most affordable way to store it at the time the CD specification was being developed. This technology could store 3 samples in a single horizontal line. A standard NTSC video signal has 245 usable lines per field and 60 fields per second, which works out to 44,100 samples per second. Similarly, PAL has 294 lines and 50 fields, which also gives 44,100 samples per second. This system could store either 14-bit samples with some error correction, or 16-bit samples with almost no error correction; there was a debate over which to use when the compact disc was designed, and 16 bits prevailed. Hence the decision to use the 16-bit, 44.1 kHz sampling rate. The Sony PCM-1630, an early CD mastering machine, was just a modified U-Matic VCR.
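The arithmetic behind the 44.1 kHz figure is easy to verify from the line and field counts above:

```python
# 3 audio samples fit in one usable video line; multiply by
# lines per field and fields per second for each TV standard.
ntsc = 245 * 60 * 3   # NTSC: 245 lines/field * 60 fields/s * 3 samples/line
pal = 294 * 50 * 3    # PAL:  294 lines/field * 50 fields/s * 3 samples/line
print(ntsc, pal)      # 44100 44100
```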

Storage capacity
The compact disc specification recommends a linear velocity of 1.22 m/s and a track pitch of 1.59 micrometres. This leads to a maximum audio program length of 74 minutes on a 120 mm disc, or around 650 MB of data on a CD-ROM. However, to allow for variations in manufacturing, a disc with slightly more densely packed data is allowable. By deliberately making a disc with this higher density, capacity can be increased while remaining within or near the specification: a linear velocity of 1.1975 m/s and a track pitch of 1.497 micrometres leads to a new maximum capacity of 79 minutes and 40 seconds, or 702 MB. Although such discs allow for little variation in manufacturing, they are generally reliable, and only a small number of players are known to reject them.

Some blank discs (see Recordability) are available in 90- and even 99-minute configurations. Besides the increased density of their tracks, these run into two other technical problems. The first is that the maximum capacity a disc can declare itself as having is, according to the recordable CD specification, just under 80 minutes. The second is that timing markers on the disc with a value between 90 and 99 minutes are normally used to indicate to the player that it is reading the beginning of the disc, not the end. These problems, as well as variable compatibility with CD recorders and software, mean that discs larger than 80 minutes are generally regarded as a niche product.

Another technique to increase the capacity of a disc is to store data in the lead-out groove that normally indicates the end of the disc; an extra minute or two of recording is often possible this way. However, such discs can cause problems in playback when the end of the disc is reached.
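The 650 MB figure can be checked from the audio parameters given earlier. The sector constants used below (75 sectors per second, 2048 user bytes out of each 2352-byte sector) come from the CD-ROM Mode 1 format and are assumed here rather than stated in the text:

```python
# Capacity arithmetic for a standard 74-minute disc.
seconds = 74 * 60
# Audio: 44.1 kHz, stereo (2 channels), 16-bit (2-byte) samples.
audio_bytes = seconds * 44100 * 2 * 2
# CD-ROM Mode 1 keeps 2048 of each 2352-byte sector as user data,
# at 75 sectors per second (assumed constants, see lead-in).
data_bytes = seconds * 75 * 2048
print(audio_bytes)                # 783216000
print(data_bytes)                 # 681984000
print(round(data_bytes / 2**20))  # 650  (MiB, marketed as "650 MB")
```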

Injection molding is used to mass-produce compact discs. A 'stamper' is made from the original media (audio tape, data disc, etc.) by writing to a photosensitive dye with a laser. This dye is then etched, leaving the data track. It is then plated to make a positive version of the CD. Polycarbonate is liquified and injected into the mold cavity, where the stamper transfers the pattern of pits and lands to the polycarbonate disc. The disc is then metallized with aluminum and lacquer-coated.

There are also CD-recordable discs, which can be recorded by a laser beam using a CD-R writer (most often connected to a computer, though standalone units are also available) and can be played on most compact disc players. CD-R recordings are permanent and cannot be recorded more than once, so the process is also called "burning" a CD. (See also CD burner and overburning.)

CD-RW is a medium that allows recording on the same disc over and over again. A CD-RW does not have as great a difference in the reflectivity of lands and bumps as a pressed CD or a CD-R, so many CD audio players cannot read CD-RW discs, although the majority of standalone DVD players can.

Recordable compact discs are injection molded with a "blank" data spiral. A photosensitive dye is then applied, and then the discs are metallized and lacquer coated. The write laser of the CD burner changes the characteristics of the dye to allow the read laser of a standard CD player to see the data as it would an injection molded compact disc.

Copy protection
The compact disc specification does not include any copy protection mechanism, and discs can be easily duplicated or their contents "ripped" to a computer. Starting in early 2002, record companies attempted to market "copy-protected" compact discs. These rely on deliberate errors introduced into the data recorded on the disc. The intent is that the error correction in a music player will allow the music to play as normal, while computer CD-ROM drives will fail with errors. This approach is the subject of a cat-and-mouse game: not all current drives fail, and copying software is being adapted to cope with these damaged data tracks, while the recording industry works on further approaches. Philips has stated that such discs, which do not meet the Red Book specification, are not permitted to bear the trademarked Compact Disc Digital Audio logo. It also seems likely that Philips' new models of CD recorders will be designed to be able to record from these 'protected' discs. However, there has been great public outcry over copy-protected discs, because many see them as a threat to fair use.

2. CD-R

A CD-R (Compact Disc-Recordable) is a thin (1.2 mm) polycarbonate disc with a 120 mm or 80 mm diameter that is mainly used to store music or data. However, unlike conventional CD media, a CD-R has a recording layer of dye instead of stamped metal. A standard CD-R has a storage capacity of 74 minutes of audio or 650 MiB of data (printed as "MB" on the discs themselves, since binary prefixes have not caught on in the industry; "MB" is used in the remainder of this article). Non-standard CD-Rs are available with capacities of 80 minutes/703 MB, which they achieve by exceeding the tolerances specified in the Orange Book CD standards; most CD-Rs on the market are of this latter capacity. There are also 90-minute/790 MB and 99-minute/870 MB discs, though they are rare.

The polycarbonate disc contains a spiral groove to guide the laser beam when writing and reading information. The grooved side of the disc is coated with a very thin layer of a special dye, and subsequently with a thin reflecting layer of silver, a silver alloy, or gold. Finally, a protective coating of a photo-polymerizable lacquer is applied on top of the metal reflector and cured with UV irradiation.

A specially designed type of CD-ROM drive, called a CD-R drive, CD burner, or CD writer, can be used to write CD-Rs. A laser is used to etch ("burn") small pits into the dye so that the disc can later be read by the laser in a CD-ROM drive or CD player. The laser used to write CD-Rs is an infrared laser emitting at a wavelength of 780 nm. The reflectivity in a pit area is lower than that of the unchanged dye area, because the refractive index of the dye is lowered upon "burning" a pit. When reading back the stored information, the laser operates at a power low enough not to "burn" the dye, and an optical pick-up records the changes in the intensity of the reflected laser radiation as it scans along the groove and over the pits. The change in intensity of the reflected laser radiation is transformed into an electrical signal, from which the digital information is recovered ("decoded"). The decomposition of the dye in the pit area through the heat of the laser is irreversible (permanent). Therefore, once a section of a CD-R is written, it cannot be erased or rewritten, unlike a CD-RW. A CD-R can, however, be recorded in multiple sessions.

Brief history
The CD-R was invented in 1988 by the Japanese company Taiyo Yuden. The first CD-Rs were produced in 1994; among the first manufacturers were Taiyo Yuden, Kodak, Maxell, and TDK. Since then, the CD-R has been further improved to allow writing speeds as fast as 54x (as of 2004), relative to the first 1x CD-Rs. The improvements were mainly due to optimisation of special dye compositions for CD-R, groove geometry, and the dye coating process. Low-speed burning at 1x is still used for special "audio CD-Rs", since CD-R audio recorders were standardized to this recording speed.

There are three basic formulations of dye used in CD-Rs:
1. Cyanine dyes were the earliest ones developed, and their formulation is patented by Taiyo Yuden. Cyanine dyes are naturally green in color and chemically unstable. This makes cyanine discs unsuitable for archival use; they can fade and become unreadable in a few years. Many manufacturers use proprietary chemical additives to make more stable cyanine discs.
2. Azo dye CD-Rs are blue in color, and their formulation is patented by Mitsubishi Chemicals. Unlike cyanine, azo dyes are chemically stable, and are typically rated with a lifetime of decades.
3. Phthalocyanine dye CD-Rs are usually silver or gold. The patents on phthalocyanine CD-Rs are held by Mitsui and Ciba Specialty Chemicals. These are also chemically stable, and are often given a rated lifetime of hundreds of years.

Note that, unfortunately, many manufacturers add additional coloring to disguise their cyanine CD-Rs, so the formulation of a disc cannot be determined purely from its color. Similarly, a gold reflective layer does not guarantee use of phthalocyanine dye. Note also that rated CD-R lifetimes are estimates based on accelerated aging tests, and actual lifetime can vary considerably with storage conditions. For optimum lifespan, CD-Rs should be stored vertically to prevent warping, inside archival plastic cases that use a ridged ring around the spindle to grip the disc; this ridge prevents the surface of the disc from coming into contact with anything during storage. Discs should be stored in cool, dark conditions with controlled humidity. Avoid using any kind of label on the CD surface, and avoid printed inserts that use anything other than water-based inks.

Although the CD-R was initially developed in Japan, most CD-R production had moved to Taiwan by 1998. Taiwanese manufacturers supplied more than 70% of the worldwide production volume of 10.5 billion CD-Rs in 2003.

There was some incompatibility between CD-Rs and older CD-ROM drives, primarily due to the lower reflectivity of the CD-R disc. In general, CD drives marked as 8x or greater will read CD-R discs. Some DVD players will not read CD-Rs because of this change in reflectivity as well.

3. CD-RW

In computing and data storage, Compact Disc Rewritable, or CD-RW, is a rewritable version of CD-ROM. Whereas standard prerecorded compact discs have their information permanently stamped into an aluminium reflecting layer, CD-RW discs have a phase-change recording layer in addition to an aluminium reflecting layer. A laser beam can melt crystals in the recording layer into a non-crystalline amorphous phase, or anneal them slowly at a lower temperature back to the crystalline state. The different reflectance of the resulting areas makes them appear like the 'pits' and 'lands' of a standard CD.

A CD-RW drive can write about 700 MiB of data to CD-RW media an unlimited number of times. Most CD-RW drives can also write once to CD-R media. Except for the ability to completely erase a disc, CD-RWs act very much like CD-Rs and are subject to the same restrictions; i.e., they can be extended but not selectively overwritten, and must be closed before they can be read in a normal CD-ROM drive. A variation of UDF formatting allows CD-RWs to be randomly read and written, but limits the capacity to about 500 MB.

Note that, unlike CD-Rs, CD-RW discs are non-standard in that they do not meet the Orange Book standards for CDs. Hence CD-RW media cannot be read by CD-ROM drives built prior to 1997, due to the reduced reflectivity (15%, compared to 70%) of CD-RW media. CD-RW is also more expensive than CD-R, so CD-R is sometimes considered a better technology for archival purposes. The write-once nature of CD-Rs also ensures that data cannot be accidentally modified or tampered with, and encourages better archival practices. However, due to the crystalline recording layer of CD-RWs (as opposed to the organic dye used in CD-Rs), disc manufacturers claim longer durability and better data safety for CD-RWs.

4. DVD

DVD is an optical disc storage media format that is used for playback of movies with high video and sound quality and for storing data. DVDs are similar in appearance to compact discs.

History
During the early 1990s, two high-density optical storage standards were in development: one was the Multimedia Compact Disc (MMCD), backed by Philips and Sony, and the other was the Super Density Disc (SD), supported by Toshiba, Time-Warner, Matsushita Electric, Hitachi, Mitsubishi Electric, Pioneer, Thomson, and JVC. IBM led an effort to unite the various companies behind a single standard, anticipating a repeat of the costly format war between VHS and Betamax in the 1980s. The result was the DVD format, announced in September 1995. The official DVD specification was released as Version 1.0 in September 1996. It is maintained by the DVD Forum, formerly the DVD Consortium, consisting of the ten founding companies and over 220 additional members.

The first DVD players and discs were available in November 1996 in Japan and in March 1997 in the United States. By the northern spring of 1999, the price of a DVD player had dropped below the $300 mark, and Wal-Mart began to offer DVD players for sale in its stores. When Wal-Mart began selling DVDs, they represented only a small part of its video inventory, with VHS tapes of movies making up the remainder. As of 2004, the situation is reversed: most retail stores mainly offer DVDs for sale, and VHS copies of movies make up a minority of sales. The price of a DVD player has dropped below the level of a typical VCR; a low-end player can be purchased for as little as $40 in a number of retail stores.

In 2000, Sony released its PlayStation 2 console in Japan. In addition to playing video games developed for the system, it was also able to play DVD movies. In Japan this proved to be a huge selling point, because the PS2 was much cheaper than many of the DVD players available there. As a result, many electronics stores that normally didn't carry video game consoles carried PS2s. Following this tradition, Sony has decided to implement one of DVD's possible successors, Blu-ray, in its next PlayStation console, currently known as the PlayStation 3.

"DVD" was originally an initialism for "digital video disc"; some members of the DVD Forum believe that it should stand for "digital versatile disc", to indicate its potential for non-video applications. Toshiba, which maintains the official DVD Forum site, adheres to the interpretation "digital versatile disc". The DVD Forum never reached a consensus on the matter, however, and so today the official name of the format is simply "DVD"; the letters do not "officially" stand for anything.[1]

Technical information
A DVD can contain:

- DVD-Video (containing movies: video and sound)
- DVD-Audio (containing high-definition sound)
- DVD-Data (containing data)

The disc medium can be:

- DVD-ROM (read-only, manufactured by a press)
- DVD+R/RW (R = recordable once, RW = rewritable)
- DVD-R/RW (R = recordable once, RW = rewritable)
- DVD-RAM (random-access rewritable; after-write checking of data integrity is always active)

The disc may have one or two sides, and one or two layers of data per side; the number of sides and layers determines the disc capacity. As of 2004, the double-sided formats have almost disappeared from the marketplace.

- DVD-5: single-sided, single layer, 4.7 gigabytes (GB), or 4.38 gibibytes (GiB)
- DVD-9: single-sided, double layer, 8.5 GB (7.92 GiB)
- DVD-10: double-sided, single layer on both sides, 9.4 GB (8.75 GiB)
- DVD-14: double-sided, double layer on one side, single layer on the other, 13.2 GB (12.3 GiB)
- DVD-18: double-sided, double layer on both sides, 17.1 GB (15.9 GiB)
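The GiB figures in that list are just the decimal capacities re-expressed in binary units, which can be checked directly:

```python
# Convert the decimal (GB) capacities to binary (GiB) figures.
GIB = 2**30
for name, gb in [("DVD-5", 4.7), ("DVD-9", 8.5), ("DVD-10", 9.4),
                 ("DVD-14", 13.2), ("DVD-18", 17.1)]:
    print(f"{name}: {gb} GB = {gb * 1e9 / GIB:.2f} GiB")
# DVD-5 works out to 4.38 GiB and DVD-9 to 7.92 GiB; the larger
# figures (12.29, 15.93) are rounded to one decimal in the text.
```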

The capacity of a DVD-ROM can be visually determined by noting the number of data sides and looking at the data side(s) of the disc: double-layered sides are usually gold-colored, while single-layered sides are silver-colored, like a CD. Each medium can contain any of the above content and can be of any layer type (double-layer DVD-R is announced for 2004, while double-layer DVD+R discs are already on the market, though scarce and expensive).

The DVD Forum created the official DVD-R(W) standards. But as the licensing cost for this technology is very high, another group was founded: the DVD+RW Alliance, which created the DVD+R(W) standard with lower licensing costs. At first, DVD+R(W) media were typically more expensive than DVD-R(W) media, but the prices have become very comparable. The "+" (plus) and "-" (dash) are two similar technical standards that are partially compatible. As of 2004, both formats are equally popular, with about half of the industry supporting "+" and the other half "-". It is open to debate whether either format will push the other out of the market, or whether they will co-exist indefinitely. All DVD readers are supposed to read both formats (though real-world compatibility lies around 90% for both), and most current DVD writers can write both formats.

Unlike compact discs, where sound (CDDA, Red Book) is stored in a fundamentally different fashion than data (Yellow Book et al.), a properly authored DVD will always contain data in the UDF filesystem.

The data transfer rate of a DVD drive is given in multiples of 1350 kB/s, which means that a drive with a 16x speed designation allows a data transfer rate of 16 x 1350 = 21600 kB/s (21.09 MB/s). As CD drive speeds are given in multiples of 150 kB/s, one DVD "speed" equals nine CD "speeds"; i.e., an 8x DVD drive has a data transfer rate similar to that of a 72x CD drive.
In physical rotation terms (revolutions per second), one DVD "speed" equals three CD "speeds": the amount of data read during one rotation is three times larger for DVD than for CD, so an 8x DVD drive has the same rotational speed as a 24x CD drive. Note that both CD and DVD discs and drives usually have a constant rotational speed while reading, and the data density along the track is also constant; since the linear track speed (metres per second) grows at the outer parts of the disc in proportion to the radius, the maximum data rate specified for the drive/disc is achieved only at the end of the disc's track (discs are written from the inside out). The average speed of the drive therefore equals only about 50-70% of the maximum nominal speed.
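The speed arithmetic above can be sketched directly (the helper name is invented for illustration):

```python
CD_1X_KBPS = 150     # one CD "speed" in kB/s
DVD_1X_KBPS = 1350   # one DVD "speed" in kB/s

def dvd_rate_kbps(x):
    """Data transfer rate of an x-speed DVD drive in kB/s."""
    return x * DVD_1X_KBPS

print(dvd_rate_kbps(16))           # 21600
print(DVD_1X_KBPS // CD_1X_KBPS)   # 9 -> one DVD speed = nine CD speeds
print(dvd_rate_kbps(16) / 1024)    # 21.09375 (binary MB/s)
```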

DVD-Video
DVD-Video discs require a DVD drive with an MPEG-2 decoder (e.g. a DVD player, or a computer DVD drive with a software DVD player). Commercial DVD movies are encoded using a combination of MPEG-2 compressed video and audio of varying formats (often multi-channel formats, as described below). Typical data rates for DVD movies range from 3 to 10 Mbit/s, and the bitrate is usually adaptive. A large number of audio tracks and/or lots of extra material on the disc will usually result in a lower bitrate (and lower image quality) for the main feature.

The audio data on a DVD movie can be in PCM, DTS, MPEG audio, or Dolby Digital (AC-3) format. In countries using the NTSC standard, any movie should contain a sound track in at least one of the PCM or Dolby AC-3 formats, and any NTSC player must support these two; all the others are optional. This ensures that any standard-compatible disc can be played on any standard-compatible player. The vast majority of commercial NTSC releases today employ AC-3 audio. Initially, in countries using the PAL standard (e.g. most of Europe), the sound of DVD was supposed to be standardized on PCM and MPEG-2 audio, but apparently against the wishes of Philips, under public pressure, on December 5, 1997 the DVD Forum accepted the addition of Dolby AC-3 to the optional formats on discs and the mandatory formats in players. The vast majority of commercial PAL releases now employ AC-3 audio.

DVDs can contain more than one channel of audio to go together with the video content. In many cases, sound tracks in more than one language are present (for example, the film's original language as well as a track dubbed into the language of the country where the disc is being sold). With several channels of audio coming from the DVD, the cabling needed to carry the signal to an amplifier or TV can occasionally be somewhat frustrating. Most systems include an optional digital connector for this task, which is then paired with a similar input on the amplifier.
The selected audio signal is sent over the connection, typically over RCA jacks or TOSLINK, in its original format, to be decoded by the audio equipment. When playing compact discs, the signal is sent in S/PDIF format instead.

Video is another issue which continues to present problems. Current players typically output analog video only, both composite video on an RCA jack and S-Video on the standard connector. Neither of these connectors was intended for progressive video, so yet another set of connectors has started to appear, in the form of component video, which keeps the three components of the video (one luminance signal and two color-difference signals, as stored on the DVD itself) on fully separate wires; by contrast, S-Video uses two wires, uniting and degrading the two color signals, and composite uses only one, uniting and degrading all three signals. The connectors are further confused by the use of a number of different physical connectors on different player models (RCA or BNC), as well as the use of VGA cables in a non-standard way (VGA is normally analog RGB, not component). Worse, there are often two sets of component outputs, one carrying interlaced video and the other progressive. In Europe and other PAL areas, SCART connectors are typically used, which carry both composite and analog RGB interlaced video signals, as well as analog two-channel sound, on a single multiwire cable; they offer a reasonable compromise between video quality (superior to S-Video, though inferior to progressive component video) and cost.

DVD-Video may also include one or more subtitle tracks in various languages, including tracks made especially for the hearing impaired. They are stored as images with a transparent background, which are overlaid over the video during playback. Subtitles are restricted to four colors (including transparency) and thus tend to look cruder than permanent subtitles on film. DVD-Video may contain chapters for easy navigation (and continuation of a partially watched film). If space permits, it is also possible to include several versions (called "angles") of certain scenes, though today this feature is mostly used, if at all, not to show different angles of the action, but as part of internationalization, e.g. to show different-language versions of images containing written text where subtitles won't do.

A major selling point of DVD-Video is that its storage capacity allows for a wide variety of extra features in addition to the feature film itself. These can include audio commentary timed to the film sequence, documentary features, unused footage, trivia text commentary, simple games, and film shorts.

Security
Most DVD-Video titles use Content Scrambling System (CSS) encryption, which is intended to discourage people from making perfect digital copies to another medium or from bypassing the region control mechanism (see below). Discs can also specify that the player use Macrovision, an analog anti-copying mechanism that prevents the consumer from copying the video onto a VCR tape by using a deliberately defective signal, which may also cause problems for some projection TVs as well as older television models. This alone would not prevent the duplication of DVDs in their entirety without decrypting the data, given suitable equipment, although "consumer-grade" DVD writers deny this ability by refusing to duplicate the tracks on the disc which contain the decryption keys.

The CSS system has caused problems for the inclusion of DVD players in strictly open-source operating systems, since open-source player implementations cannot officially obtain access to the decryption keys or license the patents involved in the CSS system. Proprietary software players may also be difficult to find on some platforms. However, at least one successful effort has been made to write a decoder by reverse engineering, resulting in DeCSS. This has led to long-running legal battles, and to the arrest of some of those involved in creating or distributing the DeCSS code, through the use of the U.S. Digital Millennium Copyright Act, on the grounds that such software could also be used to facilitate unauthorized copying of the data on the discs.

Region codes
DVD movies can contain a region code, denoting which area of the world the disc is targeted at; this is completely independent of encryption. The commercial DVD-video player specification dictates that players must only play discs that contain their region code. This allows the film studios to set different retail prices in different markets and extract the maximum possible price from consumers. With region coding, studios can dictate release schedules and prices around the world. However, many DVD players allow playback of any disc, or can be modified to do so. Region coding is a form of regional lockout, which originated in the video game industry.

Region 0: playable in all regions
Region 1: United States, Canada, and U.S. territories
Region 2: Western Europe, Greenland, South Africa, Lesotho, Swaziland, Japan, Egypt, and the Middle East
Region 3: Southeast Asia, South Korea, Hong Kong, Macau, Indonesia, Philippines, Taiwan
Region 4: Australia, New Zealand, Oceania, Mexico, Central America, South America
Region 5: Russia, other former Soviet Union countries, Eastern Europe, the Indian subcontinent, Mongolia, North Korea, the rest of Africa
Region 6: People's Republic of China
Region 7: reserved for future use
Region 8: international venues such as aircraft, cruise ships, etc.

European Region 2 DVDs may be sub-coded D1 through D4. "D1" identifies a UK-only release; "D2" and "D3" identify European DVDs that are not sold in the UK and the Republic of Ireland; "D4" identifies DVDs that are distributed throughout Europe. Region 0 designates no actual region, but is used as shorthand for a disc meant to be playable on all players; on such a disc, the actual region coding is R1/2/3/4/5/6. In the early days, region 0 players were created that would allow a disc from any region to be played in them, but studios responded by adjusting regioned discs to refuse to play if the player was determined to be region 0 (since no player should be anyway). This system is known as Regional Coding Enhancement, or RCE. Many view region code enforcement as a violation of WTO free trade agreements; however, no legal rulings have yet been made in this area. Many manufacturers of DVD players now freely supply information on how to disable the region code checking, and on some recent models it appears to be disabled by default.
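The playback rules described above can be summarized as a small, hypothetical check. The function and its arguments are invented for illustration; real players implement this logic in firmware:

```python
def can_play(player_region, disc_regions, rce=False):
    """Sketch of DVD region logic.
    player_region: 0 for an all-region player, otherwise 1-8.
    disc_regions: set of region codes the disc declares; a "region 0"
    disc effectively declares all of regions 1-6.
    rce: Regional Coding Enhancement -- the disc refuses players
    that identify themselves as region 0."""
    if player_region == 0:
        return not rce          # RCE discs reject all-region players
    return player_region in disc_regions

print(can_play(1, {1}))               # True:  region 1 disc, region 1 player
print(can_play(2, {1}))               # False: wrong region
print(can_play(4, set(range(1, 7))))  # True:  "region 0" disc plays anywhere
print(can_play(0, {1}, rce=True))     # False: RCE disc vs all-region player
```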


DVD-Audio

The Flaming Lips' CD and DVD release of Yoshimi Battles the Pink Robots was an early album to employ the DVD-Audio format.

DVD-Audio is a new format for delivering high-fidelity audio content on a DVD. It offers many channel configurations (from mono to 5.1 surround sound) at various sampling frequencies and sample sizes. Audio on a disc can be 16-, 20-, or 24-bit, at sampling rates of 44.1, 48, 88.2, 96, 176.4, or 192 kHz (the highest rates, 176.4 and 192 kHz, are limited to stereo only). In addition, different sample sizes and frequencies can be used on a single disc. Audio is stored on the disc in LPCM format, or losslessly compressed with Meridian Lossless Packing. The DVD-Audio player may downmix surround sound to stereo if the listener does not have surround sound. DVD-Audio may also feature menus, still images, slideshows, and video. DVD-Audio discs usually also contain Dolby Digital or DTS versions of the audio (with lossy compression, usually downsampled to lower sample sizes and frequencies) in the DVD-Video section; this is done to ensure compatibility with DVD-Video players.

The introduction of the DVD-Audio format angered many early adopters of the DVD format. While DVD-Audio discs do have higher fidelity, there is debate as to whether the difference is distinguishable to typical human ears. DVD-Audio currently forms a niche market, probably because it requires new and rather expensive equipment. DVD-Audio is currently (as of 2003) in a format war with SACD. Most market observers believe the winner of the war will eventually supplant the Compact Disc due to its superior playback capabilities, unless a new and superior format takes over from either.
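To see why lossless compression such as Meridian Lossless Packing matters, it helps to compute the raw LPCM data rates for the formats listed above (the arithmetic is simply sampling rate x bits x channels; the helper name is invented for illustration):

```python
def lpcm_rate_mbps(rate_hz, bits, channels):
    """Uncompressed LPCM data rate in Mbit/s."""
    return rate_hz * bits * channels / 1e6

# 5.1 surround (6 channels) at 24-bit/96 kHz:
print(lpcm_rate_mbps(96000, 24, 6))   # 13.824
# Stereo at 24-bit/192 kHz (rates above 96 kHz are stereo-only):
print(lpcm_rate_mbps(192000, 24, 2))  # 9.216
```

At nearly 14 Mbit/s, raw 24-bit/96 kHz 5.1 audio exceeds even the top of the 3 to 10 Mbit/s range quoted earlier for whole DVD movies, which is why lossless packing is used.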

DVD types

A DVD-RAM is easily recognized due to the numerous rectangles on its surface.

DVD-ROM discs are pressed similarly to CDs. The reflective surface is silver or gold coloured. They can be single-sided single-layered, single-sided double-layered, double-sided single-layered, or double-sided double-layered. As of 2004, new double-sided discs have become quite rare.

DVD recorders started to become available in Japan during 1999, and in the rest of the world soon after, with a familiar battle for format dominance beginning. As with the adoption of USB, Apple Computer was one of the early adopters of the technology. DVD recorders require a special unit to write and can use 1 or 2 disc sides (the disc capacity is measured in GB/side):
o DVD-R discs can record up to 4.7 GB in a similar fashion to a CD-R disc. Once recorded and finalized, a disc can be played by most DVD-ROM players. Supported by the DVD Forum.
o DVD-RW discs can record up to 4.7 GB in a similar fashion to a CD-RW disc. Supported by the DVD Forum.
o DVD-RAM discs (current specification is version 2.1) require a special unit to play 4.7 or 9.4 GB recorded discs (DVD-RAM discs are typically housed in a cartridge). 2.6 GB discs can be removed from their caddy and used in DVD-ROM drives. Top capacity is 9.4 GB (4.7 GB/side). Supported by the DVD Forum.
o DVD+R discs can record up to 4.7 GB on a single-layered, single-sided disc, currently at up to 16x speed. Like DVD-R, they can be recorded only once. Supported by the DVD+RW Alliance.
o DVD+RW discs can record up to 4.7 GB at up to 4x speed. Since the format is rewritable, a disc can be overwritten several times. It does not need special "pre-pits" or finalization to be played in a DVD player. Supported by the DVD+RW Alliance.
o DVD+R DL is a derivative of DVD+R that uses dual-layer recordable discs to store up to 8.5 GB of data. Supported by the DVD+RW Alliance.

All above formats are also available as 8 cm (3 inch) sized DVD mini discs (not mini-DVD, which describes DVD data on a CD) with a disc capacity of 1.5 GB.
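A side note on the capacities quoted above: disc capacities are stated in decimal gigabytes, while operating systems often report binary gigabytes (gibibytes), so the same disc appears smaller on screen. A quick sketch of the arithmetic:

```python
GIB = 2 ** 30  # one binary gigabyte (gibibyte), 1,073,741,824 bytes

# A "4.7 GB" single-layer disc, as an OS using binary units would report it:
single_layer = 4.7e9 / GIB   # ~4.38

# An "8.5 GB" dual-layer DVD+R DL:
dual_layer = 8.5e9 / GIB     # ~7.92
```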

DVD players and recorders

Modern recorders often support additional formats, including DVD+/-R/RW, CD-R/RW, MP3, SVCD, JPEG, PNG, SVG, KAR and MPEG-4 (DivX/XviD). Some also include USB ports or flash memory readers. Many are priced at under $100.

Competitors and successors

There are two successors to DVD being developed by two different consortiums: the Blu-ray Disc and HD-DVD. On November 18, 2003, the Chinese news agency Xinhua reported the final standard of the Chinese government-sponsored Enhanced Versatile Disc (EVD) and several patents around it. On November 19, 2003, the DVD Forum decided by a vote of eight to six that HD-DVD would be the HDTV successor to the DVD.


A floppy disk is a data storage device that comprises a circular piece of thin, flexible (hence "floppy") magnetic storage medium encased in a square or rectangular plastic wallet. Floppy disks are read and written by a floppy disk drive, or FDD, not to be confused with "fixed disk drive", which is an old IBM term for a hard disk drive.

Contents [hide] 1 Background 2 History 2.1 Origins, the 8-inch disk 2.2 The 5¼-inch minifloppy 2.3 The 3½-inch Micro Floppy Diskette 2.4 The 3-inch Compact Floppy Disk 3 Structure 4 Compatibility 5 More on floppy disk formats 5.1 Using the disk space efficiently 5.2 The Commodore 128 5.3 The Commodore Amiga 5.4 The Acorn Archimedes 5.5 12-inch floppy disks 5.6 4-inch floppies 5.7 Auto-loaders 5.8 Floppy mass storage 5.9 2-inch floppy disks 5.10 Ultimate capacity, speed 6 Usability 7 The floppy as a metaphor 8 Floppy disk/drive trivia 9 See also 10 References 11 External links

Disk type           Year   Capacity
8-inch              1971   80 kB
8-inch              1973   256 kB
8-inch              1974   800 kB
8-inch dual-sided   1975   1 MB
5¼-inch             1976   110 kB
5¼-inch DD          1978   360 kB
5¼-inch QD          1984   1.2 MB
3-inch              1984?  320 kB
3½-inch             1984   720 kB
3½-inch HD          1987   1.44 MB

Historical sequence of floppy-disk formats, ending with the last format (3½-inch HD) to be ubiquitously adopted.

Floppy disks, also known as floppies or diskettes (a name chosen in order to be similar to the word "cassette"), were ubiquitous in the 1980s and 1990s, being used on home and personal computer ("PC") platforms such as the Apple II, Macintosh, Commodore 64, Amiga, and IBM PC to distribute software, transfer data between computers, and create small backups. Before the popularization of the hard drive for PCs, floppy disks were often used to store a computer's operating system (OS), application software, and other data. Many home computers had their primary OS kernels stored permanently in on-board ROM chips, but stored the disk operating system on a floppy, whether it be a proprietary system, CP/M, or, later, DOS.

By the early 1990s, the increasing size of software meant that many programs were distributed on sets of floppies. Toward the end of the 1990s, software distribution gradually switched to CD-ROM, and higher-density backup formats were introduced (e.g., the Iomega Zip disk). With the arrival of mass Internet access, cheap Ethernet, and USB "keydrives", the floppy was no longer necessary for data transfer either, and the floppy disk was essentially superseded. Mass backups were now made to high-capacity tape drives such as DAT or streamers, or written to CDs or DVDs.

One attempt, unsuccessful in the marketplace, to continue the floppy in the late 1990s was the SuperDisk (LS-120), with a capacity of 120 MB; the drive was backward compatible with standard 3½-inch floppies. Nonetheless, manufacturers were reluctant to remove the floppy drive from their PCs, for backward compatibility, and because many companies' IT departments appreciated a built-in file transfer mechanism that always worked and required no device driver to operate properly. Apple Computer was the first mass-market computer manufacturer to drop the floppy drive altogether, with the release of their iMac model in 1998, and Dell made the floppy drive optional in some models starting in 2003. To date, though, these moves have still not marked the end of the floppy disk as a mainstream means of data storage and exchange. External USB-based floppy disk drives are available for computers without floppy drives, and they work on any machine that supports USB.

Floppy disks are almost universally referred to in imperial measurements, even in countries where metric is the standard. [Note: Throughout this article, "K" is used to indicate the "binary kilo" (1,024).]

Origins, the 8-inch disk

An 8-inch floppy disk looks much like a larger 5¼-inch disk (shown), with a partly exposed magnetic medium spun about a central hub for reading. The flexible plastic cover contains a cloth inner liner to brush dust from the medium.

In 1967 IBM gave their San Jose, California storage development center a new task: develop a simple and inexpensive system for loading microcode into their System/370 mainframes. The 370s were the first IBM machines to use semiconductor memory, and whenever the power was turned off the microcode had to be reloaded ('magnetic core' memory, used in the 370s' predecessors, the System/360 line, did not lose its contents when powered down). Normally this task would be left to the various tape drives which almost all 370 systems included, but tapes were large and slow. IBM wanted something faster and more purpose-built that could also be used to send out updates to customers for $5.

David Noble, working under the direction of Alan Shugart, tried a number of existing solutions to see if he could develop a new-style tape for the purpose, but eventually gave up and started over. The result was a read-only, 8-inch (20 cm) floppy they called the "memory disk", holding 80 kilobytes (KB). The original versions were simply the disk itself, but dirt became a serious problem, so the disk was enclosed in a plastic envelope lined with fabric that would pick up the dirt. The new device became a standard part of the 370 in 1971. A Japanese inventor, Yoshiro Nakamatsu (aka Dr. NakaMats), claims he independently came up with the floppy disk principle back in 1950, and so a sales license had to be acquired by IBM when they started manufacturing their floppy disk systems twenty years later.

In 1973 IBM released a new version of the floppy, this time on the 3740 Data Entry System. The new system used a different recording format that stored up to 256 KB on the same disks, and was read-write. These drives became common, and soon were being used to move smaller amounts of data around, almost completely replacing magnetic tapes.

When the first microcomputers were being developed in the 1970s, the 8-inch floppy found a place on them as one of the few "high speed" mass storage devices that were even remotely affordable to the target market (individuals and small businesses). The first microcomputer operating system, CP/M, originally shipped on 8-inch disks. However, the drives were still expensive, typically costing more than the computer they were attached to in the early days, so most machines of the era used cassette tape instead.
This began to change with the acceptance of the first standard for the floppy disk, Ecma International-59, authored by Jim O'Reilly of Burroughs, Helmuth Hack of BASF, and others. O'Reilly set a record for maneuvering this document through ECMA's approval process, with the standard sub-committee being formed in one meeting of ECMA and a draft standard approved at the next meeting three months later. This standard later formed the basis for the ANSI standard, too. Standardization brought together a variety of competitors to make media to a single interchangeable standard, and allowed rapid quality and cost improvement.

By this time Alan Shugart had left IBM, moved to Memorex for a brief time, and then moved again in 1973 to found Shugart Associates. They started working on improvements to the existing 8-inch format, eventually creating a new 800 KB system. However, profits were hard to find, and in 1974 he was forced out of his own company.

Burroughs Corporation was meanwhile developing a high-performance dual-sided 8-inch drive at their Glenrothes, Scotland, factory. With a capacity of 1 MB, this unit offered four times the capacity of IBM's drive, and was able to provide enough space to run all the software and store the data on Burroughs' new B80 data entry system, which incidentally had the first VLSI disk controller in the industry. The dual-sided 1 MB floppy entered production in 1975, but was plagued by an industry-wide problem: poor media quality. There were few tools available to test media for 'bit shift' on the inner tracks, which made for high error rates. The result was a substantial investment by Burroughs in a media tester design that they then gave to media makers as a quality control tool, leading to a vast improvement in yields.

The 5¼-inch minifloppy

In 1975, Burroughs' plant in Glenrothes developed a prototype 5¼-inch drive, stimulated both by the need to overcome the larger 8-inch floppy's asymmetric expansion with changing humidity, and by the knowledge that IBM's audio recording products division was demonstrating a dictation machine using 5¼-inch disks. In one of the industry's historic gaffes, Burroughs corporate management decided it would be "too inexpensive" to make enough money, and shelved the program.

In 1976 one of Shugart Associates' employees, Jim Adkisson, was approached by An Wang of Wang Laboratories, who felt that the 8-inch format was simply too large for the desktop word-processing machines he was developing at the time. After meeting in a bar in Boston, Adkisson asked Wang what size he thought the disks should be, and Wang pointed to a napkin and said "about that size". Adkisson took the napkin back to California, found it to be 5¼ inches (13 cm) wide, and developed a new drive of this size storing 110 KB.

The 5¼-inch drive was considerably less expensive than 8-inch drives from IBM, and soon started appearing on CP/M machines. At one point Shugart Associates was producing 4,000 drives a day. By 1978 there were more than 10 manufacturers producing 5¼-inch floppy drives, and the format quickly displaced the 8-inch from most applications. These early drives read only one side of the disk, leading to the popular budget approach of cutting a second write-enable slot and index hole into the carrier envelope and flipping it over (thus, the "flippy disk") to use the other side for additional storage. Tandon introduced a double-sided drive in 1978, doubling the capacity, and a new "double density" format increased it again, to 360 KB. For most of the 1970s and 1980s the floppy drive was the primary storage device for microcomputers.
Since these micros had no hard drive, the OS was usually booted from one floppy disk, which was then removed and replaced by another containing the application. Some machines with two disk drives (or one dual drive) allowed the user to leave the OS disk in place and simply change the application disks as needed.

In the early 1980s, 96 track-per-inch drives appeared, increasing the capacity from 360 to 720 KB. These did not see widespread use, as they were not supported by IBM in its PCs. In 1984, along with the IBM PC/AT, the quad density disk appeared, which used 96 tracks per inch combined with higher-density magnetic media to provide 1.2 megabytes (MB) of storage. Since the usual (very expensive) hard disk held 10 to 20 megabytes at the time, this was considered quite spacious.

By the end of the 1980s, the 5¼-inch disks had been superseded by the 3½-inch disks. Though 5¼-inch drives and disks were still available, they faded in popularity as the 1990s began. On most new computers the 5¼-inch drives were optional equipment. By the mid-1990s the drives had virtually disappeared as the 3½-inch disk became the pre-eminent floppy disk.

The 3½-inch Micro Floppy Diskette

The non-ferromagnetic metal sliding door protects the 3½-inch floppy disk's recording medium.

Throughout the early 1980s the limitations of the 5¼-inch format were starting to become clear as machines grew in power. A number of solutions were developed, with drives at 2-inch, 2½-inch, 3-inch and 3½-inch (50, 60, 75 and 90 mm) all being offered by various companies. They all shared a number of advantages over the older format, including a small form factor and a rigid case with a slidable write-protect catch.

Amstrad incorporated a 3-inch 180 KB single-sided disk drive into their CPC and PCW lines, and this format and drive mechanism were later "inherited" by the ZX Spectrum +3 computer after Amstrad bought Sinclair Research. Later models of the PCW featured double-sided, quad-density drives, while all 3-inch media were double-sided in nature, with single-sided drive owners able to flip the disk over to use the other side. Media in this format remained expensive, and it never caught on, with only three manufacturers producing media: Amstrad, Tatung and Maxell.

Things changed dramatically in 1984 when Apple Computer selected the Sony 90.0 × 94.0 mm format for their Macintosh computers, thereby forcing it to become the standard format in the United States. (This is yet another example of a "silent" change from metric to imperial units; the product was advertised as, and became popularly known as, the 3½-inch disk, emphasizing that it was smaller than the existing 5¼-inch.) The first computer to use this format was the HP 150 of 1983. By 1989 the 3½-inch was outselling the 5¼-inch.

The 3½-inch disks had, by way of their rigid case's slide-in-place metal cover, the significant advantage of being much better protected against unintended physical contact with the disk surface when the disk was handled outside the disk drive.
When the disk was inserted, a part inside the drive moved the metal cover aside, giving the drive's read/write heads the necessary access to the magnetic recording surfaces. (Adding the slide mechanism resulted in a slight departure from the previous square outline. The rectangular shape had the additional merit of making it impossible to insert the disk sideways by mistake, as had indeed been possible with earlier formats.)

Like the 5¼-inch, the 3½-inch disk underwent an evolution of its own. The disks were originally offered in 360 KB single-sided and 720 KB double-sided double-density formats (the same capacities as the then-current 5¼-inch disks). A newer "high-density" format, marked "HD" on the disks themselves and storing 1.4 MB of data, was introduced in the mid-1980s. IBM used it on their PS/2 series introduced in 1987; Apple started using it in 1988, on the Macintosh IIx. Another advance in the oxide coatings allowed for a new "extended-density" ("ED") format at 2.88 MB, introduced on the second-generation NeXT computers in 1991, but by the time it was available it was already too small to be a useful advance over 1.4 MB, and it never became widely used. The 3½-inch drives sold more than a decade later still used the same format that was standardized in 1989, in ISO 9529-1,2.

Not long after the 2.88 MB format was declared dead on arrival by the market, it became obvious that users needed to move around ever-increasing amounts of data. A number of products surfaced, but only a few maintained any level of backward compatibility with 3½-inch disks. Insite Peripherals' "Floptical" was first off the blocks, offering 20, 40 and ultimately 80 MB devices that could still read and write 1.4 MB disks. However, the drives did not connect to a normal floppy disk controller, meaning that many older PCs were unable to boot from a disk in a Floptical drive. This adversely affected adoption rates.

Announced in 1995, the "SuperDisk" drive, often seen under the brand names Matsushita (Panasonic) and Imation, had an initial capacity of 120 MB, subsequently upgraded to 240 MB. Not only could the drive read and write 1.4 MB disks, but the last versions of the drives could write 32 MB onto a normal 1.4 MB disk (see note below). Unfortunately, popular opinion held the SuperDisk media to be quite unreliable, though no more so than the Zip drive and SyQuest Technology offerings of the same period. This, true or otherwise, crippled adoption.

Thus 3½-inch disks are still widely available. As of 2004, 3½-inch drives are still common equipment on most new PCs; on others, they are either optional equipment or can be purchased as after-market equipment.
However, with the advent of other portable storage options, such as Zip disks, USB storage devices, and (re)writable CDs, the 3½-inch disk is becoming increasingly obsolete. Some manufacturers have stopped offering 3½-inch drives on new computers as standard equipment. The Apple Macintosh, which popularized the format in 1984, began to move away from it in 1998 with the iMac model, possibly prematurely, since the basic iMac of the time had only a CD-ROM drive, giving users no easy access to removable media. This made USB-connected floppy drives a popular accessory for the early iMacs.

The formatted capacity of 3½-inch high-density floppies was originally 1440 kibibytes (KiB), or 1,474,560 bytes. This is equivalent to 1.41 MiB (1.47 MB decimal). However, their capacity is usually reported as 1.44 MB by diskette manufacturers.

In some places, especially South Africa, 3½-inch floppy disks have commonly been called stiffies or stiffy disks because of their "stiff" (rigid) cases, which contrast with the flexible "floppy" cases of 5¼-inch floppies.
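The capacity figures above can be checked with a little arithmetic; the familiar "1.44 MB" label is the result of mixing a binary kilo with a decimal mega. A minimal sketch, restating only numbers given in the text:

```python
# bytes/sector x sectors/track x tracks/side x sides, for the HD format
capacity = 512 * 18 * 80 * 2
assert capacity == 1_474_560

kib = capacity / 1024               # 1440.0 KiB
mib = capacity / (1024 * 1024)      # ~1.41 MiB
decimal_mb = capacity / 1_000_000   # ~1.47 MB (decimal)

# The manufacturers' figure: binary kilobytes divided by a decimal thousand.
marketing = kib / 1000              # 1.44 "MB"
```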

The 3-inch Compact Floppy Disk

A now unused semi-proprietary format, the 3-inch Compact Floppy was used mainly on the Amstrad CPC, PCW and ZX Spectrum computers while those machines were still supported, as well as on a number of exotic and obscure CP/M systems such as the Einstein and Osborne computers, and occasionally on MSX systems in some regions. The format held no more data than the more popular (and cheaper) 5¼-inch floppies, but was more reliable thanks to its hard casing. Its main problems were high prices, due to the quite elaborate and complex case mechanism, low nominal capacities, and being bound to specifically designed drives that were very hard to repair or replace. Eventually, the format died out along with the computer systems that used it.


A user inserts the floppy disk, medium opening first, into a 5¼-inch floppy disk drive (pictured, an internal model) and moves the lever down (by twisting, on this model) to close the drive and engage the motor and heads with the disk.

The 5¼-inch disk had a large circular hole in the center for the spindle of the drive and a small oval aperture in both sides of the plastic to allow the heads of the drive to read and write the data. The magnetic medium could be spun by rotating it from the middle hole. A small notch on the right-hand side of the disk identified whether the disk was read-only or writable, detected by a mechanical switch or phototransistor above it. Another LED/phototransistor pair located near the center of the disk could detect a small hole in the magnetic disk, called the index hole, once per rotation. It was used to detect the start of each track, and whether or not the disk was rotating at the correct speed; some operating systems, such as Apple DOS, did not use index sync, and the drives designed for such systems often lacked the index hole sensor. Disks of this type were said to be soft-sector disks. Very early 8-inch and 5¼-inch disks also had physical holes for each sector, and were termed hard-sector disks.

Inside the disk were two layers of fabric designed to reduce friction between the medium and the outer casing, with the medium sandwiched in the middle. The outer casing was usually a one-part sheet, folded double with flaps glued or spot-melted together. A catch was lowered into position in front of the drive to prevent the disk from emerging, as well as to raise or lower the spindle.

The 3½-inch disk is made of two pieces of rigid plastic, with the fabric-medium-fabric sandwich in the middle. The front has only a label and a small aperture for reading and writing data, protected by a spring-loaded metal cover, which is pushed back on entry into the drive.

The 3½-inch floppy disk drive engages automatically when the user inserts a disk, and disengages and ejects with the press of a button, or by motor on the Apple Macintosh. The reverse side has a similar covered aperture, as well as a hole to allow the spindle to connect to a metal plate glued to the medium. Two holes, bottom left and right, indicate the write-protect status and high-density status respectively: an open hole means protected or high density, and a covered gap means write-enabled or low density. (Incidentally, the write-protect and high-density holes on a 3½-inch disk are spaced exactly as far apart as the holes in punched A4 paper (8 cm), allowing write-protected floppies to be clipped into European ring binders.) A notch at top right ensures that the disk is inserted correctly, and an arrow at top left indicates the direction of insertion.

The drive usually has a button that, when pressed, springs the disk out with varying degrees of force. Some disks would barely make it out of the drive; others would shoot out at fairly high speed. In a majority of drives, the ejection force is provided by the spring that holds the cover shut, and therefore the ejection speed depends on this spring.

In PC-type machines, a floppy disk can be inserted or ejected manually at any time (evoking an error message or even losing data in some cases), as the drive is not continuously monitored for status, and so programs can make assumptions that do not match the actual status (i.e., that a disk is still in the drive and has not been altered by any other agency). With Apple Macintosh computers, disk drives are continuously monitored by the OS; an inserted disk is automatically searched for content, and a disk is ejected only when the software agrees it should be. This kind of disk drive (starting with the slim "Twiggy" drives of the late Apple Lisa) does not have an eject button, but uses a motorized mechanism to eject disks; this action is triggered by the OS software (e.g. when the user drags the "drive" icon to the "trash can" icon). Should this not work (as in the case of a power failure or drive malfunction), one can insert a straightened paper clip into a small hole at the drive's front, thereby forcing the disk to eject (similar to the arrangement found on CD/DVD drives).

The 3-inch disk bears a lot of similarity to the 3½-inch type, with some unique and somewhat curious features. One example is the rectangular plastic casing, taller than a 3½-inch disk but narrower, and more than twice as thick, almost the size of a standard compact audio cassette. This made the disk look more like a greatly oversized present-day memory card or a standard PCMCIA notebook expansion card than a floppy disk. Despite the size, the actual 3-inch magnetic-coated disk occupied less than 50 per cent of the space inside the casing, the rest being used by the complex protection and sealing mechanisms implemented on the disks. These mechanisms were largely responsible for the thickness, length and high cost of the 3-inch disks. On the Amstrad machines the disks were typically flipped over to use both sides, as opposed to being truly double-sided. Double-sided mechanisms were available, but rare.

The three physical sizes of floppy disks are incompatible, and disks can only be loaded into the correct size of drive. Some drives with both 3½-inch and 5¼-inch slots were available, and were popular in the transition period between the sizes. However, there are many more subtle incompatibilities within each form factor.

Consider, for example, the following Apple/IBM 'schism': Apple Macintosh computers can read, write and format IBM PC-format 3½-inch diskettes, provided suitable software is installed; however, many IBM-compatible computers use floppy disk drives that are unable to read (or write) Apple-format disks. For details, see the section "More on floppy disk formats".

Within the world of IBM-compatible computers, the three densities of 3½-inch floppy disks are partly compatible. Higher-density drives are built to read, write and even format lower-density media without problems, provided the correct media is used for the density selected. However, if a diskette is somehow formatted at the wrong density, the result is a substantial risk of data loss due to the magnetic mismatch between the oxide and the drive head's writing attempts.

The situation was even more complex with 5¼-inch diskettes. The head gap of a 1.2 MB drive is shorter than that of a 360 KB drive, but the drive will format, read and write 360 KB diskettes with apparent success. A blank 360 KB disk formatted and written on a 1.2 MB drive can be taken to a 360 KB drive without problems; similarly, a disk formatted on a 360 KB drive can be used on a 1.2 MB drive. But a disk written on a 360 KB drive and updated on a 1.2 MB drive becomes permanently unreadable on any 360 KB drive, owing to the incompatibility of the track widths. There are several other 'bad' scenarios.

Before the problems with head and track size, there was a period when just figuring out which side of a "single sided" diskette was the right side was a problem.
Both Radio Shack and Apple used 360 KB single-sided 5¼-inch disks, and both sold disks labeled "single sided" that were certified for use on only one side, even though they were in fact coated with magnetic material on both sides. The irony was that the disks would work on both Radio Shack and Apple machines, yet the Radio Shack TRS-80 Model I computers used one side and the Apple II machines used the other, regardless of whether there was software available that could make sense of the other format.

For quite a while in the 1980s, users could purchase a special tool called a "disk notcher" which allowed them to cut a second write-enable notch into these diskettes and thus use them as "flippies" (inserted either as intended or upside down): both sides could now be written on, and the data storage capacity was thereby practically doubled. Other users made do with a steady hand and a hole punch. To re-protect a disk side, one would simply place a piece of opaque tape over the notch or hole in question. These "flippy disk" procedures were followed by owners of practically every home computer with single-sided disk drives, and proper disk labels became quite important for such users.

More on floppy disk formats

Using the disk space efficiently
In general, data is written to floppy disks in sectors (angular blocks of the disk) and tracks (concentric rings at constant radius). For example, the HD format of 3½-inch floppy disks uses 512 bytes per sector, 18 sectors per track, 80 tracks per side and two sides, for a total of 1,474,560 bytes per disk. (Some disk controllers can vary these parameters at the user's request, increasing the amount of storage on the disk, although such formats may not be readable on machines with other controllers; for example, Microsoft applications were often shipped on 'Microsoft distribution format' disks, a hack that stored 1.68 MB on a 3½-inch floppy by formatting it with 21 sectors per track instead of 18, while the disks were still properly recognized by a standard controller.)

On the IBM PC, and also on the MSX, Atari ST, Amstrad CPC, and most other microcomputer platforms, disks are written using a Constant Angular Velocity (CAV), constant-sector-capacity format. This means that the disk spins at a constant speed and the sectors hold the same amount of information on each track regardless of radial location. This is not the most efficient way to use the disk surface, even with the drive electronics then available: because the sectors have a constant angular size, the 512 bytes in each sector are packed into a smaller length near the disk's center than near its edge. A better technique would be to increase the number of sectors per track toward the outer edge of the disk, from 18 to 30 for instance, thereby keeping constant the amount of physical disk space used to store each 512-byte sector. Apple implemented this solution in the early Macintosh computers by spinning the disk more slowly when the head was at the edge while keeping the data rate the same, allowing them to store 400 KB per side, amounting to an extra 80 KB on a double-sided disk.
This higher capacity came with a serious disadvantage, though; the format required a special drive mechanism and control circuitry not used by other manufacturers, meaning that Mac disks could not be read on any other computers. Apple eventually gave up on the format and used standard HD floppy drives on their later machines.
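To put numbers on the schemes just described, here is a small sketch. Note that the five-zone layout for the Apple format (five bands of 16 tracks at 12, 11, 10, 9 and 8 sectors per track) is the commonly documented one, not something stated above:

```python
SECTOR = 512
TRACKS = 80  # per side

# Constant angular velocity, double density: 9 sectors on every track.
pc_dd_side = SECTOR * 9 * TRACKS                   # 368,640 bytes = 360 KB/side

# Apple's variable-speed Macintosh format: more sectors on the outer tracks.
zones = [(16, 12), (16, 11), (16, 10), (16, 9), (16, 8)]  # (tracks, sectors/track)
mac_side = SECTOR * sum(t * s for t, s in zones)   # 409,600 bytes = 400 KB/side

# The "extra 80 KB on a double-sided disk" mentioned above:
extra_per_disk = 2 * (mac_side - pc_dd_side)       # 81,920 bytes = 80 KB
```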

The Commodore 128

The Commodore 128 used a special 3½-inch 800 KB disk format with its 1581 disk drive (which was compatible with all 8-bit CBM serial-bus machines). Commodore actually began its tradition of special disk formats with the 5¼-inch disk drives accompanying its PET/CBM, VIC-20 and C64 home computers, such as the 1540 and the better-known 1541 drives used with the latter two machines. These disk drives used Commodore's in-house Group Code Recording, with up to four different data rates according to track position. Eventually, however, Commodore had to give in to disk-format standardization, and made its last 5¼-inch drives, the 1570 and 1571, compatible with Modified Frequency Modulation (MFM) to enable the C128 to work with CP/M disks from several vendors. Equipped with one of these drives, the C128 was able to access both C64 and CP/M disks, as it needed to, as well as MS-DOS disks (using third-party software), which was a crucial feature for some office work. A typical use was to copy MS-DOS text files from PCs at one's workplace and take the files home to edit on a C128.
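The effect of those track-position-dependent data rates can be sketched with the 1541's zone layout. The zone boundaries and sector counts below are from common documentation of the 1541, not from the text above, but they reproduce the drive's familiar capacity:

```python
SECTOR = 256  # Commodore drives used 256-byte sectors

# (number of tracks, sectors per track) for the 1541's four speed zones,
# per the commonly documented layout: tracks 1-17, 18-24, 25-30, 31-35.
# Each zone runs at its own data rate, so outer tracks hold more sectors.
zones = [(17, 21), (7, 19), (6, 18), (5, 17)]

total_sectors = sum(tracks * spt for tracks, spt in zones)  # 683 sectors
capacity = total_sectors * SECTOR                           # 174,848 bytes, ~170 KB
```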

The Commodore Amiga

The Commodore Amiga computers used other kinds of floppy disk optimizations for extra storage, mainly the use of smaller sector gaps, made possible by custom control of the floppy drive rather than using the IBM PC standard disk controller. This allowed 11 (512-byte) sectors per track instead of 9: a total of 880 KB on a DD floppy, and 1.76 MB on HD. Further tricks used by third-party developers, such as writing an entire track at once and removal of the generally unused "sector label" headers, allowed for 12 sectors per track and thus 960 KB on a standard DD floppy or 1.87 MB on HD. The Amiga OS constantly monitors changes in the floppy drive state, which allows the Amiga to immediately recognize when a disk has been inserted or removed, removing the need for the user to respond by clicking a system request.

The Acorn Archimedes

Another machine using a similar "advanced" disk format was the British Acorn Archimedes, which could store 1.6 MB on a 3½-inch HD floppy. It could also read and write disk formats from other machines, for example the Atari ST and the IBM PC. The Amiga's disks could not be read, as they used a non-standard sector size and unusual sector gap markers.

12-inch floppy disks

In the late 1970s some IBM mainframes also used a 12-inch (30 cm) floppy disk, but little information is currently available about their internal format or capacity.

4-inch floppies
IBM developed a 4-inch floppy in the mid-1980s. The program was driven by aggressive cost goals, but misjudged the pulse of the industry: prospective users, both inside and outside IBM, preferred standardization to what were, by release time, small cost reductions, and were unwilling to retool packaging, interface chips and applications for a proprietary design. The product never saw the light of day, and IBM wrote off several hundred million dollars of development and manufacturing facility costs.

IBM developed, and several companies copied, an autoloader mechanism that could load a stack of floppies one at a time into a drive unit. These were very bulky systems, and suffered from media hangups and chew-ups more than anyone liked, but they were a partial answer to replication and large removable storage needs. The smaller 5¼-inch and 3½-inch floppies made this a much easier technology to perfect.

Floppy mass storage

A number of companies, including IBM and Burroughs, experimented with using large numbers of unenclosed disks to create massive amounts of storage. The Burroughs system used a stack of 256 12-inch disks, spinning at high speed. The disk to be accessed was selected by using air jets to part the stack, and then a pair of heads flew over the surface as in any standard hard disk drive. This approach in some ways prefigured the Bernoulli disk from Iomega, but head crashes or air failures were spectacularly messy. The program did not reach production.

2-inch floppy disks

A small floppy disk was also used in the late 1980s to store video information for still video cameras such as the Sony Mavica (not to be confused with current Digital Mavica models) and the Canon Ion. This was not a digital data format; each track on the disk stored one video field in the interlaced composite video format. This yielded a capacity of 25 images per disk in frame mode and 50 in field mode. The same media was used, digitally formatted (720 KB, double-sided, double-density), in the Zenith Minisport laptop computer circa 1989. Although the media exhibited nearly identical performance to the 3.5-inch disks of the time, it was not successful.

Ultimate capacity, speed

It is not easy to give a single answer for data capacity, as many factors are involved, starting with the particular disk format used. The differences between formats and encoding methods can result in data capacities ranging from 720 kilobytes (KB) or less up to 1.72 megabytes (MB) or even more on a standard 3½-inch high-density floppy, just from using special floppy disk software such as the fdformat utility, which enables "standard" 3½-inch HD floppy drives to format HD disks at 1.62, 1.68 or 1.72 MB, though reading them back on another machine is another story. These techniques require much tighter matching of drive head geometry between drives; this is not always possible and can't be relied upon.

The LS-240 drive supports a (rarely used) 32 MB capacity on standard 3½-inch HD floppies; it is, however, a write-once technique, and cannot be used in a read/write/read mode. All the data must be read off, changed as needed, and rewritten to the disk, and an LS-240 drive is required to read it.

Sometimes manufacturers provide an "unformatted capacity" figure, which is roughly 2.0 MB for a standard 3½-inch HD floppy, and which should imply that data density can't (or shouldn't) exceed a certain amount. There are, however, some special hardware/software tools, such as the CatWeasel floppy disk controller and software, which claim up to 2.23 MB of formatted capacity on an HD floppy. Such formats are not standard, are hard to read in other drives (and possibly even later in the same drive), and are probably not very reliable. Floppy disks can hold an extra 10-20% of formatted capacity over their "nominal" values, but at the expense of reliability or hardware complexity.

3½-inch HD floppy drives typically have a transfer rate of 500 kilobaud. While this rate cannot be easily changed, overall performance can be improved by optimizing drive access times, shortening some BIOS-introduced delays (especially on the IBM PC and compatible platforms), and by changing the sector:shift parameter of a disk, which is, roughly, the number of sectors that are skipped by the drive's head when moving to the next track. This happens because sectors aren't typically written in a strictly sequential manner but are scattered around the disk, which introduces yet another delay.
Older machines and controllers may take advantage of these delays to cope with the data flow from the disk without having to actually stop it. By changing this parameter, the sector sequence can be made better suited to the machine's speed. For example, an IBM-format 1.4 MB disk formatted with a sector:shift ratio of 3:2 has a sequential reading time (for reading all of the disk in one go) of just 1 minute, versus 1 minute and 20 seconds or more for a "normally" formatted disk. Notably, the "specially" formatted disk is very (if not completely) compatible with all standard controllers and BIOSes, and generally requires no extra software drivers, as the BIOS generally adapts well to this slightly modified format.
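As a rough sketch of why sector layout dominates sequential read time, the figures above can be approximated by a simple model (the revolutions-per-track values below are illustrative assumptions, not measured data):

```python
# Illustrative model of full-disk sequential read time on a 300 rpm drive.
REV_TIME = 60.0 / 300      # seconds per revolution at 300 rpm
TRACKS = 80 * 2            # 80 tracks on each of two sides

def read_time_s(revs_per_track):
    # revs_per_track models how many rotations each track costs once the head
    # settles: a well-matched sector shift approaches one turn, while a
    # mismatched one wastes most of an extra turn waiting for the first
    # sector to come around again after the head steps.
    return TRACKS * revs_per_track * REV_TIME

print(read_time_s(2.5))    # 80.0 s: roughly the "normally" formatted case
print(read_time_s(2.0))    # 64.0 s: closer to the optimized 3:2 layout
```

The absolute numbers depend on seek and settle times the model ignores; the point is only that shaving a fraction of a revolution per track adds up over 160 tracks.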

One of the chief usability problems of the floppy disk is its vulnerability. Even inside a closed plastic housing, the disk medium is still highly sensitive to dust, condensation, and temperature extremes. As with any magnetic storage, it is also vulnerable to magnetic fields. Blank floppies have usually been distributed with an extensive set of warnings, cautioning the user not to expose them to conditions which could endanger them.

Users damaging floppy disks (or their contents) were once a staple of "stupid user" folklore among computer technicians. These stories poked fun at users who stapled floppies to papers, made faxes or photocopies of them when asked to "copy a disk", or stored floppies by holding them with a magnet to a file cabinet. The flexible 5¼-inch disk could also (folklorically) be abused by rolling it into a typewriter to type a label, or by removing the disk medium from the plastic enclosure to store it safely.

On the other hand, the 3½-inch floppy has also been lauded for its mechanical usability by HCI expert Donald Norman (here quoted from his book The Design of Everyday Things, Chapter 1):

A simple example of a good design is the 3½-inch magnetic diskette for computers, a small circle of "floppy" magnetic material encased in hard plastic. Earlier types of floppy disks did not have this plastic case, which protects the magnetic material from abuse and damage. A sliding metal cover protects the delicate magnetic surface when the diskette is not in use and automatically opens when the diskette is inserted into the computer. The diskette has a square shape: there are apparently eight possible ways to insert it into the machine, only one of which is correct. What happens if I do it wrong? I try inserting the disk sideways. Ah, the designer thought of that. A little study shows that the case really isn't square: it's rectangular, so you can't insert a longer side. I try backward. The diskette goes in only part of the way. Small protrusions, indentations, and cutouts prevent the diskette from being inserted backward or upside down: of the eight ways one might try to insert the diskette, only one is correct, and only that one will fit. An excellent design.

The floppy as a metaphor

For more than two decades now, the floppy disk has been the primary external writable storage device used. Also, in a non-network environment, floppies have been the primary means of transferring data between computers (sometimes jokingly referred to as Sneakernet or Frisbeenet). Floppy disks are also, unlike hard disks, handled and seen; even a novice user can identify a floppy disk. Because of all these factors, the image of the floppy disk has become a metaphor for saving data, and the floppy disk symbol is often seen in programs on buttons and other user interface elements related to saving files.

Floppy disk/drive trivia

On the disk drives of the Atari ST (and possibly other computers as well) the drive activity indicator LEDs are software-controllable. This was put to use in some games, for example in Lemmings, where the LED blinks as the last three building bricks are used by the bridge-builder lemming. In the absence of audio cues, this was critical to prevent the builder lemming from dying.

It was possible to manually force the movement of the drive head carriage in the Commodore 1541 and 1571 disk drives by the use of special commands. This was often used in demo programs to vibrate the head carriage against a "track 0" head stop at varying frequencies to create music.

The standard Commodore GCR scheme used in the 1541 and compatibles employed differing sector counts depending upon track position: tracks 1 to 17 had 21 sectors, 18 to 24 had 19, 25 to 30 had 18, and 31 to 35 had 17. This allowed the 1541 to maximize the available space, storing about 167 KB per disk; contrary to common folklore, it did not use a variable spindle speed (the drive maintained 300 rpm at all track positions).
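The zone layout above can be tallied directly (a small Python sketch; the raw byte total comes out slightly above the quoted 167 KB figure, since some sectors go to the directory and filesystem overhead):

```python
# Sectors per track in the 1541's GCR zone layout (35 tracks, 256-byte sectors).
zone_layout = [
    (17, 21),  # tracks  1-17: 21 sectors each
    (7, 19),   # tracks 18-24: 19 sectors each
    (6, 18),   # tracks 25-30: 18 sectors each
    (5, 17),   # tracks 31-35: 17 sectors each
]

total_sectors = sum(tracks * spt for tracks, spt in zone_layout)
print(total_sectors)        # 683 sectors
print(total_sectors * 256)  # 174,848 raw bytes, about 171 KB before overhead
```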


A hard disk (also called a "hard disc", "hard drive", or "hard disk drive") is a computer storage device.

A hard disk uses rigid rotating platters. It stores and retrieves digital data from a planar magnetic surface. Information is written to the disk by transmitting an electromagnetic flux through a coil in the write head positioned very close to the magnetic material, which in turn changes its polarization due to the flux. The information can be read back in a reverse manner, as the moving magnetic fields induce an electrical change in the coil of the read head that passes over them.

A typical hard disk drive design consists of a central axis or spindle upon which the platters spin at a constant speed. Moving along and between the platters on a common armature are the read-write heads, with one head for each platter face. The armature moves the heads radially across the platters as they spin, allowing each head access to the entire platter.

The associated electronics control the movement of the read-write armature and the rotation of the disk, and perform reads and writes on demand from the disk controller. Modern drive electronics are capable of scheduling reads and writes efficiently across the disk, and of remapping sectors that have failed. Also, most major hard drive and motherboard vendors now support S.M.A.R.T. technology, by which impending failures can often be predicted, allowing the user to be alerted in time to prevent data loss.

The (mostly) sealed enclosure protects the drive internals from dust, condensation, and other sources of contamination. The hard disk's read-write heads fly on an air bearing (a cushion of air) only nanometers above the disk surface. The disk surface and the drive's internal environment must therefore be kept immaculately clean, as fingerprints, hair, dust, and even smoke particles have mountain-sized dimensions compared to the submicroscopic gap that the heads maintain.

Some people believe a disk drive contains a vacuum; this is incorrect, as the system relies on air pressure inside the drive to support the heads at their proper flying height while the disk is in motion. Another common misconception is that a hard drive is totally sealed. A hard disk drive requires a certain range of air pressures in order to operate properly. If the air pressure is too low, the air will not exert enough force on the flying head, the head will not be at the proper height, and there is a risk of head crashes and data loss. (Specially manufactured sealed and pressurized drives are needed for reliable high-altitude operation, above about 10,000 feet. This does not apply to pressurized enclosures, such as an airplane cabin.) Some modern drives include flying-height sensors to detect if the pressure is too low, and temperature sensors to alert the system to overheating problems.

The inside of a hard disk with the platter removed. To the left is the read-write arm. In the middle the electromagnets of the platter's motor can be seen.

Hard disk drives are not airtight. They have a permeable filter (a breather filter) between the top cover and inside of the drive, to allow the pressure inside and outside the drive to equalize while keeping out dust and dirt. The filter also allows moisture in the air to enter the drive. Very high humidity year-round will cause accelerated wear of the drive's heads by increasing stiction, the tendency of the heads to stick to the disk surface, which causes physical damage to the disk and spindle motor. You can see these breather holes on all drives; they usually have a warning sticker next to them, informing the user not to cover the holes. The air inside the operating drive is constantly moving too, being swept in motion by friction with the spinning disk platters. This air passes through an internal filter to remove any leftover contaminants from manufacture, any particles that may have somehow entered the drive, and any particles generated by a head crash.

Due to the extremely close spacing of the heads and disk surface, any contamination of the read-write heads or disk platters can lead to a head crash: a failure of the disk in which the head scrapes across the platter surface, often grinding away the thin magnetic film. For GMR heads in particular, a minor head crash from contamination (one that does not remove the magnetic surface of the disk) will still result in the head temporarily overheating, due to friction with the disk surface, and renders the disk unreadable until the head temperature stabilizes. Head crashes can be caused by electronic failure, a sudden power failure, physical shock, wear and tear, or poorly manufactured disks.

Normally, when powering down, a hard disk moves its heads to a safe area of the disk where no data is ever kept (the landing zone). However, especially in old models, sudden power interruptions or a power supply failure can result in the drive shutting down with the heads in the data zone, which increases the risk of data loss. Newer drives are designed such that the

rotational inertia in the platters is used to safely park the heads in the case of unexpected power loss. In recent years, IBM pioneered drives with "head unloading" technology, where the heads are lifted off the platters onto "ramps" instead of having them rest on the platters; other manufacturers have begun using this technology as well.

Spring tension from the head mounting constantly pushes the heads towards the disk. While the disk is spinning, the heads are supported by an air bearing and experience no physical contact wear. The sliders (the part of the head that is closest to the disk and contains the pickup coil itself) are designed to reliably survive a number of landings and takeoffs from the disk surface, though wear and tear on these microscopic components eventually takes its toll. Most manufacturers design the sliders to survive 50,000 contact cycles before the chance of damage on startup rises above 50%. However, the decay rate is not linear: when a drive is younger and has fewer start/stop cycles, it has a better chance of surviving the next startup than an older, higher-mileage drive (literally, as the head drags along the drive surface until the air bearing is established). For the Maxtor DiamondMax series of drives, for instance, the drive typically has a 0.02% chance of failing after 4,500 cycles and a 0.05% chance after 7,500 cycles, with the chance of failure rising geometrically to 50% after 50,000 cycles and increasing ever after.

Using rigid platters and sealing the unit allows much tighter tolerances than in a floppy disk. Consequently, hard disks can store much more data than floppy disks, and access and transmit it faster. In 2004, a typical workstation hard disk might store between 80 GB and 400 GB of data, rotate at 5,400 to 10,000 rpm, and have an average transfer rate of over 30 MB/s. The fastest workstation hard drives spin at 15,000 rpm. Notebook hard drives, which are physically smaller than their desktop counterparts, tend to be slower and have less capacity; most spin at only 4,200 rpm or 5,400 rpm, though the newest top models spin at 7,200 rpm.

There are three primary factors that determine hard drive performance: seek time, latency and internal data transfer rate:

Seek time is a measure of the speed with which the drive can position its read/write heads over any particular data track. Because neither the starting position of the head nor the distance from there to the desired track is fixed, seek time varies greatly, and it is almost always measured as an average seek time, though full-track (the longest possible) and track-to-track (the shortest possible) seeks are also quoted sometimes. The standard way to measure seek time is to time a large number of disk accesses to random locations, subtract the latency (see below) and take the mean. Note, however, that two different drives with identical average seek times can display quite different performance characteristics. Seek time is always measured in milliseconds (ms), and often regarded as the single most important determinant of drive performance, though this claim is debated. (More on seek time.) All drives have rotational latency: the time that elapses between the moment when the read/write head settles over the desired data track and the moment when the first byte of the required data appears under the head. For any individual read or write operation, latency is

random between zero (if the first data sector happens to be directly under the head at the exact moment that the head is ready to begin reading or writing) and the full rotational period of the drive (for a typical 7200 rpm drive, just under 8.4 ms). However, on average, latency is always equal to one half of the rotational period. Thus, all 5400 rpm drives of any make or model have 5.56 ms latency; all 7200 rpm drives, 4.17 ms; all 10,000 rpm drives, 3.0 ms; and all 15,000 rpm drives have 2.0 ms latency. Like seek time, latency is a critical performance factor and is always measured in milliseconds. (More on latency.)
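The latency figures quoted above follow directly from the rotation speed; a quick sketch:

```python
# Average rotational latency is half a revolution: (60,000 ms/min) / rpm / 2.
def avg_latency_ms(rpm):
    return 60_000 / rpm / 2

for rpm in (5400, 7200, 10_000, 15_000):
    print(f"{rpm:>6} rpm: {avg_latency_ms(rpm):.2f} ms")
# 5400 -> 5.56, 7200 -> 4.17, 10000 -> 3.00, 15000 -> 2.00
```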

The internal data rate is the speed with which the drive's internal read channel can transfer data from the magnetic media. (Or, less commonly, in the reverse direction.) Previously a very important factor in drive performance, it remains significant but less so than in prior years, as all modern drives have very high internal data rates. Internal data rates are normally measured in Megabits per second (Mbit/s).

Subsidiary performance factors include:

Access time is simply the sum of the seek time and the latency. It is important not to mistake seek time figures for access time figures! The access time is by far the most important performance benchmark of a modern HDD: it almost alone defines how fast the disk performs in a typical system. However, people tend to pay much more attention to the data rates, which rarely make any significant difference in typical systems. Of course, in some usage scenarios it may be vice versa, so you need to know your system before buying an HDD.

The external data rate is the speed with which the drive can transfer data from its buffer to the host computer system. Although in theory this is vital, in practice it is usually a non-issue. It is a relatively trivial matter to design an electronic interface capable of outpacing any possible mechanical read/write mechanism, and it is routine for computer makers to include a hard drive controller interface that is significantly faster than the drive it will be attached to. As a general rule, modern ATA and SCSI interfaces are capable of dealing with at least twice as much data as any single drive can deliver; they are, after all, designed to handle two or more drives per bus even though a desktop computer usually mounts only one. For a single-drive computer, the difference between ATA-100 and ATA-133, for example, is largely one of marketing rather than performance. No drive yet manufactured can utilise the full bandwidth of an ATA-100 interface, and few are able to send more data than an ATA-66 interface can accept. The external data rate is usually measured in megabytes per second (MB/s; note the upper-case "B").

Command overhead is the time it takes the drive electronics to interpret instructions from the host computer and issue commands to the read/write mechanism. In modern drives it is negligible.
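Putting seek time and rotational latency together gives the access time; a minimal sketch, with a hypothetical 9 ms average seek chosen only for illustration:

```python
# Access time = average seek time + average rotational latency.
def access_time_ms(avg_seek_ms, rpm):
    return avg_seek_ms + 60_000 / rpm / 2

# A hypothetical 9 ms-seek, 7200 rpm drive.
print(round(access_time_ms(9.0, 7200), 2))  # 13.17 ms
```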

Access and interfaces

A hard disk is generally accessed over one of a number of bus types, including ATA (IDE, EIDE), SCSI, FireWire/IEEE 1394, USB, and Fibre Channel. In late 2002 Serial ATA was introduced.

Back in the days of the ST-506 interface, the data encoding scheme was also important. The first ST-506 disks used Modified Frequency Modulation (MFM) encoding (which is still used on the common "1.44 MB" (1.4 MiB) 3.5-inch floppy), and ran at a data rate of 5 megabits per second. Later on, controllers using 2,7 RLL (or just "RLL") encoding increased this by half, to 7.5 megabits per second; it also increased drive capacity by half. Many ST-506 interface drives were only certified by the manufacturer to run at the lower MFM data rate, while other models (usually more expensive versions of the same basic drive) were certified to run at the higher RLL data rate. In some cases, the drive was overengineered just enough to allow the MFM-certified model to run at the faster data rate; however, this was often unreliable and was not recommended. (An RLL-certified drive could run on a MFM controller, but with 1/3 less data capacity and speed.) ESDI also supported multiple data rates (ESDI drives always used 2,7 RLL, but at 10, 15 or 20 megabits per second), but this was usually negotiated automatically by the drive and controller; most of the time, however, 15 or 20 megabit ESDI drives weren't downward compatible (i.e. a 15 or 20 megabit drive wouldn't run on a 10 megabit controller). ESDI drives typically also had jumpers to set the number of sectors per track and (in some cases) sector size. SCSI originally had just one speed, 5 MHz (for a maximum data rate of 5 megabytes per second), but this was increased dramatically later. The SCSI bus speed had no bearing on the drive's internal speed because of buffering between the SCSI bus and the drive's internal data bus; however, many early drives had very small buffers, and thus had to be reformatted to a different interleave (just like ST-506 drives) when used on slow computers, such as early IBM PC compatibles and Apple Macintoshes. 
ATA drives have typically had no problems with interleave or data rate, due to their controller design, but many early models were incompatible with each other and couldn't run in a master/slave setup (two drives on the same cable). This was mostly remedied by the mid-1990s, when ATA's specification was standardised and the details began to be cleaned up, but it still causes problems occasionally (especially with CD-ROM and DVD-ROM drives, and when mixing Ultra DMA and non-UDMA devices). Serial ATA does away with master/slave setups entirely, placing each drive on its own channel (with its own set of I/O ports) instead.

Other characteristics

capacity (measured in gigabytes)
interface
performance
MTBF (mean time between failures)
power used (especially important in battery-powered laptops)
audible noise (in dBA)
G-shock rating (surprisingly high in modern drives)

Addressing modes
There are two modes of addressing the data blocks on more recent hard disks. The older one is CHS addressing (Cylinder-Head-Sector), used on old ST-506 and ATA drives and internally by the PC BIOS; the more recent one is LBA (Logical Block Addressing), used by SCSI drives and newer ATA drives (ATA drives power up in CHS mode for historical reasons).

CHS describes the disk space in terms of its physical dimensions, data-wise; this is the traditional way of accessing a disk on IBM PC compatible hardware, and while it works well for floppies (for which it was originally designed) and small hard disks, it caused problems when disks started to exceed the design limits of the PC's CHS implementation. The traditional CHS limit was 1024 cylinders, 16 heads and 63 sectors; on a drive with 512-byte sectors, this comes to 504 MiB (528 megabytes). The origin of the CHS limit lies in a combination of the limitations of IBM's BIOS interface (which allowed 1024 cylinders, 256 heads and 64 sectors; sectors were counted from 1, reducing that number to 63, giving an addressing limit of 8064 MiB or just under 8 GiB), and a hardware limitation of the AT's hard disk controller (which allowed up to 65536 cylinders and 256 sectors, but only 16 heads, putting its addressing limit at 2^28 sectors of 512 bytes, or 128 GiB).

When drives larger than 504 MiB began to appear in the mid-1990s, many system BIOSes had problems communicating with them, requiring LBA BIOS upgrades or special driver software to work correctly. Even after the introduction of LBA, similar limitations reappeared several times over the following years: at 2.1, 4.2, 8.4, 32, and 128 GiB. The 2.1, 4.2 and 32 GiB limits are hard limits: fitting a drive larger than the limit results in a PC that refuses to boot, unless the drive includes special jumpers to make it appear as a smaller capacity.
The 8.4 and 128 GiB limits are soft limits: the PC simply ignores the extra capacity and reports a drive of the maximum size it is able to communicate with.

SCSI drives, however, have always used LBA addressing, which describes the disk as a linear, sequentially-numbered set of blocks. SCSI mode page commands can be used to get the physical specifications of the disk, but this is not used to read or write data; this is an artifact of the early days of SCSI, circa 1986, when a disk attached to a SCSI bus could just as well be an ST-506 or ESDI drive attached through a bridge (and therefore having a CHS configuration that was subject to change) as it could a native SCSI device. Because PCs use CHS addressing internally, the BIOS code on PC SCSI host adapters does CHS-to-LBA translation, and provides a set of CHS drive parameters that tries to match the total number of LBA blocks as closely as possible.

ATA drives can either use their native CHS parameters (only on very early drives; hard drives made since the early 1990s use multiple-zone recording, and thus don't have a set number of sectors per track), use a "translated" CHS profile (similar to what SCSI host adapters provide), or run in ATA LBA mode, as specified by ATA-2. To maintain some degree of compatibility with older computers, LBA mode generally has to be requested explicitly by the host computer. ATA drives larger than 8 GiB are always accessed by LBA, due to the 8 GiB limit described above.

See also: hard disk drive partitioning, master boot record, file system, drive letter assignment, boot sector.
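The CHS limits above can be checked with a few lines of arithmetic (a Python sketch; the function and names are ours, for illustration):

```python
# Capacities implied by the CHS parameter limits quoted above (512-byte sectors).
def chs_bytes(cylinders, heads, sectors_per_track):
    return cylinders * heads * sectors_per_track * 512

MiB, GiB = 2**20, 2**30

print(chs_bytes(1024, 16, 63) / MiB)   # 504.0 MiB: the traditional CHS limit
print(chs_bytes(1024, 256, 63) / MiB)  # 8064.0 MiB: the BIOS interface alone
print(2**28 * 512 / GiB)               # 128.0 GiB: 2**28 sectors (ATA LBA)
```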

Most of the world's hard disks are now manufactured by just a handful of large firms: Seagate, Maxtor, Western Digital, Samsung, and the former drive manufacturing division of IBM, now sold to Hitachi. Fujitsu continues to make specialist notebook and SCSI drives but exited the mass market in 2001. Toshiba is a major manufacturer of 2.5-inch notebook drives.

Firms that have come and gone

Dozens of former hard drive manufacturers have gone out of business, merged, or closed their hard drive divisions; as capacities and demand for products increased, profits became hard to find, and there were shakeouts in the late 1980s and late 1990s. The first notable casualty of the business in the PC era was Computer Memories International or CMI; after the 1985 incident with the faulty 20MB AT drives, CMI's reputation never recovered, and they exited the hard drive business in 1987. Another notable failure was MiniScribe, who went bankrupt in 1990 after it was found that they had "cooked the books" and inflated sales numbers for several years. Many other smaller companies (like Kalok, Microscience, LaPine, Areal, Priam and PrairieTek) also did not survive the shakeout, and had disappeared by 1993; Micropolis was able to hold on until 1997, and JTS, a relative latecomer to the scene, lasted only a few years and was gone by 1999. Rodime was also an important manufacturer during the 1980s, but stopped making drives in the early 1990s amid the shakeout and now concentrates on technology licensing; they hold a number of patents related to 3.5-inch form factor hard drives. There have also been a number of notable mergers in the hard disk industry:

Tandon sold its disk manufacturing division to Western Digital (which was then a controller maker and ASIC house) in 1988; by the early 1990s Western Digital disks were among the top sellers.
Quantum bought DEC's storage division in 1994, and later (2000) sold the hard disk division to Maxtor to concentrate on tape drives.
In 1995, Conner Peripherals announced a merger with Seagate (which had earlier bought Imprimis from CDC); it completed in early 1996.
JTS infamously merged with Atari in 1996, giving it the capital it needed to bring its drive range into production.
In 2003, following the controversy over the mass failures of the Deskstar 75GXP range (which resulted in lost sales of its follow-ons), hard disk pioneer IBM sold the majority of its disk division to Hitachi, who renamed it Hitachi Global Storage Technologies.

"Marketing" capacity versus true capacity

It is important to note that hard drive manufacturers often use the decimal definition of a gigabyte or megabyte, while computers operate on the binary numeral system. As a result, after the drive is installed it appears that a few gigabytes or megabytes have disappeared: a decimal gigabyte (10^9 bytes) is about 7% smaller than a binary gibibyte (2^30 bytes). The term "1.44 MB", often used to describe 1440 KB floppies (actually 1.47 MB or 1.4 MiB), introduced an anomalous definition of "megabyte" as 10^3 x 2^10 = 1,024,000 bytes (1 KKiB).
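A quick sketch of the discrepancy, using a hypothetical "80 GB" drive as the example:

```python
# A "marketing" gigabyte is 10**9 bytes; operating systems usually report GiB (2**30).
def reported_gib(marketing_gb):
    return marketing_gb * 10**9 / 2**30

print(round(reported_gib(80), 1))   # an "80 GB" drive shows up as ~74.5 GiB
print(round(1 - 10**9 / 2**30, 3))  # a decimal GB is ~6.9% smaller than a GiB
```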

Hard disk usage

From the original use of a hard drive in a single computer, techniques for guarding against hard disk failure were developed such as the redundant array of independent disks (RAID). Hard disks are also found in network attached storage devices, but for large volumes of data are most efficiently used in a storage area network.

The first computer with a hard disk drive as standard was the IBM 305, introduced in 1956 with the IBM 350 Disk File. Development began in 1952 at IBM's San Jose laboratory under engineer Reynold Johnson; the finished drive had fifty 24-inch platters rotating at 1200 rpm, with a total capacity of five million characters.

In 1973, IBM introduced the 3340 "Winchester" disk system (the 30 MB capacity plus 30 millisecond access time reportedly led the project to be named after the Winchester 30-30 rifle), the first to use a sealed head/disk assembly (HDA). Almost all modern disk drives use this technology, and "Winchester" became a common description for all hard disks.

For many years, hard disks were large, cumbersome devices, more suited to use in the protected environment of a data center or large office than in a harsh industrial environment (due to their delicacy), or a small office or home (due to their size and power consumption). Before the early 1980s, most hard disks had 8-inch or 14-inch platters, required an equipment rack or a large amount of floor space (especially the large removable-media drives, which were often referred to as "washing machines"), and in many cases needed special power hookups for the large motors they used. Because of this, hard disks were not commonly used with microcomputers until after 1980, when Seagate Technology introduced the ST-506, the first 5.25-inch hard drive, with a capacity of 5 megabytes. In fact, in its factory configuration the original IBM PC (IBM 5150) was not equipped with a hard drive. Most microcomputer hard disk drives in the early 1980s were not sold under their manufacturers' names, but by OEMs as part of larger peripherals (such as the Corvus Disk System and the Apple ProFile).
The IBM PC/XT, however, had an internal hard disk, and this started a trend toward buying "bare" drives (often by mail order) and installing them directly into a system. Hard disk makers started marketing to end users as well as OEMs, and by the mid-1990s hard disks had become available on retail store shelves. While internal drives became the system of choice on PCs, external hard drives remained popular for much longer on the Apple Macintosh and other platforms. Every Mac made between 1986 and 1998 has a SCSI port on the back, making external expansion easy; also, "toaster" Macs did not have easily accessible hard drive bays (or, in the case of the Mac Plus, any hard drive bay at all), so on those models external SCSI disks were the only reasonable option. External SCSI drives were also popular with older microcomputers such as the Apple II series and the Commodore 64, and were used extensively in servers, a usage which is still popular today. The appearance in the late 1990s of high-speed external interfaces such as USB and IEEE 1394 (FireWire) has made external disk systems popular among regular users once again, especially for users who move large amounts of data between two or more locations, and most hard disk makers now make their disks available in external cases.

The capacity of hard drives has grown exponentially over time. With early personal computers, a drive with a 20 megabyte capacity was considered large. In the latter half of the 1990s, hard drives with capacities of 1 gigabyte and greater became available. As of early 2005, the "smallest" desktop hard disk in production has a capacity of 40 gigabytes, while the largest-capacity drives approach one half terabyte (500 gigabytes) and are expected to exceed that mark by year's end.
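The growth rate implied by those two data points can be estimated directly (the specific years are assumptions for illustration, since the text only says "early personal computers" and "early 2005"):

```python
import math

# Rough growth-rate estimate from two data points in the text.
start_year, start_mb = 1985, 20          # "20 MB was considered large"
end_year, end_mb = 2005, 500_000         # ~500 GB drives

growth_factor = end_mb / start_mb        # 25,000x over the period
years = end_year - start_year
annual = growth_factor ** (1 / years)    # compound annual growth factor
doubling_months = 12 * math.log(2) / math.log(annual)

print(f"{annual - 1:.0%} per year")                    # roughly 66%
print(f"doubling every {doubling_months:.0f} months")  # roughly 16
```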


Magnetic tape is an information storage medium consisting of a magnetisable coating on a thin plastic strip. Nearly all recording tape is of this type, whether used for video with a video cassette recorder, audio storage (reel-to-reel tape, compact audio cassette, digital audio tape (DAT), digital linear tape (DLT) and other formats including 8-track cartridges) or general purpose digital data storage using a computer (specialized tape formats, as well as the above-mentioned compact audio cassette, used with home computers of the 1980s, and DAT, used for backup in workstation installations of the 1990s). Magneto-optical and optical tape storage products have been developed using many of the same concepts as magnetic storage, but have achieved little commercial success.

Magnetic tape audio storage

See: Sound Recording: Magnetic Recording

Magnetic tape video storage

Magnetic tape is a common video storage medium, especially for recording. At home, VHS cassettes are omnipresent while DV has become the standard for consumer camcorders, and at TV studios digital video cassettes such as DVCPRO, DVCAM and Digital Betacam have been common for years.

Magnetic tape data storage

half-inch reel tape

Magnetic tape was invented by Fritz Pfleumer in 1928 in Germany, based on the invention of the magnetic wire by Valdemar Poulsen in 1898. It was not used to record data until 1951, on the Mauchly-Eckert UNIVAC I. The recording medium was a 1/2 inch wide thin band of nickel-plated bronze. Recording density was 128 characters per inch on eight tracks at a linear speed of 100 ips, yielding a data rate of 12,800 characters per second. Making allowance for the empty space between tape blocks, the actual transfer rate was around 7,200 characters per second.

IBM computers from the 1950s used oxide-coated tape similar to that used in audio recording, and IBM's technology soon became the de facto industry standard. Magnetic tape was half an inch wide and wound on removable reels 10.5 inches in diameter. Different lengths were available, with 2400 feet and 4800 feet being common. Most modern magnetic tape systems use reels that are much smaller and are fixed inside a cartridge to protect the tape and facilitate handling; modern cartridge formats include QIC, DAT, and Exabyte.

Early IBM tape drives were mechanically sophisticated floor-standing drives that used vacuum columns to buffer long u-shaped loops of tape. Between active control of powerful reel motors and vacuum control of these u-shaped tape loops, extremely rapid start and stop of the tape at the tape-to-head interface could be achieved. When active, the two tape reels thus spun in rapid, uneven, unsynchronized bursts, resulting in visually striking action. Stock shots of such vacuum-column tape drives in motion were widely used to represent "the computer" in movies and television.

LINCtape (and its derivative, DECtape) were variations on this "round tape." They were essentially a personal storage medium. They featured a fixed formatting track which, unlike standard tape, made it feasible to read and rewrite blocks repeatedly in place.
LINCtapes and DECtapes had similar capacity and data transfer rate to the diskettes that displaced them, but their "seek times" were on the order of thirty seconds to a minute.
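The UNIVAC figures above are consistent with simple arithmetic; a quick check (the tape utilization is inferred from the quoted effective rate, not stated in the source):

```python
# UNIVAC I tape: raw vs. effective data rate.
density_cpi = 128   # characters per inch
speed_ips = 100     # tape speed, inches per second

raw_rate = density_cpi * speed_ips
print(raw_rate, "characters/second")   # 12800

# The text quotes ~7,200 cps effective once inter-block gaps are
# accounted for, i.e. only about 56% of the tape length is data.
effective_rate = 7200
print(f"utilization: {effective_rate / raw_rate:.0%}")   # 56%
```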

cartridge tapes in drives

A tape drive (or "transport" or "deck") uses precisely-controlled motors to wind the tape from one reel to the other, passing a read/write head as it does. Early tape had seven parallel tracks of data along the length of the tape, allowing six-bit characters plus parity written across the tape. A typical recording density was 556 characters per inch. The tape had reflective marks near its end which signaled beginning of tape (BOT) and end of tape (EOT) to the hardware.

Since then, a multitude of tape formats have been used, but common features emerge. In a typical format, data is written to tape in blocks with inter-block gaps between them, and each block is written in a single operation with the tape running continuously during the write. However, since the rate at which data is written or read is not deterministic, a tape drive usually has to cope with a difference between the rate at which data goes on and off the tape and the rate at which data is supplied or demanded by its host. Various methods have been used, alone and in combination, to cope with this difference: a large memory buffer can be used to queue the data; the tape drive can be stopped, backed up, and restarted; and the host can assist this process by choosing appropriate block sizes to send to the tape drive. There is a complex tradeoff between block size, the size of the data buffer in the record/playback deck, the percentage of tape lost on inter-block gaps, and read/write throughput.

Tape has quite a long data latency for random accesses, since the deck must wind an average of 1/3 of the tape length to move from one arbitrary data block to another. Most tape systems attempt to alleviate this intrinsic latency using either indexing, whereby a separate lookup table is maintained which gives the physical tape location for a given data block number, or marking, whereby a tape mark that can be detected while winding the tape at high speed is written to the tape.

Most tape drives now include some kind of data compression. There are several algorithms which provide similar results: LZ (most), IDRC (Exabyte), ALDC (IBM, QIC) and DLZ1 (DLT). The compression algorithms actually used are not the most effective known today, and better results can usually be obtained by turning off the compression built into the device and using a software compression program instead.

Tape remains a viable alternative to disk due to its higher bit density and lower cost per bit; historically these two advantages were enough to sustain it as a product. The recent vigorous innovation in disk storage density and price, coupled with less vigorous innovation in tape storage, has reduced the viability of tape storage products.
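The block-size tradeoff described above is easy to quantify; a sketch using illustrative numbers (the density, gap size, and block sizes are assumptions, not figures for any specific drive):

```python
# Fraction of tape spent on data vs. inter-block gaps for various
# block sizes. All numbers are illustrative only.
density_bpi = 1600   # bytes per inch (assumed)
gap_inches = 0.6     # inter-block gap length (assumed)

for block_bytes in (80, 512, 4096, 32768):
    block_inches = block_bytes / density_bpi
    utilization = block_inches / (block_inches + gap_inches)
    print(f"{block_bytes:6d}-byte blocks: {utilization:.1%} of tape is data")
```

Larger blocks waste less tape on gaps but demand a larger buffer in the deck, which is exactly the tradeoff the text describes.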

Punched tape is an old-fashioned form of data storage, consisting of a long strip of paper in which holes are punched to store data.

A roll of punched tape

The earliest forms of punched tape come from weaving looms and embroidery, where cards with simple instructions about a machine's intended movements were first fed individually as instructions, then controlled by instruction cards, and later were fed as a string of connected cards (see Jacquard loom). This led to the concept of communicating data not as a stream of individual cards, but as one "continuous card," or a tape. Many professional embroidery operations still refer to those individuals who create the designs and machine patterns as "punchers", even though punched cards and paper tape were eventually phased out, after many years of use, in the 1990s.

In 1846 Alexander Bain used punched tape to send telegrams. Punched tape was eventually also used as a way of storing messages for teletypewriters. The idea was to type the message onto the paper tape, then send it at "high speed" from the tape; the tape reader could "type" the message faster than a typical human operator, thus saving on phone bills. Text was encoded in two common standards: Baudot, which had 5 holes, and ASCII, which had 7 or 8 holes.

When the first business-oriented computers were being released, many turned to the existing mass-produced teletypewriter as a low-cost solution for printer output. This is why computers today still use ASCII. As a side effect, punched tape readers became a popular medium for low-cost storage, and it was common to find a selection of tapes containing useful programs in most computer installations.

In the late 1960s to early 1970s, Teletype Corporation's ASR33 was a very popular model of teletype. It had a built-in paper tape reader and tape punch (8-hole ASCII). It could print, and read or punch tape, at the speed of 10 characters per second. The ASR33 tape reader was purely mechanical: eight spring-loaded fingers would be thrust into the tape (one character at a time), and an assortment of rods and levers would sense how high each finger rose, which indicated whether there was a hole in the tape at that position. Later on, photo readers that used light sensors could work at much higher speeds (hundreds of characters per second), and more sophisticated punches could run at somewhat higher speeds (Teletype's BRPE punch could run at 60 characters per second). "Wikipedia" in ASCII punched tape code (without a parity bit, or with "spacing" parity) appears as follows (created by the BSD ppt program):
/\/\/\/\/|
|    .   |
|    .   |
|o o .ooo|
|oo o.  o|
|oo o. oo|
|oo o.  o|
|ooo .   |
|oo  .o o|
|oo  .o  |
|oo o.  o|
|oo  .  o|
|   o.o o|
|   o. o |
|    .   |
|    .   |
|/\/\/\/\/

W i k i p e d i a Carriage Return Line Feed
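The hole patterns can be decoded mechanically ('o' is a data hole meaning 1, a blank means 0, and '.' marks the small sprocket feed hole); a sketch:

```python
# Decode 7-bit ASCII rows of punched tape, most significant bit first.
# 'o' = data hole (1), ' ' = no hole (0), '.' = sprocket feed hole.
rows = [
    "o o .ooo",  # W
    "oo o.  o",  # i
    "oo o. oo",  # k
    "oo o.  o",  # i
    "ooo .   ",  # p
    "oo  .o o",  # e
    "oo  .o  ",  # d
    "oo o.  o",  # i
    "oo  .  o",  # a
    "   o.o o",  # carriage return
    "   o. o ",  # line feed
]

def decode_row(row: str) -> str:
    bits = "".join("1" if c == "o" else "0" for c in row if c != ".")
    return chr(int(bits, 2))

message = "".join(decode_row(r) for r in rows)
print(repr(message))   # 'Wikipedia\r\n'
```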

The two biggest problems with paper tape were:

Reliability. It was common practice to follow each mechanical copying of a tape with a manual hole-by-hole comparison. See also chad (the little pieces of paper punched out of the tape).

Rewinding. Rewinding the tape was difficult and prone to problems, and great care was needed to avoid tearing the tape. Some systems used fanfold paper tape rather than rolled paper tape. In these systems no rewinding was necessary, nor were any fancy supply reel, takeup reel, or tension arm mechanisms required; the tape merely fed from the supply tank through the reader to the takeup tank, refolding itself back into the exact same form as when it was fed into the reader.


The punch card (or "Hollerith" card) is a recording medium for holding information for use by automated data processing machines. Made of stiff cardboard, the punch card represents information by the presence or absence of holes in predefined positions on the card. In the first generation of computing, from the 1920s into the 1950s, punch cards were the primary medium for data storage and processing. They were an important medium, particularly for data input, well into the 1970s, but are now long obsolete outside of a few legacy systems and specialized applications.


The punched card predates computers considerably, having been originated by Joseph Jacquard in 1801 as a control device for Jacquard looms. Such cards were also used as an input method for the primitive calculating machines of the late 19th century. The version by Herman Hollerith, patented on June 8, 1887 and used with mechanical tabulating machines in the 1890 U.S. Census, was a piece of cardboard about 90 mm by 215 mm with round holes. This was the same size as the dollar bill of the time, so that storage cabinets designed for money could be used for his cards.

The early applications of punched cards all used specifically designed card layouts. It wasn't until around 1928 that punched cards and machines were made "general purpose". In that year, punched cards were made a standard size, exactly 7-3/8 inch by 3-1/4 inch (187.325 by 82.55 mm), reportedly corresponding to the US currency of the day, though some sources characterise this assertion as urban legend.

To compensate for the cyclical nature of the Census Bureau's demand for his machines, Hollerith founded the Tabulating Machine Company (1896), which in 1911 was merged into the Computing-Tabulating-Recording Company, later built into IBM under Thomas J. Watson Sr. IBM manufactured and marketed a wide variety of business machines and added the Hollerith card equipment to its line. The IBM 80-column punching format, with rectangular holes, eventually won out over the Univac 90-character format, which used 45 columns (2 characters in each) of 12 round holes.

IBM (Hollerith) punched cards are made of smooth stock, 0.007 of an inch thick. There are about 143 cards to an inch of thickness; a group of such cards is called a deck. Punch cards were widely known simply as IBM cards.

Functional details
The method is quite simple: on a piece of light-weight cardboard, successive positions either have a hole punched through them or are left intact. The rectangular bits of paper punched out are called chads. Thus, each punch location on the card represents a single binary digit (or "bit"). Each column on the card contained several punch positions (multiple bits). The IBM card format, which became standard, held 80 columns of 12 punch locations each, representing 80 characters. Originally only numeric information was coded, with one punch per column (digit [0-9]). Later, codes were introduced for upper-case letters and special characters: a column with two punches (zone [12,11,0] + digit [1-9]) was a letter, and one with three punches (zone [12,11,0] + digit [1-7] + 8) was a special character. The introduction of EBCDIC in 1964 allowed columns with as many as six punches (zones [12,11,0,8,9] + digit [1-7]). Data was entered on a machine called a keypunch, which was like a large, very noisy typewriter. Often the text was also printed at the top of the card, allowing humans to read it as well; this was done using a machine called an interpreter, though later-model keypunches could do it themselves.
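The zone-and-digit scheme above can be sketched as a lookup table (a simplified model of the standard letter and digit codes; special characters and EBCDIC extensions are omitted):

```python
# Simplified Hollerith punch codes: which rows are punched for
# digits and upper-case letters. Zone rows are 12, 11, and 0.
punch_code = {}

for d in range(10):                       # digits: a single punch
    punch_code[str(d)] = [d]

for i, letter in enumerate("ABCDEFGHI"):  # zone 12 + digits 1-9
    punch_code[letter] = [12, i + 1]
for i, letter in enumerate("JKLMNOPQR"):  # zone 11 + digits 1-9
    punch_code[letter] = [11, i + 1]
for i, letter in enumerate("STUVWXYZ"):   # zone 0 + digits 2-9
    punch_code[letter] = [0, i + 2]

print(punch_code["A"])   # [12, 1]
print(punch_code["Z"])   # [0, 9]
print(punch_code["5"])   # [5]
```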

Multi-character data, such as words or large numbers, was stored in adjacent card columns known as fields. For applications in which accuracy was critical, the practice was to have two different operators key the same data, with the second using a card-verifier instead of a card-punch. Verified cards would be marked with a rounded notch on the right end. Failed cards would be replaced by a key punch operator. There was a great demand for key-punch operators, usually women, who worked full-time on key punch and verifier machines. Electromechanical equipment (called unit record equipment) for punching, sorting, tabulating and printing the cards was manufactured. These machines allowed sophisticated data processing tasks to be accomplished long before computers were invented. The card readers used an electrical (metal brush) or, later, optical sensor to detect which positions on the card contained a hole. They had high-speed mechanical feeders to process around one hundred cards per minute. All processing was done with electromechanical counters and relays. The machines were programmed using wire patch panels.

Other formats
Other coding schemes, sizes of card, and hole shapes were tried at various times. Mark sense cards had printed ovals that humans would fill in with a pencil; specialized card punches could detect these marks and punch the corresponding information into the card. There were also cards with all the punch positions perforated, so data could be punched out manually, one hole at a time, with a device like a blunt pin with its wire bent into a finger-ring on the other end. In the early 1970s, IBM introduced a new, smaller, round-hole, 96-column card format along with the IBM System/3 computer.

Aperture cards are a specialized use of punch cards for storing blueprints. A drawing is photographed onto 35 mm film and the image is mounted in a window on the right half of the punch card. Information about the drawing, e.g. the drawing number, is punched in the left half.

IBM punch cards could be used with early computers in a binary mode, where every column was treated as a simple bitfield and every combination of holes was permitted. In this binary mode, cards could be made in which every possible punch position had a hole; these were called "lace cards." For example, the IBM 700/7000 series scientific computers treated every row as two 36-bit words, in columns 1-72, ignoring the last 8 columns. Other computers, like the IBM 1130, used every possible hole.

In its earliest uses, the punch card was not just a data recording medium, but a controlling element of the data processing operation. Electrical pulses produced when the read brushes passed through holes punched in the cards directly triggered electro-mechanical counters, relays, and solenoids. Cards were inexpensive and provided a permanent record of each transaction. Large organizations had warehouses filled with punch card records.

One reason punch cards persisted into the early computer age was that an expensive computer was not required to encode information onto the cards. When the time came to transfer punch card information into the computer, the process could occur at very high speed, either by the computer itself or by a separate, smaller computer (e.g. an IBM 1401) that read the cards and wrote the data onto magnetic tapes or, later, on removable hard disks, that could then be mounted on the larger computer, thus making best use of expensive mainframe computer time.

Punched-card systems fell out of favor in the mid to late 1970s, as disk storage became cost effective, and interactive terminals meant that users could edit their work with the computer directly rather than requiring the intermediate step of the punched cards. However, their influence lives on through many standard conventions and file formats. The terminals that replaced the punched cards displayed 80 columns of text, for compatibility with existing software. Many programs still operate on the convention of 80 text columns, although strict adherence to that is fading as newer systems employ graphical user interfaces with variable-width fonts.

Dimpled and hanging chads

The term for the punched card area which is removed during a punch is chad. One notorious problem with a punched-card system of tabulation is the incomplete punch; this can lead to a smaller hole than expected, to a mere slit on the card, or to a mere dimple on the card. A chad which is still attached to the card is thus a hanging chad. This technical problem was claimed by the Democratic Party to have influenced the 2000 U.S. presidential election; in the state of Florida, they said voting machines which used punched cards to tabulate votes generated improperly rendered records of several hundred votes, spread out over the entire state, which tipped the vote in favor of George W. Bush over Al Gore. It remains a debatable issue, depending on which side of the political fence one is on. Some considered it a minor scandal that punch card-based voting machines continued to be used over the next several years, including the 2004 U.S. presidential race; others, who had used the system for years without the slightest problem, cannot understand how it could be such an issue.


Flash memory is a form of EEPROM that allows multiple memory locations to be erased or written in one programming operation.


Normal EEPROM only allows one location at a time to be erased or written, meaning that flash memory can operate at higher effective speeds when the system uses it to read and write different locations at the same time. All types of flash memory and EEPROM wear out after a certain number of erase operations, due to wear on the insulating oxide layer around the charge storage mechanism used to store data.

Flash memory is non-volatile: it stores information on a silicon chip in a way that needs no power to maintain, so the information is retained even when the power is turned off. In addition, flash memory offers fast read access times and solid-state shock resistance. These characteristics explain the popularity of flash memory for storage on battery-powered devices such as cellular phones and PDAs.

Flash memory is based on the floating-gate avalanche-injection metal oxide semiconductor (FAMOS) transistor, which is essentially an NMOS transistor with an additional conductor suspended between the gate and source/drain terminals. Flash memory is made in two forms, NOR flash and NAND flash; the names refer to the type of logic gate used in each storage cell. Flash memory is often used in MP3 players, digital cameras and mobile phones.

Principles of operation
Flash memory stores information in an array of transistors, called "cells", each of which traditionally stores one bit of information. Newer flash memory devices, sometimes referred to as multi-level cell devices, can store more than one bit per cell by varying the number of electrons placed on the floating gate of a cell.

In NOR flash, each cell looks similar to a standard MOSFET transistor, except that it has two gates instead of just one. One gate is the control gate (CG), as in other MOS transistors, but the second is a floating gate (FG) that is insulated all around by an oxide layer. The FG sits between the CG and the substrate. Because the FG is isolated by its insulating oxide layer, any electrons placed on it get trapped there and thus store the information. When electrons are on the FG, they modify (partially cancel out) the electric field coming from the CG, which modifies the threshold voltage (Vt) of the cell. Thus, when the cell is "read" by placing a specific voltage on the CG, electrical current will either flow or not flow, depending on the Vt of the cell, which is controlled by the number of electrons on the FG. This presence or absence of current is sensed and translated into 1's and 0's, reproducing the stored data. In a multi-level cell device, the amount of current flow is sensed, rather than simply its presence or absence, in order to determine the number of electrons stored on the FG.

A NOR flash cell is programmed (set to a specified data value) by starting up electrons flowing from the source to the drain; a large voltage placed on the CG then provides a strong enough electric field to pull them up onto the FG, a process called hot-electron injection. To erase a NOR flash cell (resetting it to all 1's, in preparation for reprogramming), a large voltage differential is placed between the CG and source, which pulls the electrons off through Fowler-Nordheim tunneling, a quantum-mechanical tunneling process. Most modern NOR flash memory components are divided into erase segments, usually called either blocks or sectors; all of the memory cells in a block must be erased at the same time. NOR programming, however, can generally be performed one byte or word at a time.

NAND flash uses tunnel injection for writing and tunnel release for erasing. NAND flash memory forms the core of the removable USB-interface storage devices known as keydrives.
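The threshold-voltage read mechanism can be caricatured in a few lines (the voltages are invented for illustration; real devices sense analog currents, not a boolean):

```python
# Toy model of reading a floating-gate cell: electrons trapped on the
# FG raise the threshold voltage Vt, so at a fixed read voltage the
# cell either conducts (reads as 1, erased) or does not (reads as 0,
# programmed). All numbers are made up for illustration.
VT_ERASED = 2.0       # threshold with no charge on the FG (volts)
VT_SHIFT_PER_E = 0.5  # Vt increase per unit of trapped charge
V_READ = 3.0          # voltage applied to the control gate on read

def read_cell(trapped_charge: int) -> int:
    vt = VT_ERASED + VT_SHIFT_PER_E * trapped_charge
    conducts = V_READ > vt
    return 1 if conducts else 0   # erased cells read as 1

print(read_cell(0))   # 1 (erased: conducts at V_READ)
print(read_cell(4))   # 0 (programmed: Vt pushed above V_READ)
```

A multi-level cell would instead compare the sensed current against several thresholds, distinguishing more than two charge levels.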

NOR flash was the first type to be developed, invented by Intel in 1988. It has long erase and write times, but has a full address/data (memory) interface that allows random access to any location. This makes it suitable for storage of program code that needs to be infrequently updated, such as a computer's BIOS or the firmware of set-top boxes. Its endurance is 10,000 to 100,000 erase cycles. NOR-based flash is the basis of early flash-based removable media; CompactFlash was originally based on it, though later cards moved to the cheaper NAND flash.

NAND flash, from Samsung and Toshiba, followed in 1989. It has faster erase and write times, higher density, and lower cost per bit than NOR flash, and ten times the endurance. However, its I/O interface allows only sequential access to data. This makes it suitable for mass-storage devices such as PC cards and various memory cards, and somewhat less useful for computer memory. The first NAND-based removable media format was SmartMedia, and numerous others have followed: MMC, Secure Digital, Memory Stick and xD-Picture Cards.

A frequently asked question is why flash memory is not used to replace DRAM in computers, so that memory contents would not be lost when they are turned off or power is lost. The limitation of flash memory is that while it can be read or programmed a byte or a word at a time in a random-access fashion, blocks of memory must be erased all at the same time. To explain further, flash components are generally subdivided into a number of segments called blocks. Starting with a freshly erased block, any byte within that block can be programmed. However, once a byte has been programmed, it cannot be changed again until it is erased, which has to be done a block at a time. In other words, flash memory (specifically NOR flash) offers random-access read and programming operations, but cannot offer random-access rewrite or erase operations. Thus, many applications of DRAM that involve quickly overwriting a specific address location cannot be easily implemented on flash memory. In addition, DRAM is generally cheaper than flash memory on a cost-per-bit basis.

The Tungsten T5 PDA and Treo 650 smartphone from PalmOne, released in late 2004, do however provide essentially this feature: a DRAM cache over a FAT filesystem stored on internal NAND flash is used to emulate the older Palm type of directly-addressable powered DRAM. This technique is known as Nonvolatile Filesystem (NVFS) and gives the illusion of a seamless RAM storage pool that does not lose any of its data when switched off.
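The program/erase asymmetry described above can be modeled in a few lines (the block size is chosen arbitrarily for illustration):

```python
# Toy model of a NOR flash block: programming can only clear bits
# (1 -> 0); restoring any bit to 1 requires erasing the whole block.
BLOCK_SIZE = 16  # bytes, illustrative

class FlashBlock:
    def __init__(self):
        self.data = bytearray([0xFF] * BLOCK_SIZE)  # erased state

    def program(self, offset: int, value: int):
        # Hardware can only pull bits down: the result is the AND
        # of the old contents and the new value.
        self.data[offset] &= value

    def erase(self):
        # Erase is all-or-nothing for the entire block.
        self.data = bytearray([0xFF] * BLOCK_SIZE)

blk = FlashBlock()
blk.program(0, 0x55)
print(hex(blk.data[0]))   # 0x55
blk.program(0, 0xAA)      # cannot set bits back to 1...
print(hex(blk.data[0]))   # 0x0  (0x55 & 0xAA == 0x00)
blk.erase()               # ...only a full-block erase can
print(hex(blk.data[0]))   # 0xff
```

This is exactly why overwriting a single address in place, trivial in DRAM, is awkward on flash.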

Flash file systems

Because of the particular characteristics of flash memory, it is best used with specifically designed file systems which spread writes over the media and deal with the long erase times of NOR flash blocks. The basic concept behind flash file systems is that when the flash store is to be updated, the file system writes a new copy of the changed data to a fresh block, remaps the file pointers, then erases the old block later, when it has time. JFFS was the first of these file systems, quickly superseded by JFFS2, originally developed for NOR flash. YAFFS was then released in 2003, dealing specifically with NAND flash, and JFFS2 was updated to support NAND flash too. In practice, however, most flash media is used with the old FAT filesystem for compatibility purposes.
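The write-to-a-fresh-block-then-remap idea can be sketched as follows (a minimal model of the concept, not how JFFS2 or YAFFS actually lay out data):

```python
# Minimal copy-on-write flash store: each update of a logical block
# goes to a fresh physical block and the pointer is remapped; the old
# block is queued for later (whole-block) erasure. Purely illustrative.
class FlashStore:
    def __init__(self, num_blocks: int):
        self.free = list(range(num_blocks))  # erased physical blocks
        self.map = {}                        # logical -> physical
        self.to_erase = []                   # dirty blocks, erase later
        self.phys = {}                       # physical block contents

    def write(self, logical: int, data: bytes):
        new = self.free.pop(0)               # pick a fresh block
        self.phys[new] = data
        old = self.map.get(logical)
        if old is not None:
            self.to_erase.append(old)        # erase the old copy lazily
        self.map[logical] = new              # remap the pointer

    def read(self, logical: int) -> bytes:
        return self.phys[self.map[logical]]

store = FlashStore(num_blocks=8)
store.write(0, b"v1")
store.write(0, b"v2")    # goes to a new physical block
print(store.read(0))     # b'v2'
print(store.to_erase)    # [0]  -- old block awaiting erase
```

As a side effect, spreading each update onto a fresh block also levels wear across the device, which is why flash file systems take this approach.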