Raster Scan
In a raster scan system, the electron beam is swept
across the screen, one row at a time from top to bottom.
As the electron beam moves across each row, the beam
intensity is turned on and off to create a pattern of
illuminated spots.
Each screen point is referred to as a pixel (picture element) or pel. At the end of each scan line, the
electron beam returns to the left side of the screen to begin displaying the next scan line.
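The row-by-row sweep described above can be sketched as a pair of nested loops (the 8 x 4 screen size here is an arbitrary illustration, not a real display):

```python
# A minimal sketch of raster-scan order: the beam visits every pixel
# row by row, from the top scan line to the bottom, sweeping left to
# right within each line.

WIDTH, HEIGHT = 8, 4  # a tiny illustrative 8 x 4 "screen"

scan_order = []
for row in range(HEIGHT):        # one scan line at a time, top to bottom
    for col in range(WIDTH):     # beam sweeps across the row, left to right
        scan_order.append((row, col))
    # at the end of each line, the beam retraces to the left edge

print(scan_order[0], scan_order[-1])  # (0, 0) (3, 7)
```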
Applications of Computer Graphics
Graphical user interfaces (GUIs) − A graphical, mouse-oriented paradigm that allows the user to
interact with a computer.
Business presentation graphics − "A picture is worth a thousand words".
Cartography − Drawing maps.
Weather Maps − Real-time mapping, symbolic representations.
Satellite Imaging − Geodesic images.
Photo Enhancement − Sharpening blurred photos.
Medical imaging − MRIs, CAT scans, etc. - Non-invasive internal examination.
Engineering drawings − mechanical, electrical, civil, etc. - Replacing the blueprints of the past.
Typography − The use of character images in publishing - replacing the hard type of the past.
Architecture − Construction plans, exterior sketches - replacing the blueprints and hand drawings of the past.
Art − Computers provide a new medium for artists.
Training − Flight simulators, computer aided instruction, etc.
Digitize
Digitize means to translate into a digital form. For example, optical scanners digitize images by
translating them into bit maps. It is also possible to digitize sound, video, and any type of movement. In
all these cases, digitization is performed by sampling at discrete intervals. To digitize sound, for example,
a device measures a sound wave's amplitude many times per second. These numeric values can then be
recorded digitally.
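The sound example above can be sketched directly: sample a continuous wave at discrete intervals and record each amplitude as a number. The sample rate, tone frequency, and 8-bit quantization below are illustrative choices, not standards from the text:

```python
import math

# Digitization sketch: sample a continuous sound wave (a 440 Hz sine)
# at discrete intervals and quantize each amplitude to an 8-bit value.

sample_rate = 8000          # samples per second (illustrative)
duration = 0.01             # seconds of sound to capture
freq = 440.0                # tone frequency in Hz

samples = []
for n in range(int(sample_rate * duration)):
    t = n / sample_rate                            # time of this sample
    amplitude = math.sin(2 * math.pi * freq * t)   # value in [-1, 1]
    quantized = round((amplitude + 1) / 2 * 255)   # map to 0..255 (8 bits)
    samples.append(quantized)

print(len(samples), samples[0])  # 80 samples; first sample is 128 (midpoint)
```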
What is a bitmap?
A bitmap is an image or shape of any kind-a picture, a text character, a photo-that's composed of a
collection of tiny individual dots. A wild landscape on your screen is a bitmapped graphic, or simply a
bitmap. Remember that whatever you see on the screen is composed of tiny dots called pixels. When you
make a big swipe across the screen in a paint program with your computerized "brush," all that really
happens is that you turn some of those pixels on and some off. You can then edit that bitmapped swipe
dot by dot; that is, you can change any of the pixels in the image. Bitmaps can be created by
a scanner, which converts drawings and photographs into electronic form, or by a human artist (like you)
working with a paint program.
A computer screen is made up of thousands of dots of light, called pixels (short for picture elements). A
single pixel is composed of up to three rays of light, red, green, and blue, blended into a single dot
on-screen. By combining these rays and changing their intensity, virtually any color can be displayed
on-screen. The number of bits required to display a single pixel on-screen varies by the total number of
colors a particular monitor can display. The larger the number of possible colors, the larger the number
of bits required to describe the exact color needed. Regardless of the actual number of bits required, a bitmap
is a series of these bits stored in memory, which form a pattern when read left to right, top to bottom.
When decoded by the computer and displayed as pixels on-screen, this pattern forms the image of a
picture.
The simplest bitmaps are monochrome, which have only one color against a background. For these, the
computer needs just a single bit of information for each pixel (remember, a bit is the smallest unit of data
the computer recognizes). One bit is all it takes to turn the dot off (black) or on (white). To produce the
image you see, the bits get "mapped" to the pixels on the screen in a pattern that displays the image.
In images containing more than black and white, you need more than one bit to specify the colors or
shades of gray of each dot in the image. Multicolor images are bitmaps also. An image that can have
many different colors or shades of gray is called a "deep bitmap," while a monochrome bitmap is known
as a "bilevel bitmap." The "depth" of a bitmap-how many colors or shades it can contain - has a huge
impact on how much memory and/or disk space the image consumes. A 256-color bitmap needs 8 times
as much information, and thus disk space and memory, as a monochrome bitmap.
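The depth-to-storage relationship above is simple arithmetic; a small sketch (the 640 x 480 image size is an arbitrary example):

```python
# Sketch: memory needed for an uncompressed bitmap at different bit
# depths. A monochrome (bilevel) bitmap stores 1 bit per pixel; a
# 256-color "deep" bitmap stores 8 bits per pixel, so 8x the storage.

def bitmap_bytes(width, height, bits_per_pixel):
    total_bits = width * height * bits_per_pixel
    return total_bits // 8  # 8 bits per byte

mono = bitmap_bytes(640, 480, 1)    # bilevel bitmap
deep = bitmap_bytes(640, 480, 8)    # 256-color bitmap

print(mono, deep, deep // mono)  # 38400 307200 8
```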
The resolution of a bitmapped image depends on the application or scanner you use to create the image,
and the resolution setting you choose at the time. It's common to find bitmapped images with resolutions
of 72 dots per inch (dpi), 144 dpi, 300 dpi, or even 600 dpi. A bitmap's resolution is permanently fixed-
a bitmapped graphic created at 72 dpi will print at 72 dpi even on a 300 dpi printer such as the
LaserWriter. On the other hand, you can never exceed the resolution of your output device (the screen,
printer, or what have you); even though you scanned an image at 600 dpi, it still only prints at 300 dpi
on a LaserWriter, since that's the LaserWriter's top resolution.
You can contrast bitmapped images with vector or object-oriented images, in which the image is
represented by a mathematical description of the shapes involved. You can edit the shapes of an object
graphic, but not the individual dots. On the other hand, object-oriented graphics are always displayed or
printed at the maximum resolution of the output device. But keep in mind that an object-oriented graphic
is still displayed as a bitmap on the screen.
Bit-mapped fonts and bit-mapped graphics use pixels to form pictures or letters. However, because of
the number of bits required to encode a single pixel, bit-mapped fonts and graphics consume a great deal
of memory. As an analogy, trying to draw a perfect circle by coloring the squares on a piece of graph
paper demonstrates the problems inherent in this method of displaying text and graphics. Because a
computer screen is laid out in a grid of dots (pixels) like graph paper, distortion shows up along
the angled and curved lines in an image. This distortion is called "jaggies" or "aliasing."
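The graph-paper analogy can be demonstrated directly by rasterizing a circle onto a coarse grid of characters; the staircase pattern along the curve is the aliasing described above (grid size and radius are arbitrary):

```python
# Sketch of "jaggies": rasterize a perfect circle onto a coarse pixel
# grid, like coloring squares on graph paper. "#" marks an "on" pixel.

SIZE = 13
cx = cy = SIZE // 2   # circle center on the grid
radius = 5

grid = []
for y in range(SIZE):
    row = ""
    for x in range(SIZE):
        # Turn a pixel on if its center lies close to the ideal circle.
        dist = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
        row += "#" if abs(dist - radius) < 0.5 else "."
    grid.append(row)

print("\n".join(grid))  # the stair-stepped outline is the aliasing
```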
Pixel Resolution
The term resolution is often used as a pixel count in digital imaging, even though American, Japanese,
and international standards specify that it should not be so
used, at least in the digital camera field. An image of N pixels
high by M pixels wide can have any resolution less than N
lines per picture height, or N TV lines. But when the pixel
counts are referred to as resolution, the convention is to
describe the pixel resolution with the set of two positive
integer numbers, where the first number is the number of pixel
columns (width) and the second is the number of pixel rows
(height), for example as 640 by 480.
Another popular convention is to cite resolution as the total
number of pixels in the image, typically given as number of
megapixels, which can be calculated by multiplying pixel
columns by pixel rows and dividing by one million. Other
conventions include describing pixels per length unit or pixels
per area unit, such as pixels per inch or per square inch. None of these pixel resolutions are true
resolutions, but they are widely referred to as such; they serve as upper bounds on image resolution.
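The megapixel convention above reduces to one multiplication and one division; a quick sketch (the image sizes are arbitrary examples):

```python
# Pixel-count "resolution" conventions: width x height, and total
# megapixels (pixel columns x pixel rows, divided by one million).

def megapixels(columns, rows):
    return columns * rows / 1_000_000

print(megapixels(640, 480))     # a 640 x 480 image is ~0.31 megapixels
print(megapixels(1920, 1080))   # a 1920 x 1080 image is ~2.07 megapixels
```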
Below is an illustration of how the same image might appear at different pixel resolutions, if the pixels
were poorly rendered as sharp squares (normally, a smooth image reconstruction from pixels would be
preferred, but for illustration of pixels, the sharp squares make the point better).
Resolution units can be tied to physical sizes (e.g. lines per mm, lines per inch) or to the overall size of a
picture (lines per picture height, also known simply as lines, or TV lines). Furthermore, line pairs are
often used instead of lines. A line pair is a pair of adjacent dark and light lines, while a line counts both
dark lines and light lines. A resolution of 10 lines per mm means 5 dark lines alternating with 5 light
lines, or 5 line pairs per mm. Photographic lens and film resolution are most often quoted in line pairs
per mm.
32-bit colour
On a color monitor, each pixel has three dots arranged in a triad: one red, one green, and one blue dot.
Each dot can be driven by up to 8 bits, which makes a total of 24 bits per pixel. By combining the 256
intensity levels of each of the three color dots, 24-bit color gives you the awesome potential of
16.7 million colors on your screen (256 x 256 x 256 = 16,777,216). Many of these colors differ so slightly
that even the most acute observer couldn't tell the difference between them. Simply stated: 16 million
colors is more than enough. (How do you get black and white if there are three colored dots? If all dots
are on at full intensity, the pixel is white; if all dots are off, the pixel is black.)
Now, you will often hear of 32-bit color, though there aren't really 32 bits of color. The extra 8 bits
don't offer any additional colors; they provide capacity for masking and channeling (an alpha channel).
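The 24-bit arithmetic and the extra 8-bit channel can be sketched with simple bit packing (the packing layout below is one common ARGB convention, not something specified in the text):

```python
# Sketch of 24-bit color: 8 bits each for red, green, and blue gives
# 256 x 256 x 256 = 16,777,216 possible colors. "32-bit" color adds
# 8 more bits, commonly used for masking (an alpha channel), not color.

def pack_rgba(r, g, b, a=255):
    # Each channel is 0..255; shift them into one 32-bit integer (ARGB).
    return (a << 24) | (r << 16) | (g << 8) | b

print(256 ** 3)                      # 16777216 distinct 24-bit colors
white = pack_rgba(255, 255, 255)     # all dots on  -> white
black = pack_rgba(0, 0, 0)           # all dots off -> black
print(hex(white), hex(black))        # 0xffffffff 0xff000000
```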
DPI
DPI resolution refers to the clarity of an image due to the number of dots of ink that make up a picture
that is printed on paper. The term DPI (dots per inch) is one measure of resolution. Properly used, DPI
refers only to the resolution of a printer, describing how many dots of ink will be physically applied to a
piece of paper per square inch.
Dots, Pixels, or Something Else?
Other initials you will encounter that refer to resolution are PPI (pixels per inch), SPI (samples per
inch), and LPI (lines per inch). While all of these acronyms describe resolution, each applies to a
specific device or process.
1. PPI (pixels per inch) describes the number of actual pixels per inch displayed on a desktop screen,
monitor, TV, etc.
2. SPI (samples per inch) refers specifically to the number of samples taken in one linear inch in a scanner.
3. LPI (lines per inch) describes the distance that printed lines (each made up of individual dots) are from
each other. This term is generally used only in commercial printing.
Whether printed on paper or displayed on your computer screen, a picture is made up of tiny little dots.
There are color dots and there are black dots. In black & white printing, the size and shape of the black
dots and how close or far apart they are printed creates the illusion of shades of gray.
The more little dots that are used (up to a point) the clearer the picture.
The more dots in a picture, the larger the size of the graphic file.
Resolution is measured by the number of dots in a horizontal by vertical (or square) inch.
Each type of display device (scanner, digital camera, printer, computer monitor) has a maximum number
of dots it can process and display no matter how many dots are in the picture.
A 600 DPI laser printer can print up to 600 dots of picture information in an inch. The number of pixels
per inch displayed on a computer monitor can vary depending on the size of the display; a
high-resolution 13-inch display, for example, can exceed 200 PPI.
When a picture has more dots than the display device can support, those dots are wasted. They increase
the file size but don't improve the printing or display of the picture, because the resolution is too high
for that device.
A photograph scanned at both 300 SPI and at 600 SPI will look the same printed on a 300 DPI laser
printer. The extra dots of information are "thrown out" by the printer but the 600 DPI picture will have
a larger file size on the computer that it is saved to.
When a picture has fewer dots than the display device can support, the picture will not be very clear or
sharp. If you print a 72 PPI picture to a 600 DPI printer, it won't usually look as good as it does on the
computer monitor. The printer doesn't have enough dots of information to create a clear, sharp image.
(However, today's inkjet home printers do a pretty decent job of making low-resolution images
look good enough much of the time.)
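The relationship between picture dots and device dots described above is straightforward arithmetic; a sketch following the text's 300 DPI laser printer example (the 4 x 6 inch print size is an arbitrary illustration):

```python
# Sketch: how many pixels a picture needs to print at a device's full
# resolution, and why extra scan samples are wasted.

def pixels_needed(inches_wide, inches_tall, printer_dpi):
    return (inches_wide * printer_dpi, inches_tall * printer_dpi)

# A 4 x 6 inch print on a 300 DPI printer:
print(pixels_needed(4, 6, 300))          # (1200, 1800)

# Scanning the same photo at 600 SPI yields 4x the samples, but a
# 300 DPI printer can only use a quarter of them; the rest are wasted.
scan_600 = (4 * 600) * (6 * 600)
usable   = (4 * 300) * (6 * 300)
print(scan_600 // usable)                # 4
```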
An image appears on screen when electron beams strike the surface of the screen in a zig-zag pattern. A
refresh rate is the number of times a screen is redrawn in one second and is measured in Hertz (Hz).
Therefore, a monitor with a refresh rate of 85 Hz is redrawn 85 times per second. A monitor should be
"flicker-free," meaning that the image is redrawn quickly enough that the user cannot detect flicker, a
source of eye strain. Today, a refresh rate of 75 Hz or above is considered to be flicker-free.
The number of horizontal lines on the screen depends upon the monitor's resolution. If a monitor is set
to a resolution of 1024 x 768 then there are 768 horizontal lines (1024 is the number of pixels on one
line). For a monitor set to a 1280 x 1024 resolution, there are 1024 horizontal lines.
Additionally, the time it takes for the electron beam to return to the top of the screen and begin scanning
again must be taken into account. This vertical retrace takes roughly 5% of the time needed to scan the
entire screen, so the theoretical rate is multiplied by 0.95 to calculate the maximum usable refresh rate.
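The calculation above can be sketched as follows; the 80 kHz horizontal scan rate is an illustrative figure, not one given in the text:

```python
# Sketch of the refresh-rate arithmetic: the monitor's horizontal scan
# rate (scan lines drawn per second) divided by the number of vertical
# lines, with ~5% of the time lost to vertical retrace.

def max_refresh_hz(horizontal_scan_rate, vertical_lines, retrace_loss=0.05):
    return horizontal_scan_rate / vertical_lines * (1 - retrace_loss)

# A monitor at 1024 x 768 (768 scan lines) with an 80 kHz scan rate:
print(round(max_refresh_hz(80_000, 768)))   # 99 Hz, comfortably flicker-free
```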
Interlacing
Early monitor designs were based on TV standards, in which alternate rows of pixels are scanned in turn.
This is acceptable for moving images but results in screen flicker for static images. Non-interlaced displays
are normally used in the graphic arts, although, in some systems, higher refresh rates and addressable
resolutions are achieved by interlacing the display.
Since a color monitor is an analog device, the colors it can display vary continuously between the
minimum and the maximum luminance levels for each color. The constraints on addressability and color
depth are limitations of the graphics board and the driver software responsible for converting the
digital image to analog voltages, rather than of the monitor itself. A good graphics board will utilize the
monitor to the best effect, speeding up screen redraws and enabling more accurate color judgments to be
made.
Storage device
Alternatively referred to as digital storage, storage, storage media, or storage medium, a storage
device is any hardware capable of holding information either temporarily or permanently.
There are two types of storage devices used with computers: a primary storage device, such as
RAM, and a secondary storage device, like a hard drive. Secondary storage can be removable, internal,
or external.
Today, magnetic storage is one of the most common types of storage used with computers. This
technology is found mostly in hard disk drives and hybrid hard drives; examples of magnetic media
include:
Floppy diskette
Hard drive
Magnetic strip
SuperDisk
Tape cassette
Zip diskette
Another common storage type is optical storage, which uses lasers and light to read and write data.
Examples include:
Blu-ray disc
CD-ROM disc
CD-R and CD-RW disc.
DVD-R, DVD+R, DVD-RW, and DVD+RW disc.
Flash memory has been replacing magnetic and optical media as it becomes cheaper, since it is a more
efficient and reliable solution.
Storing data online and in cloud storage is becoming popular as people need to access their data from
more than one device.
Cloud storage
Network media
Paper storage
Early computers had no method of using any of the above technologies for storing information and had
to rely on paper. Today, these forms of storage are rarely used or found.
OMR
Punch card
Question bank
1. Draw the block diagram of a digital workstation system and explain the function of each individual
component.
2. Draw the diagram of a desktop processor and show its various elements.
3. What are the basic functions of a microprocessor?
4. What are raster scan and random scan display systems?
5. What are pixels and resolution?
6. Explain the working principle of various storage systems.
7. What are the performance criteria of various storage devices?