
YOUR GUIDE TO SUCCESS AND ACCOMPLISHMENT

STUDY GUIDE

Cert-SK0-002
CompCert: Server +

Certification Exam Preparation

www.transcender.com/club
CompTIA SK0-002 Study Guide


© 2006 Transcender, a Kaplan IT Company. All rights reserved. No part of this study
guide may be used or reproduced in any manner whatsoever without written permission
of the copyright holder. The information contained herein is for the personal use of the
reader and may not be incorporated in any commercial programs, other books,
databases, or any kind of software without written consent of the publisher. Making
copies of this study guide or any portion for any purpose other than your own is a
violation of United States Copyright laws.

Information has been obtained by Transcender from sources believed to be reliable.


However, because of the possibility of human error by our sources, Transcender, or
others, Transcender does not guarantee the accuracy, adequacy, or completeness of any
information in this study guide and is not responsible for any errors or omissions or the
results obtained from use of such information.

CompTIA® is a registered trademark of CompTIA in the United States and/or other countries.

Transcender
500 Northridge Road
Suite 240
Atlanta, GA 30350
www.transcender.com

1-866-639-8765 U.S. & Canada


678-277-3200

Publication ID: tsgsk00021

www.transcender.com

2
CompTIA SK0-002 Study Guide

Contents

Contents
About Your Transcender Study Guide
General Server Hardware Knowledge
System Bus Architectures
Server Types
Memory
Small Computer System Interface and AT Attachment
Features of Fibre Channel Hardware, Internet SCSI, and Fibre Channel over Internet Protocol
RAID
Server Fault Tolerance, Scalability, Availability, and Clustering
Processor Subsystem
Storage Area Network and Network Attached Storage
Review Checklist: General Server Hardware Knowledge
Installation and Configuration of Server Components
Server Installation
Ethernet, Cable Types and Connectors, Cable Management, and Rack Mount Security
Power On Self-Test and Server Room Environment
Review Checklist: Installation and Configuration of Server Components
Configuration
Firmware and RAID Configuration
UPS and External Device Installation
NOS and Service Tool Installation
Server Baseline and Management
Review Checklist: Configuration
Upgrading Server Components
Performing Backups and Adding Processors
Adding Hard Drives and Memory
Upgrading BIOS/Firmware, Adapters, and Peripheral Devices
Upgrading System Monitoring Agents, Service Tools, and a UPS
Review Checklist: Upgrading Server Components
Proactive Maintenance
Proactive Maintenance
Review Checklist: Proactive Maintenance
Server Environment
Review Checklist: Server Environment
Troubleshooting and Problem Determination
Troubleshooting and Problem Determination
Review Checklist: Troubleshooting and Problem Determination
Disaster Recovery
Disaster Recovery
Review Checklist: Disaster Recovery
Test Taking Strategies


About Your Transcender Study Guide


IT professionals agree! Transcender has consistently been voted the industry's #1 practice
exam. This Study Guide complements your TranscenderCert™ practice exam.

The Study Guide is objective-driven and contains a variety of tools to help you focus your study
efforts. Each Study Guide contains structured sections to help you prepare for your certification
exam:
• Scope :: identifies the learning objectives for each section
• Focused Explanation :: provides definitions, in-depth discussions and examples
• Review Checklist :: highlights the key learning points at the end of each major section

Additional sections to further assist you are located at the end of each Study Guide:
• Test Taking Strategies
• General Tips
• Explanation of Test Item Types
The following study model will help you optimize your study time.

Prepare to pass: develop a study plan, assess your knowledge, focus on weak areas, and track your progress.

Develop a Study Plan
• Start early, at least 6 weeks out
• Don’t try to cram
• Set aside specific study times
• Use a disciplined approach so you can thoroughly prepare
• Stick to your plan

Assess your knowledge
• Assess your current knowledge level by objective
• Take a Transcender practice exam using Preset Experience
• The objective-based score report shows you the areas where you are strong and the areas where you need to focus your study efforts

Focus on weak areas
• Read the Study Guide by objective
• Use the practice exam in Optimize Experience mode
• Study the test items by objective
• Use the included TranscenderFlash cards to review key concepts
• Use your favorite references to get further information on complex material

Track your progress
• Take a Transcender practice exam using Preset Experience again
• If you didn’t score 100%, go back to your study plan and focus on weak areas
• Study those objective areas where you didn’t score 100%
• Keep practicing until you consistently score 100% in all areas

Transcender’s commitment to product quality, to our team and to our customers continues to
differentiate us from other companies. Transcender uses an experienced team of certified
subject-matter experts, technical writers, and technical editors to create and edit the most in-
depth and realistic study material. Every Transcender product goes through a rigorous, multi-
stage editing process to ensure comprehensive coverage of exam objectives. Transcender
study materials reinforce learning objectives and validate knowledge so you know you’re
prepared on exam day.


General Server Hardware Knowledge


System Bus Architectures


Scope
• Know the characteristics, purpose, function, limitations, and performance of system bus architectures.

• Know the characteristics of adapter fault tolerance.

Focused Explanation
A computer bus is a set of connections that carry similar signals and allow the capabilities of the computer
to be extended. There are a variety of system buses, each characterized by an electrical and mechanical
specification and each providing one or more bus connectors. Normally, a compatible expansion adapter
is plugged into the bus connector to provide a capability boost to the computer. Typical expansion
adapters include sound cards, network interface cards, and bus interface cards that provide Small
Computer System Interface (SCSI), USB, Integrated Drive Electronics (IDE) interfaces, and so on.

The important bus architectures from the viewpoint of a server are Enhanced Industry Standard
Architecture (EISA) and Peripheral Component Interconnect (PCI).

Enhanced Industry Standard Architecture

Enhanced Industry Standard Architecture (EISA) is an old system bus that is based on an even older bus
called Industry Standard Architecture (ISA). EISA is a 32-bit bus, which implies that EISA can transfer 32
bits of data at a time. The bus terminates in a set of expansion connectors on the motherboard of a
server. Adapter boards, such as NICs, display adapters, or SCSI adapters, can be plugged into the
expansion connectors. Expansion connectors are often called slots because adapter boards are slotted
into the connector.

Note: The terms bus connector, expansion connector, expansion slot, and slot refer to the same thing.

EISA slots operate at 8.33 MHz, which provides compatibility with older ISA boards. The electrical
specifications of EISA also allow you to plug ISA boards into EISA slots. EISA slots provide a bandwidth of
about 32 MBps, allow sharing of interrupts, and allow bus mastering.

EISA is configured using software provided by the motherboard vendor. EISA is an obsolete bus.
Although you may still come across an operational server that has EISA slots, no EISA boards are
currently included in newly manufactured computers.


Peripheral Component Interconnect

Peripheral Component Interconnect (PCI) is a newer bus that has become the dominant industry standard, and
currently all motherboards provide PCI expansion slots. Compared to EISA, PCI slot connectors are
faster and physically smaller in size. Even a modern motherboard equipped with PCI slots is likely to
contain an internal, invisible ISA bus. This internal ISA bus connects to slow peripherals such as the
keyboard, floppy drive, and other legacy ports and interfaces that will not benefit from the fast speeds
provided by PCI.

Internally, the PCI bus connects to the processor-memory subsystem through a bridge on one side and
connects to the legacy internal ISA bus on the other.

The PCI bus will expose a set of interface connectors or slot connectors on the motherboard. Most
servers will have several slot connectors. The slot connectors are oriented such that an expansion board
can be inserted and any ports or connectors on the expansion board will be available to the outside world.

The PCI bus is available in 32-bit (4-byte) and 64-bit (8-byte) widths. This width indicates the
number of bits of data transferred as a single unit, that is, the word size of the bus. The PCI bus operates
at 33 MHz and 66 MHz, going up to 250 MHz in the latest specifications. Typical data transfer rates are
133 MBps for a 32-bit/33-MHz bus and 533 MBps for a 64-bit/66-MHz bus.
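These transfer rates follow directly from the bus width and clock: bytes per second equals (width in bits ÷ 8) × clock rate. A quick sketch (an illustrative helper, not from the guide; the actual PCI clocks are 33.33 and 66.66 MHz, which is why the rounded figures are 133 and 533 rather than 132 and 528):

```python
def pci_bandwidth_mbps(width_bits: int, clock_mhz: float) -> float:
    """Peak PCI transfer rate in MBps: one bus-width word per clock cycle."""
    return (width_bits / 8) * clock_mhz

print(round(pci_bandwidth_mbps(32, 33.33)))  # 133
print(round(pci_bandwidth_mbps(64, 66.66)))  # 533
```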

Bus Mastering

PCI supports bus mastering. Devices on a PCI bus can take control of the bus and directly exchange data
without involving the CPU in the data transfer. This serves to reduce the load on the CPU during data
transfers. The device initiating the bus master transfer is the initiator or bus master. The target device for
the transfer is the target or bus slave. During bus master transfers, the bus master takes exclusive control
of the PCI bus, bypassing the CPU. After transferring data at a high speed, the bus master relinquishes
control of the bus and allows the CPU to regain control of the bus.

In general, the first two or three PCI slots found on a motherboard are bus-mastering slots. In most cases,
these slots are reserved for high performance adapters that can benefit from bus mastering.

PCI Interrupts

Devices on a PCI bus use interrupts. An interrupt request (IRQ) is a hardware signal to the CPU that
results in the CPU suspending current activity and executing the code associated with the IRQ on priority.
The code associated with the IRQ is called the IRQ service routine. A computer has a limited number of
IRQs that are allocated to devices needing IRQs for operation.

PCI uses its own internal interrupts, usually labeled INTA# through INTD#. Usually four IRQs (IRQs
9, 10, 11, and 12) are mapped to the internal PCI interrupts. If more than four PCI devices are used, then
multiple PCI interrupts can be mapped to a single IRQ to be shared by the devices.

Note: On a motherboard, IRQs 9 through 15 are normally available for assignment to devices. Four of these
are used for PCI.
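Interrupt sharing can be pictured as a many-to-one mapping from PCI interrupt lines to system IRQs. The sketch below uses hypothetical device names and wiring, purely for illustration:

```python
# Hypothetical mapping of PCI interrupt lines (INTA#-INTD#) to system IRQs.
pci_int_to_irq = {"INTA#": 9, "INTB#": 10, "INTC#": 11, "INTD#": 12}

# Six devices but only four interrupt lines, so lines are reused and the
# corresponding IRQ is shared by every device wired to that line.
devices = {
    "NIC 1": "INTA#", "SCSI adapter": "INTB#", "NIC 2": "INTC#",
    "RAID controller": "INTD#", "Sound card": "INTA#", "Modem": "INTB#",
}

# Which devices must share IRQ 9? Every device wired to INTA#.
sharers = [d for d, line in devices.items() if pci_int_to_irq[line] == 9]
print(sharers)  # ['NIC 1', 'Sound card']
```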


Hierarchical and Peer PCI buses

A server need not use a single PCI bus. In practice, more than one PCI bus is provided so that a larger
number of slots are available. This allows a larger number of plug-in expansion boards to be used on the
computer.

Multiple PCI buses are provided as a set of hierarchical or peer PCI buses.

In a hierarchical PCI bus, a bridge connects the primary PCI bus to the CPU-memory subsystem. The
secondary PCI bus is connected to the primary PCI bus. Any PCI device-to-CPU transfers on the
secondary PCI bus must flow through the primary PCI bus. This could lead to a bottleneck at the primary
PCI bus. For bus master transfers where the initiator and target are both in the primary PCI bus, or both in
the secondary PCI bus, there is no degradation of performance. The general rule to improve performance
in a hierarchical PCI bus computer is to use the primary PCI bus, and move on to the secondary PCI bus
after the first bus is populated with devices.

In the peer PCI bus arrangement, each PCI bus connects independently to the CPU-memory subsystem. Traffic in one bus will not affect the traffic in
the other. In fact, load balancing can be achieved by dividing the data transfer load between the two
buses. For example, a technique called adapter teaming uses two network interface cards (NICs) to
improve the total network traffic throughput of a server through load balancing. Using one NIC in the first
PCI bus and one NIC in the second PCI bus would improve performance.

Note: Adapter teaming works best with switched networks where the entire bandwidth of the network is
available to the teamed adapters. All adapters use the same IP address.

The general rule to improve performance in a peer PCI bus computer is to spread the load between the
buses.

Peer PCI is more expensive to implement but offers better performance.

PCI Hot Swap

Hot swap refers to devices that can be removed from a system without having to power down the system.
The system in question is most often a server, although it can be a RAID array or similar device.
Examples of hot-swap devices are hard disks, CD-ROM drives, and power supply units. A hot swap
device may be safely connected if the adapter that drives the device supports hot swapping. Such
adapters allow the bus to be disconnected between the device and the computer so that the device can
be connected or disconnected without switching transients affecting the computer.

Note: Switching transients are surges and dips of current that occur when any electrical device is started
or stopped.

The sophistication of hot swap varies from implementation to implementation. In some implementations, a
command must be executed from the keyboard before swapping. This informs the operating system of
the impending hot swap. Some implementations provide a hardware-switch beside the PCI slot. Pressing
the switch signals the operating system of the impending hot swap. Still other implementations allow you
to install redundant hardware. If one item fails, hot swap powers down the failed device and powers up
the standby device. This is the highest level of sophistication in hot swap.

PCI Hot Plug

PCI hot plug technology is similar to hot swap. Using a switch on the hot plug slot turns off the power to
that slot, which enables an adapter to be plugged in. Using the switch again signals the computer to power up
the slot.

In general, the technology for hot plug is built into the PCI slots and chipset of the computer. The adapter
that is being hot plugged can be a normal PCI adapter. Hot-swap components, in contrast, are specially
manufactured to withstand the rigors of being plugged into and removed from live slots. PCI hot plug is
an industry standard.

PCI, PCI-X, and PCI-Express

The PCI bus has evolved over time and is governed by PCI specifications released periodically. The
original PCI 1.0 specification was released in 1992. Thereafter, new versions were released periodically
that provided different features and upgrades. In 1999, PCI-X (short for PCI extended) 1.0 was released,
defining 133 MHz speeds and other improvements. PCI-X 2.0 was released in 2002, defining 266 MHz and
533 MHz effective speeds and more efficient protocols. Designed to replace both PCI and PCI-X, PCI-Express
1.0 defined a packet-based transfer bus operating from 250 MBps to 8000 MBps. It is also software-compatible
with older PCI.

Table 1-1 lists the speeds of the different PCI standards.

PCI Standard     Bus Width (bits)   Bus Speed (MHz)   Transfer Rate (Mtransfers/s)   Bus Speed (MBytes/s)

PCI 32           32                 33                33                             133
PCI 66 MHz       32                 66                66                             266
PCI 64           64                 33                33                             266
PCI 64/66 MHz    64                 66                66                             533
PCI-X 66         64                 66                66                             533
PCI-X 133        64                 133               133                            1066
PCI-X 266        64                 133               266                            2132
PCI-X 533        64                 133               533                            4266
PCI-Express      1 to 32 (lanes;    250 (per lane)    250 (per lane)                 250 to 8000
                 see below)

Table 1-1: PCI Bus Versions and Characteristics


PCI-X buses are able to subdivide bus bandwidth and share it among various PCI devices. If both PCI
and PCI-X cards are on the same bus, the bus speed will decrease to the speed of the slowest card.

PCI-Express buses define links and lanes. In PCI-Express, the logical connection between a PCI device
and the PCI bus is called a link. A single link is composed of one or more lanes. A lane is a one-bit full-
duplex connection: the lane can carry one bit of information in each direction simultaneously. Information
transferred through a lane is converted into packets and streamed through the lane. Depending on the
implementation of PCI-Express, a link can use between one and 32 lanes. A single-lane link provides a
bandwidth of 250 MBps, and a 32-lane link provides 8000 MBps of bandwidth.
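Link bandwidth therefore scales linearly with lane count. A sketch, using the 250 MBps per-lane figure quoted above for PCI-Express 1.0:

```python
PER_LANE_MBPS = 250  # PCI-Express 1.0: 250 MBytes/s per lane, each direction

def pcie_link_bandwidth(lanes: int) -> int:
    """Peak bandwidth in MBps of a PCI-Express 1.0 link with the given lane count."""
    if not 1 <= lanes <= 32:
        raise ValueError("PCI-Express links use between 1 and 32 lanes")
    return lanes * PER_LANE_MBPS

print(pcie_link_bandwidth(1))   # 250
print(pcie_link_bandwidth(32))  # 8000
```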

Intelligent I/O

Intelligent I/O (I2O) specification was designed to speed up I/O operations on computers by implementing
a standard messaging format between devices. I2O specified three layers of software. The OS Services
Module (OSM) mediated between the operating system of the computer and the next layer of I2O. This
layer abstracted the computer operating system for I2O and thus provided operating system
independence. The OSM was custom written for the target operating system. The I2O messaging layer,
which mediated between the OSM and the next I2O layer, provided device independence. The Hardware
Device Module is particular to specific hardware but is independent of the operating system.

While there was a lot of interest when the I2O Special Interest Group was created, I2O proved to be a non-
starter. Now that the I2O Special Interest Group has disbanded, I2O is considered completely dead.

Adapter Fault Tolerance, Teaming, and Load Balancing

The failure of a NIC in a server is a major event: a server is a critical component of a network, and a
failed NIC prevents the server from functioning on the network. Some manufacturers offer fault-tolerant
solutions in the form of NICs that have several ports and special drivers. If a port fails, the driver
automatically switches over to another port in a manner that is transparent to the operating system. This
mechanism is sometimes called resilient server links.

Adapter teaming refers to the use of multiple NICs grouped together. The group uses a single IP address
and sometimes a single MAC address. The combined throughput of the NICs is the sum of the data rates
of the team. The end result of load balancing is an improved total throughput for the server.
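The failover and teaming behavior described above can be sketched as follows. This is a simplified model with invented names; real teaming drivers operate below the operating system, not in application code:

```python
class ResilientLink:
    """Toy model of a multi-port NIC driver: traffic uses the first healthy port."""

    def __init__(self, port_mbps):
        self.port_mbps = port_mbps          # data rate of each port, in Mbps
        self.failed = set()                 # indices of failed ports

    def fail(self, port):
        self.failed.add(port)

    def active_port(self):
        # Switch-over is transparent: callers just ask for the active port.
        for i in range(len(self.port_mbps)):
            if i not in self.failed:
                return i
        raise RuntimeError("all ports failed")

    def team_throughput(self):
        # With adapter teaming, throughput is the sum of the healthy ports' rates.
        return sum(r for i, r in enumerate(self.port_mbps) if i not in self.failed)

nic = ResilientLink([1000, 1000])
print(nic.active_port(), nic.team_throughput())  # 0 2000
nic.fail(0)
print(nic.active_port(), nic.team_throughput())  # 1 1000
```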


Server Types
Scope
• Identify the basic purpose and function of various types of servers.

Focused Explanation
Characteristics of Servers

Within an organization, servers perform different roles. For instance, servers can serve as a gateway, a
file server, or a database server. Each of these roles requires a different functionality, which is provided
by the software loaded on the server and the configuration of the server.

Server as a Gateway or Router

A gateway is a device that segregates two networks in terms of traffic. In the most basic sense, a
gateway serves as a routing gateway (or more simply, a router), connecting two networks or network
segments. Traffic meant for one network will not cross over to the other, which reduces the traffic on the
second network. If the networks were connected directly, without a router, the two networks would share
a common set of network traffic, which would be needlessly high.

A routing gateway works by looking at the IP addresses on packets, operating at layer 3 of the OSI
model. A server acting as a gateway uses two NICs, one for each network connected to the gateway.
Each NIC will need settings appropriate to the network to which it connects. The operating system will
have to be configured to allow the server to act as a routing gateway.

Note: A computer equipped with multiple NICs is known as a multi-homed host.
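The routing gateway's core decision is choosing the outbound NIC from the destination IP address on each packet. A minimal sketch, with hypothetical networks and NIC names (a real router consults a full routing table with longest-prefix matching):

```python
import ipaddress

# Hypothetical multi-homed server: one NIC per connected network.
routes = {
    ipaddress.ip_network("192.168.1.0/24"): "NIC A",
    ipaddress.ip_network("10.0.0.0/8"): "NIC B",
}

def choose_nic(dest_ip: str) -> str:
    """Layer-3 decision: forward out of the NIC attached to the destination network."""
    addr = ipaddress.ip_address(dest_ip)
    for network, nic in routes.items():
        if addr in network:
            return nic
    raise LookupError("no route to host")

print(choose_nic("10.1.2.3"))      # NIC B
print(choose_nic("192.168.1.50"))  # NIC A
```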

A gateway can also serve as a protocol gateway. In this case, it would connect two networks that use
different underlying protocols. An instance of this would be an IP-to-IPX gateway, which connects an IP
network to an IPX network. The NICs will have to be appropriately configured, and the operating system
will have to be configured to provide the protocol gateway services.

Server as a Bridge

A server working as a bridge is used for segregating traffic between network segments. This is similar to
the function of a routing gateway, except that a bridge works at layer 2 of the OSI model, where IP
addresses do not exist. Bridging decisions are taken on the basis of hardware addresses of sender and
target on each packet. A bridge will transfer packets across to another network segment only if the
recipient, or target, of the packet is on the other network segment. If the target of the packet is on the
same network segment as the source, or sender, then the bridge does not allow the packet to cross.
The operating system must be appropriately configured or additional applications must be loaded on a
server to provide bridging services.

Note: A hardware address is more correctly called the media access control (MAC) address.
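The layer-2 counterpart of the routing decision is a check on hardware (MAC) addresses. A minimal sketch with made-up addresses; a real bridge learns this table by observing traffic:

```python
# Which MAC addresses live on which segment (learned by a real bridge; fixed here).
segment_of = {
    "00:0a:95:9d:68:16": "segment 1",
    "00:1b:44:11:3a:b7": "segment 2",
}

def should_forward(src_mac: str, dst_mac: str) -> bool:
    """Forward across the bridge only if sender and target are on different segments."""
    return segment_of[src_mac] != segment_of[dst_mac]

print(should_forward("00:0a:95:9d:68:16", "00:1b:44:11:3a:b7"))  # True
print(should_forward("00:0a:95:9d:68:16", "00:0a:95:9d:68:16"))  # False
```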


Firewall Server

A firewall server protects an internal network from potential threats from an external network. When an
internal network is connected to an external network, there is a chance that people and programs with
malicious intent will invade the internal network and cause damage. A firewall placed between the two
networks at the point of connection serves to protect the internal network.

Three broad types of firewalls exist:

• Packet-filtering firewall: functions at the network and data-link layers of the OSI model. Packet-
filtering firewalls examine each data packet that passes through the firewall. Based on the defined
criteria, the packets are then filtered by the firewall. Packet filters can be configured based on the
IP addresses, port number, protocol ID, or MAC address of the packet. For example, if you
configure the packet-filtering firewall based on IP addresses, then the firewall will allow or deny
traffic based on IP addresses.

• Circuit-level firewall: functions at the transport layer of the OSI model. A circuit-level firewall
examines the TCP and UDP packets before opening a connection. After the connection is
opened, the TCP and UDP packets are allowed to pass through the firewall. Circuit-level firewalls
maintain a table of opened sessions. If the session information matches an entry in the session
table, then the data for that session is allowed to pass through the firewall. The session table is
dynamic. The moment a session is closed, the entry is removed from the session table.

• Application gateway firewall: functions at the application layer of the OSI model and implements the
rules on which traffic control is based. The application gateway firewall is an application-based
firewall that can also perform user authentication.

Some other methods of filtering also exist; these lie between the types outlined above. The operating
system must be appropriately configured, or additional applications must be loaded on a server, to
provide firewall services.

Proxy Server

A proxy server intercepts all requests made by clients from within a private network and forwards the
requests out to the Internet. A proxy server provides enhanced performance and enhanced security
through filtering and through hiding internal addresses. A proxy application must be installed on the
server to provide proxy services.

The proxy server maintains a cache of frequently accessed pages. If a request calls for a page that can
be found in the cache, then the proxy server preferentially serves out the page from the cache. If the page
cannot be found in the cache, then the proxy allows the request to go out to the Internet. When multiple
users are accessing the Internet, a proxy server can dramatically improve performance.
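The cache-first behavior can be sketched as follows. The fetch function here is a stand-in for the real outbound request, and a real proxy would also honor page expiry and cache-control headers:

```python
cache = {}  # URL -> page content

def fetch_from_internet(url: str) -> str:
    # Stand-in for the real outbound HTTP request.
    return f"<content of {url}>"

def proxy_request(url: str) -> str:
    """Serve from cache when possible; otherwise fetch the page and cache it."""
    if url in cache:
        return cache[url]          # fast path: no Internet round trip
    page = fetch_from_internet(url)
    cache[url] = page
    return page

proxy_request("http://example.com/")   # first request goes out to the Internet
print("http://example.com/" in cache)  # True: later requests are served locally
```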

Rules can be set on the proxy server to disallow access to certain sites, if so required. A proxy server
performs network address translation (NAT). The internal IP address of a client computer is substituted
with that of the proxy server on outbound packets. For inbound packets, the proxy server refers to a state
table and substitutes back the address of the internal client that originally made the request. The IP
addresses of internal hosts never appear in Internet traffic, and this provides a level of security.

Database Server

This is a server dedicated to running a database management system (DBMS). In operation, a network
client queries the database, and the resulting responses are sent back to the client computer. A DBMS
application must be installed on the server so that it may act as a database server.

Client/Server

A client is a networked computer that requests a service. A server is a networked computer that provides
a service. The client and the server are software processes and can reside on the same physical
computer.

There are two types of client/server architectures. A two-tier architecture is one in which the processing is
split between the client and the server. The efficiency of a two-tier application falls when the number of
clients is above 100. A three-tier architecture is one in which a third layer mediates between the client and
the server. The middle tier can be an application server, a transaction-processing monitor, a message
server, and so on. In general, a three-tier application scales better than a two-tier application. However, a
three-tier application may require more coding effort.

Application Server

This runs an application that must be used by a client computer or a user. A single copy of the application
resides on the application server, and clients may execute multiple instances of it. As far as the user is
concerned, the application appears to be running on the client computer. Upgrading the application with
patches and service packs is easy because only a single copy has to be upgraded.

Three types of application servers are common. A dedicated application server is a part of a three-tier
client/server application. The dedicated application server forms the middle tier, and its sole function is to
provide application services. Common uses of dedicated application servers are serving web pages and
hosting middleware. The middle tier reads data from the database on behalf of the client, manipulates the
data, and sends it on to the client.

A distributed application server is also part of a client/server application. Instead of a single server,
multiple servers access a back-end database. This is useful when applications must run in a WAN
environment. In this case, the client can connect to a conveniently located application server, which will
then provide services to the client.

A peer-to-peer application is not a client/server application. Each peer runs the application, and peers
exchange information when required. In effect, each peer acts as both a server and a client. Examples of
peer-to-peer applications are multiplayer games and file-sharing applications, such as Kazaa.


Mail Server

This is a server that stores incoming e-mail for users and sends outgoing e-mail from users. In most
cases, the mail server will be connected to the Internet. Generally, mail servers will run Simple Mail
Transfer Protocol (SMTP) services for outgoing mail, and Post Office Protocol version 3 (POP3) or
Internet Message Access Protocol (IMAP) services for incoming mail.

FTP Server

This is a server that allows files to be uploaded or downloaded using a file transfer service called File
Transfer Protocol (FTP). A client can use an FTP client program to connect with an FTP server. Normally
a login ID and password must be supplied to the server to connect. The client can upload files to and
download files from the FTP server. The FTP service must be running on an FTP server.

SNA Server

This provides connectivity to IBM mainframe and AS/400 servers. The SNA server is essentially a
protocol gateway between an SNA network and a TCP/IP-based PC network.

RAS Server

This allows remote clients to dial up and establish a point-to-point connection with a remote access server
(RAS) on a LAN. On successfully establishing a connection, the client has access to resources on the
LAN. RAS services must be running on a RAS server for it to work.

File and Print Server

This allows users to access files and print services on the network. The print and file server is the
repository of files and also provides networked print services. Sometimes separate servers provide each
of these functions. When configuring a print server, you will need to install the appropriate printer drivers
for any operating systems that may need access to the printer.

Fax Server

This allows users to send and receive fax messages through the network. The fax server is connected to
a phone line through a modem. The fax server appears on the client computer as a special printer. To
send out a fax, a user will usually print the document on the special printer. This sends the job to the fax
server, which will render the document and transmit it through the modem. On receiving a fax, the server
can be configured to forward the fax as an email message to the correct user. The operating system fax
service or a third-party fax service must be running on the server.

DNS Server

A HOSTS file is a file that resides on a computer and contains host name to IP address mappings.
Using DNS eliminates the need to update every copy of the HOSTS file. DNS performs name resolution
for hosts on a TCP/IP network, where textual host names must be translated into the IP
addresses that identify the hosts. The DNS server maintains a database of host name to IP address
mappings, and the DNS protocol updates the mappings periodically through a hierarchical network of DNS
servers. On receiving a name service request, a DNS server returns the IP address corresponding to a
host name to the requesting computer. The DNS service must be running on the server in order for DNS to
work.
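At its core, the database a DNS server maintains behaves like a lookup table from names to addresses. The toy resolver below illustrates that idea only; the names and addresses are hypothetical examples, and real DNS involves recursive queries across the server hierarchy:

```python
# Toy name resolver illustrating the host-name-to-IP-address mapping
# a DNS server maintains. Names and addresses are hypothetical.
ZONE = {
    "www.example.com": "192.0.2.10",
    "mail.example.com": "192.0.2.25",
}

def resolve(host_name: str) -> str:
    """Return the IP address for a host name, mimicking a DNS lookup."""
    try:
        return ZONE[host_name.lower()]
    except KeyError:
        # A real DNS server would answer NXDOMAIN for an unknown name
        raise LookupError(f"name not found: {host_name}")
```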

WINS Server

An LMHOSTS file is a flat file that resides on a computer and contains NetBIOS name to IP address
mappings. WINS eliminates the need to update every copy of the LMHOSTS file. WINS performs name
resolution for hosts on a Windows network, where textual NetBIOS names must be
translated to IP addresses. The WINS server maintains a database of NetBIOS name to IP address
mappings. By default, older Windows computers on a network used NetBIOS names that needed
translating to IP addresses, and the WINS service running on a WINS server performed this
task. However, Windows Server 2003 uses DNS by default instead of WINS (although the WINS service
can be installed for legacy applications).

DHCP Server

In a TCP/IP LAN environment, every computer needs to have a unique IP address. You can assign an IP
address to a computer using two different methods. Static IP addressing requires configuring an IP
address, a subnet mask, a default gateway, and a DNS server address on a computer. Assigning static IP
addresses to all hosts on a network is laborious unless the network is small. Dynamic IP addressing using
DHCP automatically assigns IP addresses to computers on the network. A DHCP server maintains a
centralized pool of IP addresses and assigns IP addresses from the pool to computers as they connect to
the network. Each IP address is leased to a computer for a specific time period. When the lease ends, the
client computer must lease an address again.
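The lease cycle described above can be pictured as a small pool manager: addresses are handed out for a fixed period, and an expired lease returns the address to the pool. This is a simplified sketch of the idea, not the DHCP protocol itself (real DHCP involves DISCOVER/OFFER/REQUEST/ACK messages over UDP); the addresses are hypothetical:

```python
# Minimal sketch of a DHCP-style address pool: addresses are leased
# for a fixed period and must be renewed when the lease expires.
class DhcpPool:
    def __init__(self, addresses, lease_seconds=3600):
        self.free = list(addresses)       # centralized pool of addresses
        self.leases = {}                  # client id -> (address, expiry)
        self.lease_seconds = lease_seconds

    def request(self, client_id, now=0):
        """Lease an address, reusing the client's lease if still valid."""
        if client_id in self.leases:
            address, expiry = self.leases[client_id]
            if now < expiry:
                return address            # existing lease still valid
            self.free.append(address)     # lease expired; back to the pool
        address = self.free.pop(0)
        self.leases[client_id] = (address, now + self.lease_seconds)
        return address

pool = DhcpPool(["192.168.1.10", "192.168.1.11"])
```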

Web Server

This is a server that stores the contents of Web sites. A client application called a Web browser is used to
specify the address of a Web page. The request is sent using the Hypertext Transfer Protocol (HTTP).

The address for the web page is specified as a uniform resource locator (URL). A URL is of the form
http://www.kaplanit.com.

In the URL, http indicates that the protocol used is HTTP. The rest of the address, www.kaplanit.com, is
the fully qualified domain name (FQDN) of the site. Implicit in this address is the name of the page to be
displayed. If the page name is not specified, the server will try to find a file called index.html or index.htm
and return it to the browser.
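The pieces of a URL described above can be pulled apart with Python's standard `urllib.parse` module. This sketch splits the example URL into protocol, FQDN, and page, and applies the index.html fallback behavior that many web servers use by default:

```python
from urllib.parse import urlparse

# Split a URL into the pieces described above: the protocol (scheme),
# the fully qualified domain name, and the page (path). When no page
# is given, web servers typically fall back to index.html.
def describe_url(url: str) -> dict:
    parts = urlparse(url)
    return {
        "protocol": parts.scheme,
        "fqdn": parts.netloc,
        "page": parts.path or "index.html",  # server-side default page
    }
```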

Tower Servers

Ordinary computers for home or office use may use a desktop case, which lies flat on the table with the
monitor placed on top. Tower-style cases are also very popular for home and office computers. The
tower case has a smaller footprint and therefore consumes less desk space.

Servers do not use the desktop-style case. The full tower case is used mainly because of the
internal space available in it. A full tower case provides many internal drive bays in both 5.25-inch and
3.5-inch sizes, and these drive bays can hold a large number of disk drives. In fact, compared to a rack-mount
server or blade server, the tower case provides the greatest potential for adding a large number of hard
disks or other storage devices. The tower case also offers the best cooling, since the
internal geometry of the case is favorable and there is enough room to add auxiliary fans. A wide
selection of high-specification, high-quality power supply units is also available for tower-type cases.

Figure 1-1 shows a desktop and a tower case.

Figure 1-1: Desktop and Tower Style Case

Rack-Mount Servers

Although the tower style case provides a lot of internal space, multiple servers can occupy a lot of floor or
table space. Rack-mount servers provide a convenient solution by allowing servers to be stacked up one
above the other.

Rack-mount servers use a cabinet called a rack. The industry-standard rack is 19 inches wide and can
accommodate rack-mount cases of various heights. Case heights are indicated in U, for instance, 4U.
Each U is equal to 1.75 inches, so a 5U case is 1.75 × 5 = 8.75 inches high. To mount a rack-mount case,
you slide it into the rack from the front and screw its flange onto the rack.
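The rack-unit arithmetic above is simple enough to capture in a couple of helper functions; this sketch assumes the standard 1.75-inch U and a hypothetical rack size for the capacity example:

```python
# Rack-unit arithmetic: each U is 1.75 inches of vertical rack space.
U_INCHES = 1.75

def case_height_inches(units: int) -> float:
    """Height in inches of a rack-mount case of the given size in U."""
    return units * U_INCHES

def cases_that_fit(rack_units: int, case_units: int) -> int:
    """How many cases of a given U size fit in a rack of rack_units U."""
    return rack_units // case_units
```

For example, a common 42U rack holds ten 4U servers, with 2U left over.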

The components within a rack-mount case perform the same functions as normal server components,
except that they are physically smaller. Some designs may allow hot-swap drives to be plugged in from the front.

Each rack-mount server is a complete server and can be operated outside the rack if so desired. By
stacking servers one above the other, a significant amount of space is saved. The wiring of the servers
remains within the rack yet stays accessible to maintenance personnel. Lockable doors are usually provided
on racks for security purposes.

Blade Servers

A blade server takes the rack-mount server to an extreme. An entire server is manufactured in the form of
a plug-in unit the size of a slim disk drive. This is called a blade. Blades are plugged into a rack-mounted
chassis much like books are stacked on a bookshelf.

Each blade is a self-contained server with its own processor and independent operating system. Disk
drives and I/O port connectors are plugged in as modular units and have the same form-factor as a blade.
The chassis provides a backplane of connectors where everything plugs in. The chassis also provides a
common power supply unit and a common set of switches that all blades can share.

A blade server thus offers an extremely high-density setup that is modular, flexible, scalable, reliable and
easy to maintain. The blade server offers the best density of deployment, or, in other words, the highest
number of servers either per square foot of floor space or per cubic foot of server room space. A rack-
mount server offers the next best density, followed by the tower case server.

The downside of a blade server is that the extremely dense packing of devices leads to cooling problems.

KVM Switch

Although each server in a server room is provided with a dedicated monitor, mouse, and keyboard
combination, these remain idle for most of the time because a user needs to interact only rarely with a
server. However, the keyboards and monitors still occupy space and create clutter, especially where
there are many servers in the server room.

For this reason, it is common to use a Keyboard-Video-Mouse (KVM) switch with any multiple-server
installation. The KVM switch is a hardware box that routes the connections for a single keyboard, a single
mouse, and a single monitor to multiple servers. Electrical switches on the KVM switch allow the
keyboard-mouse-monitor triad to be connected to any server at the will of the user. Some KVM switches
allow the user to use the keyboard or mouse and select the server from on-screen menus.

Memory
Scope
• Know server memory requirements and characteristics of types of memory.

Focused Explanation
There are several types of dynamic random access memory (DRAM) used in computers. Some of these
are listed below:

• Extended Data Out (EDO) DRAM: certain internal parts of the RAM are not turned off between
successive accesses, speeding up the RAM. Burst EDO (BEDO) DRAM offers even faster
operation by allowing a burst mode of operation. EDO DRAM is rarely found in computers these
days.

• Synchronous DRAM (SDRAM): is synchronized with the timing of the chipset and thus offers very
high speeds of operation. Most SDRAM operates at one of two standardized bus speeds: 100 MHz
and 133 MHz. SDRAM that runs at 100 MHz is called PC-100 memory, and SDRAM that runs at
133 MHz is called PC-133 memory. SDRAM is called synchronous because the RAM operates
synchronously with the motherboard's system clock. SDRAM uses special internal clock signals
and registers to organize data requests from memory.

Earlier implementations of SDRAM include PC-66 and PC-83, which are no longer manufactured
on a large scale. Occasionally, higher-speed SDRAM modules can operate at 150 MHz and 166
MHz, but these are not considered standard speeds.

• Double Data Rate (DDR) SDRAM: offers two transfers of data per timing cycle as compared to
ordinary SDRAM, effectively doubling the speed of the RAM. This makes DDR SDRAM very fast.
DDR SDRAM is a very popular type of RAM used in computers. This type of memory uses a 184-
pin form factor and requires 2.5 volts.

• Double Data Rate 2 (DDR2) SDRAM: offers faster operation than DDR memory. DDR2 is a
double data rate RAM just like DDR, but the use of differential signaling within the chip speeds
DDR2 considerably. DDR2 speeds start at about 400 MHz and go up to 800 MHz. This RAM
features lower power requirements than DDR SDRAM. This type of memory is not compatible with
DDR slots. DDR2 uses a 240-pin form factor and requires 1.8 volts.

• RAMBUS DRAM (RDRAM): is the fastest DRAM technology available currently. The word sizes
available are usually small, but the transfer speeds of data are very fast. RDRAM, which tends to
be expensive and limited in popularity, is reserved for computers that must deliver very high
performance. RDRAM began to acquire wide recognition with the release of computers that use
Intel's Pentium III processor.

Table 1-2 indicates speeds for various types of RAM.

Memory Type      Clock Speed (MHz)   Memory Speed (MT/s)   Bus Width (bytes)   Bandwidth (MBps)

SDRAM
PC 66 (10 ns)    66                  66                    8                   533
PC 100 (8 ns)    100                 100                   8                   800
PC 133           133                 133                   8                   1066

DDR SDRAM
DDR 200          100                 200                   8                   1600
DDR 266          133                 266                   8                   2133
DDR 300          150                 300                   8                   2400
DDR 400          200                 400                   8                   3200
DDR 500          250                 500                   8                   4000
DDR 533          266                 533                   8                   4266

DDR2 SDRAM
DDR2 400         200                 400                   8                   3200
DDR2 533         266                 533                   8                   4266
DDR2 667         333                 667                   8                   5333
DDR2 800         400                 800                   8                   6400

Memory speed is the bus speed in megatransfers per second; bandwidth is the memory speed multiplied
by the 8-byte (64-bit) bus width.

Table 1-2: RAM Speeds
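The relationships in Table 1-2 can be reproduced with two small formulas: DDR memory performs two transfers per clock cycle, and peak bandwidth is the transfer rate multiplied by the 8-byte bus width. The sketch below illustrates the arithmetic only:

```python
# Peak transfer rate for the memory types in Table 1-2:
# bandwidth (MBps) = transfers per second (MT/s) x bus width in bytes.
BUS_WIDTH_BYTES = 8  # 64-bit memory bus

def bandwidth_mbps(megatransfers_per_s: float) -> float:
    """Peak bandwidth in MBps for a given transfer rate in MT/s."""
    return megatransfers_per_s * BUS_WIDTH_BYTES

def ddr_transfers(clock_mhz: float) -> float:
    """DDR performs two transfers per clock: 200 MHz -> 400 MT/s."""
    return clock_mhz * 2
```

This is why DDR 400 (a 200 MHz clock) is also sold as PC3200: 400 MT/s times 8 bytes gives 3200 MBps.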

Unbuffered, Buffered, and Registered Memory

Buffering is a feature of memory modules. Unbuffered modules are connected directly to the memory bus
and place an electrical load on it. This loading tends to slow down fast-changing data signals and may lead to
slow operation or errors.

Buffered modules have buffers so that the modules do not load the memory bus. This leads to more
reliable operation.

Registered modules have extra registers that boost the signal as it passes through the module. Because
the data must be stored in the register, a small but consistent delay is introduced in the signal. This is
tolerable and leads to greater reliability of the memory.

Buffered, unbuffered, and registered memory types cannot be mixed on a single computer.

Memory Interleaving

Some motherboards use a technique called interleaving to improve throughput. Interleaved memory
partitions the memory sockets into two banks, odd and even, and memory modules are installed in pairs
across the banks.

During operation, accesses to the two banks can overlap. While one memory bank is finishing
an operation, the second memory bank can start another. This doubles the throughput rate, at the
cost of requiring memory to be installed in pairs.
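The bank assignment behind two-way interleaving is simply the address modulo 2, so consecutive accesses alternate between banks. This sketch shows only that alternation, not the timing overlap itself:

```python
# Two-way interleaving sketch: even word addresses go to bank 0 and
# odd addresses to bank 1, so back-to-back sequential accesses land
# in different banks and their operations can overlap.
def bank_for(word_address: int) -> int:
    """Bank (0 = even, 1 = odd) that services a given word address."""
    return word_address % 2

# Consecutive addresses ping-pong between the two banks:
access_pattern = [bank_for(a) for a in range(6)]
```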

ECC vs. Non ECC vs. Extended ECC

Parity memory is memory that has extra storage locations to store redundant data. The redundant bit,
called parity, allows the detection of single-bit errors per word of memory. Parity permits detection only;
it cannot correct the error.

However, if a larger number of bits per word is used for redundant data, then detection and correction of
certain types of errors become possible. Error correction code (ECC) memory can correct single-bit errors
and detect two-bit errors. ECC is the basis for error detection and correction. Extended ECC allows
correction of two-bit errors as well.

Non-parity memory stores no redundant data and cannot detect errors.
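The limitation of plain parity can be seen in a few lines of code. An even-parity bit makes the total count of 1 bits even; a single flipped bit changes that count's parity and is detected, but the check cannot identify which bit flipped, so no correction is possible. This is an illustrative sketch, not how memory hardware is implemented:

```python
# Even-parity sketch: the parity bit makes the total number of 1 bits
# in (data + parity) even. A single bit flip is detected, but the
# check cannot locate the flipped bit, so correction needs ECC.
def parity_bit(byte: int) -> int:
    """Even-parity bit for a data byte (0 if the 1-count is even)."""
    return bin(byte).count("1") % 2

def check(byte: int, stored_parity: int) -> bool:
    """True if the byte is consistent with its stored parity bit."""
    return parity_bit(byte) == stored_parity
```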

Memory Caching

The processor operates by fetching an instruction from memory, executing the instruction, and fetching
the next instruction from memory. Each access to main memory takes a measurable amount of time.
Anticipating the instructions that the processor will require and storing them in cache memory can reduce
this time.

Cache memory is high-speed memory, and the processor saves time by reading instructions from cache,
rather than from the main memory. A cache hit occurs every time the processor finds a required
instruction in the cache, which prevents the delay associated with reading the main memory. If the
requirement of the processor is wrongly anticipated or the cache simply contains instructions that the
processor does not require, then the cache contents are rejected and must be refilled. This is called a
cache miss, and this slows down the processing of a request. Effective cache management requires that
the cache misses be minimized. In practice, both instructions and data are cached. Most processors
employ more than one cache:

• Level 1 (L1) cache – is integrated into the silicon on which the processor is fabricated. This results
in extremely fast cache access because the cache runs at the processor speed. L1 cache sizes vary
from 16 kilobytes (KB) to 256 KB. Many processors divide the L1 cache into two parts: one for
storing instructions and one for storing data.

• Level 2 (L2) cache – is encapsulated with the processor (in current designs) or resides on the
motherboard (older designs). Typical sizes for L2 cache are 128 KB, 256 KB, and 512 KB. L2
cache runs at memory bus speed and is slower than L1 cache.

• Level 3 (L3) cache – is provided on the motherboard. The processor's ability to use L3 cache
speeds up processing further.
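The hit/miss accounting described above can be illustrated with a toy cache in front of "main memory". Real L1/L2 caches operate on cache lines with set-associative placement; this sketch models only the bookkeeping of hits, misses, and eviction:

```python
# Toy cache illustrating hits and misses: a small, fast store in
# front of main memory. On a miss, the value is fetched from memory
# and the oldest cached entry is evicted if the cache is full.
class TinyCache:
    def __init__(self, capacity=2):
        self.capacity = capacity
        self.store = {}             # address -> value
        self.hits = self.misses = 0

    def read(self, address, memory):
        if address in self.store:
            self.hits += 1          # cache hit: fast path
            return self.store[address]
        self.misses += 1            # cache miss: go to main memory
        if len(self.store) >= self.capacity:
            self.store.pop(next(iter(self.store)))  # evict oldest entry
        self.store[address] = memory[address]
        return self.store[address]
```

Effective cache management aims to keep the ratio of hits to total accesses as high as possible.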

Small Computer System Interface and AT Attachment


Scope
• Know the differences between different Small Computer System Interface (SCSI) specifications.

• Know the differences between different ATA Integrated Drive Electronics (IDE) specifications.

Focused Explanation

SCSI

SCSI is an interface specification designed to offer very high data transfer rates. SCSI has been the
interface technology of choice for high performance hard disks, tape drives, CD-ROM drives, and
scanners. Advances in IDE technology have reduced some of SCSI’s popularity, but SCSI dominates in
situations where the best performance is required, such as in servers. The SCSI standard was created to
support the development of high-speed mass storage devices, such as hard disk drives and CD-ROM
drives, which could run on different computer platforms. The SCSI standard allows practically any type of
peripheral device that uses parallel data transfer, such as a scanner or printer, to be engineered so that it
will run on a SCSI bus.

SCSI supports both internal devices and external devices. A SCSI adapter is usually an expansion card,
which provides an internal connector for internal devices, an external connector for plugging in external
devices, or both. A single SCSI adapter can support up to seven devices (with newer versions supporting
up to 15 devices). Internal SCSI devices are connected together by a ribbon cable that has multiple
connectors along its length. Each device plugs into the cable with the end of the cable plugging into the
SCSI interface socket. The string of devices connected to the SCSI interface is called the SCSI chain. For
external devices, each device plugs into the next in line, with the first device plugging into the adapter.

SCSI IDs and LUNs

To distinguish between devices, each device is assigned a distinct device ID – a number between 0 and 7
or 0 and 15 depending on which SCSI specification is used. If the IDs are not distinct, the SCSI chain will
not work. The adapter is considered a device and must be assigned an ID. Therefore, each SCSI adapter
supports 8 IDs for 8-bit SCSI buses, or 16 IDs for 16-bit, also called wide buses. The number of IDs
includes that for the adapter itself.

SCSI devices need to be assigned a SCSI ID. For external devices, this is done using a thumbwheel
switch or a rotary switch. For internal devices, the ID may be set through a jumper or through software.
The SCSI ID is used to uniquely identify a device on the bus, and the ID number indicates the relative
priority accorded to the device by the SCSI controller. ID 7 receives the highest priority on both narrow
and wide buses. On a narrow bus, priority descends from 7 to 0. On a wide bus, priority descends from
7 to 0 and then from 15 to 8, making ID 8 the lowest priority. If two devices make simultaneous requests
for data, the device with the higher priority will be serviced first.

Note: Normally the highest priority ID (7) is assigned to the adapter.
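The arbitration order on a wide bus is easy to get wrong, so it helps to write it out. The sketch below encodes the commonly documented priority sequence for a 16-bit bus (7 down to 0, then 15 down to 8) and picks the winner among simultaneous requesters; it is an illustration, not controller firmware:

```python
# SCSI arbitration order on a 16-bit (wide) bus as commonly
# documented: ID 7 wins first, descending to 0, then 15 descending
# to 8, so ID 8 has the lowest priority of all.
WIDE_PRIORITY = list(range(7, -1, -1)) + list(range(15, 7, -1))

def arbitration_winner(requesting_ids):
    """Return the ID that wins arbitration among simultaneous requests."""
    return min(requesting_ids, key=WIDE_PRIORITY.index)
```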

A logical unit number (LUN) is a secondary number, apart from the SCSI ID, assigned to a physical
device. The LUN allows subcomponents of the device to be addressed. For instance, a CD-ROM tower

can have a single SCSI ID, with each individual drive designated with a LUN. This would enable an
individual drive in the tower to be addressed.

It is also possible to assign different LUNs to a single physical device. The device can then be addressed
using different SCSI ID and LUN combinations, which is useful for configuring multiple paths to devices
in RAID and Fibre Channel setups.
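The ID/LUN addressing scheme amounts to identifying a subcomponent by a pair of numbers. The sketch below models the CD-ROM tower example with a hypothetical device table keyed on (SCSI ID, LUN) pairs:

```python
# Addressing sketch: a SCSI target is identified by its ID, and its
# subcomponents by (ID, LUN) pairs. Here a hypothetical CD-ROM tower
# holds SCSI ID 4, with each drive designated by a LUN.
tower = {(4, lun): f"cd-drive-{lun}" for lun in range(3)}

def device_at(scsi_id: int, lun: int):
    """Return the subcomponent at a given SCSI ID / LUN, if any."""
    return tower.get((scsi_id, lun))
```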

Multitasking, Multithreading, Connection, and Disconnection

During an operation, a device can make a request to the SCSI bus, and then disconnect from the bus.
This leaves the bus to perform other activities, such as servicing requests from other devices. When the
data requested by the device is available, it can connect again, and receive the data. This connection and
disconnection mechanism allows a SCSI bus to multitask: servicing several applications requesting I/O
more or less simultaneously.

SCSI also supports multithreading in addition to multitasking. As indicated before, multitasking is the
ability to support several applications that make simultaneous I/O requests. Multithreading, on the
other hand, is the ability to support a single application that makes multiple I/O requests simultaneously.
Operating systems such as Windows 2000 and Unix (including Mac OS X) provide multitasking and
multithreading. A SCSI system working under such an operating system can deliver enhanced throughput,
especially when servicing random requests that occur simultaneously.

SCSI Specifications

SCSI has evolved over time and speeds increase with every new SCSI specification. Table 1-3 lists the
different SCSI specifications.

SCSI Version              Bus Width   Speed    Bus Length (m)        Interface Connector    Max. Devices
                                      (MBps)                                                Supported

SCSI-1 (narrow SCSI)      8 bits      5        6 SE, 25 HVD          50-pin Centronics      8
                                                                     or DB-25

Fast SCSI-2               8 bits      10       3 SE, 25 HVD          50-pin high density    8
(narrow SCSI-2)

Fast Wide SCSI-2          16 bits     20       3 SE, 25 HVD          68-pin high density    16

Ultra SCSI                8 bits      20       1.5 SE (8 devices),   50-pin high density    8 (1.5 m SE bus),
(SCSI-3 variant)                               3 SE (4 devices),                            4 (3 m SE bus),
                                               25 HVD                                       8 (HVD bus)

Wide Ultra SCSI           16 bits     40       1.5 SE (8 devices),   68-pin high density    8 (1.5 m SE bus),
(SCSI-3 variant)                               3 SE (4 devices),                            4 (3 m SE bus),
                                               25 HVD                                       16 (HVD bus)

Ultra2 SCSI               8 bits      40       12 LVD, 25 HVD        50-pin high density    8
(SCSI-3 variant)

Wide Ultra2 SCSI          16 bits     80       12 LVD, 25 HVD        68-pin high density    16
(SCSI-3 variant)

Ultra3 SCSI / Ultra160    16 bits     160      12 LVD                68-pin high density    16
(SCSI-3 variant)

Ultra320 SCSI             16 bits     320      12 LVD                68-pin high density    16
(SCSI-3 variant)

Table 1-3: SCSI Specifications

A 68-pin very high-density connector (VHDC) was introduced with Wide Ultra2 SCSI. This connector is
often used in RAID arrays.

Narrow and Wide Buses

SCSI buses are designated as wide or narrow. Wide buses are those that are capable of 16-bit data
transfers in a single operation. In general, Wide SCSI buses support up to 16 devices with the adapter
counting as a device. Narrow buses transfer 8-bits of data at a time and support up to eight devices.

Note: If the narrow or wide qualifier is missing, then assume wide bus for Ultra3 SCSI (Ultra160 SCSI and
Ultra320 SCSI) and assume narrow bus for earlier SCSI standards. HVD signaling is deemed obsolete in
Ultra3 SCSI.

Single-Ended, Differential, and Multimode Signaling

Normal SCSI signals are single-ended: each signal is carried over a single wire and sensed at the
receiving end. If noise induces voltages on the signal wire, data errors can occur. Differential signaling
uses two wires to send the signal, and the difference in voltage between the two wires represents the
signal. If noise voltages are induced, both wires are affected equally and the voltage difference between
them is unaltered. At the receiving end, the voltage difference between the wires is sensed, and the equal
noise voltage induced in both wires is rejected. Differential signaling therefore enables the use of longer
cables than single-ended signaling.

SE buses can stretch typically up to 3 m, whereas differential buses can reach 12 m or even 25 m.

High-voltage differential (HVD) is supported by all SCSI variations except the latest ones, Ultra160 SCSI
and Ultra320 SCSI. HVD uses 5-volt logic levels. It is used when devices are far apart because the total
usable cable length is 25 m.

Note: Logic levels are signal voltages used by digital electronic components. Digital electronic
components are the building blocks that make up most computer parts, including SCSI adapters.

Low-voltage differential (LVD) SCSI operates at 3-volt logic levels. LVD supports a maximum cable length
of 12 m. Compared to HVD, LVD requires less power, and is backward compatible with SE devices. In
addition, LVD devices are less expensive compared to HVD.

The HVD and SE methods have been used for a number of years, since the initial days of SCSI. The LVD
method was introduced much later along with Ultra2 SCSI. LVD is now the approved signaling method for
SCSI.

Multimode operation applies to LVD SCSI adapters. The SCSI chain will work with LVD signaling if all
devices are LVD. If both LVD and SE devices are present, then every device on the chain defaults to SE
signaling. Similarly, you can use an LVD device in an SE chain, but the device will switch to SE signaling.
LVD devices and SE devices use the same SCSI connectors. In general, most LVD devices and adapters
are multimode and are designated LVD/MSE (for LVD/Multimode Single-Ended), although the SCSI
specification does not require that all LVD devices be LVD/MSE.

HVD signaling is not compatible with LVD. Connecting an LVD device on an HVD chain shuts down the
device; however, connecting an HVD device to an LVD chain shuts down the chain. In addition, HVD and
LVD are not compatible with SE. Installing an SE device with LVD or HVD devices causes the entire bus to
switch to SE.

Termination

The first and last devices in the SCSI chain must be terminated. A terminator is a circuit that electrically
loads the SCSI signal cable and prevents unwanted signal reflections from the ends of the SCSI chain.
An un-terminated SCSI chain is unreliable.

For external devices, a terminator must be plugged into the interface connector on the external device.
Internal devices usually have a jumper setting for enabling the termination.

Plug-in terminators are available as active or passive devices. Active devices internally contain active
electronic components. They are better than passive devices, but they are more expensive.

Passive terminators contain a resistor chain and are inexpensive. The resistor chain dissipates the signal
at the ends of the bus and prevents reflections. In general, passive termination is only good for bus
lengths of two to three feet (approximately one meter) or less and should only be used with standard 8-bit
SCSI devices.

Active terminators are required for Fast or Wide SCSI, according to the SCSI-2 specification. An active
terminator has a voltage regulator that ensures the correct termination voltage. Many have an LED
showing the termination level. Various types of active terminators exist. LVD devices require special LVD
active terminators. Forced Perfect Termination is an active termination technique and provides very
reliable termination.

HVD devices require special HVD terminators. These are resistor-chain passive terminators.

Termination Considerations

If only internal devices are used, the SCSI adapter and the last internal device must be terminated. If only
external devices are used, then the last external device and the SCSI adapter must be terminated. If both
internal and external devices are used, then the last external device and the last internal device must be
terminated. The SCSI adapter must be left un-terminated in this case. Most modern SCSI adapters have
a software “auto terminate” setting. This allows the adapter to sense whether or not it is the last in the
SCSI chain and set its own termination appropriately.

Cables and Connectors

There are different SCSI cable variations, and each SCSI cable is designed for different types of SCSI
devices. These cable types are as follows:

• A-Cable: This is a 50-pin Centronics cable used for external SCSI connections.

• B-Cable: This is a 50-pin D-connector cable used for 16-bit parallel transfer in SCSI-2 devices. It
was also specified for 32-bit parallel transfer, but that mode was never implemented.

• P-Cable: This is a 68-pin D-connector cable that provides 16-bit transfers.

• SCA (Single Connector Attachment): This is a connector type rather than a cable type. An SCA
connector combines the 68-pin P-cable connections, the four-wire power connections, and a few
jumper-setting connections into a consolidated 80-pin connector. The SCA connector is used
mainly for hot-swap disk drives; consolidating several connectors into one makes hot swapping
physically feasible.

IDE

IDE is a specification for disk drives and interfaces. The drive controller, the electronic circuitry
responsible for translating the storage requests issued by the CPU into the voltages that move the read-
write heads of the drive, is integrated within the drive. The drive communicates with the motherboard
through a simple, low-cost interface, which may be integrated on the motherboard.

The early IDE specification evolved into Enhanced IDE (EIDE) and the AT Attachment 2 (ATA 2)
specification. Current ATA specifications include ATA66, ATA100, and Serial ATA. Although the original
IDE specification is long obsolete, the term IDE is still in use and generally refers to the current ATA
specifications.

Jumper Settings

ATA drives use a ribbon cable to connect to the interface. The ATA standard allows either one or two
drives on a single interface, and modern motherboards include two ATA interfaces, which are also called
ATA channels. An ATA interface is compatible with hard disk drives, CD-ROM drives, tape drives, and
other devices. In a configuration with two drives on a single channel, the ATA standard requires that one
drive be configured as a master and the other as a slave. This is necessary because computers require a
means to distinguish between drives that are connected to the system board by a common cable. A
jumper setting on a drive identifies it as either master or slave to the host computer.

As an alternate identification scheme, ATA uses a jumper setting named Cable Select. The Cable Select
setting identifies the drive based on the connector location on the cable; one connector is the Master
connector, and the other connector is the Slave connector. The Cable Select setting will work only when a
special cable is used and when the controller is capable of supporting this setting. The basic input/output
system (BIOS) must also support Cable Select, and that setting must be enabled within the
complementary metal-oxide semiconductor (CMOS) setup.

Note: The master drive is considered to be the default boot drive and usually contains the operating
system. Some brands of hard disks require one jumper setting for a single hard disk and a different
setting for a master disk in a two-disk setup. Master/slave configuration of ATA devices is not set on the
motherboard.

Cable Orientation

ATA drives are connected using a 40-conductor or an 80-conductor flat cable. The connector for an ATA
cable may or may not be keyed. Usually, the cable has a red stripe running down one conductor of the
ribbon, marking conductor 1. The ATA channel connector on the motherboard has one pin labeled pin 1.
The ribbon cable must be plugged in so that the red conductor goes to the pin 1 side of the motherboard
connector. On the disk drive end, the red stripe must be adjacent to the power socket on the disk drive.

The ATA Packet Interface (ATAPI) specification allows non-hard disk devices to be connected to an ATA
interface. This allows an ATAPI CD ROM drive to be connected to an ATA channel. ATAPI tape drives
may also be plugged into the ATA interface.

ATA is sometimes referred to as Integrated Drive Electronics (IDE) or Enhanced IDE (EIDE).

Table 1-4 summarizes the characteristics of the IDE/EIDE/ATA specifications.

Specification            Data Throughput   ATA Channels       Drives Supported   Cable Type
                         (MBps)            per Interface      per Channel

IDE (corresponds         up to 3.3         1                  2                  40-conductor ribbon cable
to ATA)

EIDE (corresponds        up to 16          2                  2                  40-conductor ribbon cable
to ATA 2)

ATA 3                    up to 16          2                  2                  40-conductor ribbon cable

ATA 33 (Ultra DMA 33)    up to 33          2                  2                  40-conductor ribbon cable
                                                                                 (80-conductor optional)

Ultra ATA 66             up to 66          2                  2                  80-conductor ribbon cable

Ultra ATA 100            up to 100         2                  2                  80-conductor ribbon cable

Ultra ATA 133            up to 133         2                  2                  80-conductor ribbon cable

Serial ATA (SATA)        up to 150         1 per port         1                  7-pin unshielded round cable

Serial ATA II            up to 300         1 to 15 per port   1                  7-pin unshielded round cable
(SATA II)                                  (with a port
                                           multiplier)

Table 1-4: IDE/EIDE/ATA Specifications

The ATA 133 interface is promoted by Maxtor. As ATA 133 is patented by Maxtor, the interface is
somewhat proprietary.

Serial ATA, or SATA, is a serial bus that provides a high-speed interface for SATA hard disks. SATA
specifies a signal transfer rate of 1.5 Gbps (gigabits per second), which translates to a burst mode data
transfer rate of 150 MBps (megabytes per second). The signal rate for SATA II is 3.0 Gbps,
corresponding to a burst data rate of 300 MBps.

Note: The burst mode data transfer rate is the best-case data rate and cannot be continuously sustained.
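
The relation between the signal rate and the burst data rate follows from SATA's 8b/10b line coding, in
which every data byte travels on the wire as a 10-bit symbol. A back-of-the-envelope sketch (the function
name is illustrative):

```python
def sata_burst_rate_mbps(signal_rate_gbps):
    """Convert a SATA signal rate to its burst data rate.

    SATA uses 8b/10b line coding, so each 8-bit data byte is sent
    as a 10-bit symbol: dividing the raw bit rate by 10 gives
    data bytes per second.
    """
    bits_per_second = signal_rate_gbps * 1_000_000_000
    bytes_per_second = bits_per_second / 10   # 10 line bits per data byte
    return bytes_per_second / 1_000_000       # megabytes per second

print(sata_burst_rate_mbps(1.5))  # SATA:    150.0 MBps
print(sata_burst_rate_mbps(3.0))  # SATA II: 300.0 MBps
```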

Each SATA interface supports a single device connected in a point-to-point manner using a single cable.
The ribbon cable used in conventional or parallel ATA (PATA) obstructs airflow in a computer. The thin
SATA cable is easier to route and does not impede airflow in a server case.

SATA II offers the following enhancements over SATA:

• Data transfer rate enhanced to 300 MBps.

• Native Command Queuing (NCQ), which allows a SATA drive to accept multiple data transfer
requests, queue them, and reorder them in a manner that increases the read and write efficiency
of the disk.

• Enclosure management where the enclosure is the case that contains drives used for enterprise
storage, for example, RAID cabinets. Enclosure management is a monitoring and control system
for several parameters significant to enterprise storage enclosures, including the parameters
below.

o Enclosure temperature

o Power supply status

o Card status

• A port multiplier is a simple and inexpensive means of increasing the number of drives that can be
used with a single interface. A port multiplier, in theory, allows up to 15 drives to be connected to
a single port. In practice, four to eight drives per port is realistic.

• Upgrade path to Serial Attached SCSI (SAS), which is a standard that will allow both SATA and
SAS drives to be used with SAS buses. Since SATA drives are cheaper than SCSI disks of the
same capacity, this will be beneficial for entry-level servers.

Serial Attached SCSI (SAS) is a serial SCSI standard that removes the parallel SCSI limitation of
16 devices per bus, potentially allowing thousands of drives to be connected. Devices are attached
in a point-to-point fashion on the serial bus. A single SAS interface runs at 1.5 Gbps or 3 Gbps and
can optionally be fanned out to multiple drives. SAS adopts reliable and well-understood
parts of the physical layer technology used in Gigabit Ethernet.

The enhancements listed above are published as Revision 1.2 of the Serial ATA II specifications.

Note: A SATA interface can actually provide multiple interface connectors at a backplane. The backplane
is located in proximity to the installed SATA drives. A single cable connects the backplane to the SATA
adapter. Without the backplane, multiple cables would have to be routed to the drives, leading to
congestion in the server case.

It is planned that SATA III, when introduced, will offer speeds up to 600 MBps.

Logical Block Addressing

Logical Block Addressing (LBA) allows a computer to support hard disks with a capacity greater than 504
MB, the largest disk size that the BIOS can directly recognize.

LBA is a form of geometry translation. The physical geometry of drives includes the number of cylinders,
number of heads, and the number of sectors on a track on the drive. For historical reasons, the BIOS of a
computer is unable to work with every combination of cylinders, heads, and sectors (CHS). Modern drive
manufacturing techniques require that drives be manufactured with combinations of CHS that BIOS
cannot work with. LBA performs a translation, which takes an unfavorable CHS combination and
generates an equivalent CHS combination acceptable to the BIOS. In operation, the LBA translation
mechanism quietly mediates between the BIOS and the drive, translating geometry information back and
forth.
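
The translation described above rests on the standard mapping between a CHS triplet and a linear block
number. A simplified sketch of that mapping (the function name is illustrative, and real BIOS translation
schemes layer further remapping on top of this):

```python
def chs_to_lba(cylinder, head, sector, heads_per_cylinder, sectors_per_track):
    """Map a cylinder/head/sector address to a logical block address.

    CHS sector numbering starts at 1, while LBA numbering starts
    at 0, hence the (sector - 1) term.
    """
    return ((cylinder * heads_per_cylinder + head) * sectors_per_track
            + (sector - 1))

# CHS (0, 0, 1) is the first sector on the disk:
print(chs_to_lba(0, 0, 1, heads_per_cylinder=16, sectors_per_track=63))  # 0
# One full cylinder (16 heads x 63 sectors) later:
print(chs_to_lba(1, 0, 1, heads_per_cylinder=16, sectors_per_track=63))  # 1008
```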

LBA support is not related to the ability of a computer to use PnP devices.

Note: The 504 MB limit is sometimes referred to erroneously as a 528 MB limit. The BIOS limit is
528,482,304 bytes, and one MB is equal to 1,024 × 1,024 = 1,048,576 bytes. Therefore, the limit is
528,482,304 bytes / 1,048,576 bytes per MB = 504 MB.
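
The arithmetic in the note can be checked directly; the 528,482,304-byte figure is the product of the
combined BIOS and ATA geometry limits of 1,024 cylinders, 16 heads, 63 sectors per track, and 512 bytes
per sector:

```python
# Combined BIOS/ATA geometry limits: 1,024 cylinders, 16 heads,
# 63 sectors per track, 512 bytes per sector.
limit_bytes = 1024 * 16 * 63 * 512
print(limit_bytes)                    # 528482304 bytes
print(limit_bytes // (1024 * 1024))   # 504 MB
```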

Features of Fiber Channel Hardware, Internet SCSI, and Fiber Channel over Internet Protocol
Scope
• Know the features of fiber channel hardware.

• Know the features of Internet SCSI (iSCSI) and Fiber Channel over Internet Protocol (FCIP).

Focused Explanation

Fiber Channel

Fiber channel (FC) is a serial bus that uses fiber optic or copper cables to connect servers to storage
devices. FC connects servers to storage devices using an alternate network that is separate from the
Ethernet infrastructure in an organization.

The topology of a FC setup would be one of the following:

• Point-to-point: A server and a device are connected in a direct fashion.

• Arbitrated-loop: All devices are connected in a loop. Hubs are used to connect devices to the loop
and failure of a device brings down the loop.

• Switched fabric: All devices are connected to fabric switches. The switches provide dedicated
connections between the devices. Devices can be pulled off the network without bringing the
network down.

Regardless of topology, normal network client computers on the Ethernet network do not have direct
access to the storage hardware on the FC. Clients make requests to the servers, which read data off FC
and present it to the clients. Servers connect to the Ethernet using NICs and to the FC using host bus
adapters (HBAs).

FC is optimized for block-level, rather than file-level, access, which makes it particularly suitable for
storing databases. Table 1-5 lists the speeds of FC.

Speed    Throughput (MBps)*

1 G      200

2 G      400

10 G     2400

* Throughput figures are full-duplex (both directions combined).

Table 1-5: FC Speeds

The advantages of FC include flexible topologies on an unterminated serial bus, high data transfer rates,
and easy scalability. Bus lengths can reach tens of kilometers.

FC hardware includes gigabit interface converters (GBICs). These are transceivers that convert electrical
signals from HBAs and storage device controllers to serial optical signals suitable for fiber-optic cables or
serial electrical signals suitable for copper cables. The reverse conversion is also performed.

GBICs offer two modes of operation, shortwave and longwave, which refer to the wavelength of the light
used in optical transmission. Small form factor pluggable (SFP) GBICs have the same function as regular
GBICs but are smaller in size. They are often used in switches on mixed-media FC and Gigabit Ethernet
networks. Hot-plug copper GBIC connectors connect to FC components through a DB-9 connector or a
high-speed serial data connector (HSSDC).

iSCSI and FCIP

The iSCSI protocol is used for networked storage. It allows SCSI commands to travel over a TCP/IP
network, typically an Ethernet network, to store or retrieve data from a remote location. The iSCSI
protocol allows block-level access to data, much as if the storage device were locally attached to the
computer. Network attached storage, by contrast, allows only file-level access to data. Low-level access,
such as block-level access, yields superior performance in database operations when compared with
file-level access.

An application or user on the network requiring data is called an initiator. Software residing on the
initiator, along with the operating system, generates SCSI commands. The commands are encapsulated
in a packet and sent to the target, which is a network server that has attached storage. The requested
data is provided by the target and sent back to the initiator. As far as the initiator is concerned, it is as if it
made a request to a local SCSI device.

The fact that iSCSI can work over an Ethernet network makes it very attractive. No investments have to
be made in installing a specialized network. The throughput in an iSCSI setup is high and reliability is
high.

Storage arrays used with iSCSI can be RAID arrays attached to servers. An initiator makes a request
through an iSCSI adapter, which is a specialized NIC. The NIC contains a specialized TCP/IP stack
because storage traffic is not handled efficiently by the standard TCP/IP stack. Some adapters use a
TCP/IP offload engine (TOE) to lighten the load on the host CPU. Performance, with or without TOE, is
high. Most iSCSI adapters appear as SCSI adapters to the initiator operating system, while a very few
look like NICs. The bus length limits of Ethernet apply to iSCSI.

Using Fiber Channel Over IP (FCIP), a client computer sends a fiber channel request over an IP network
to a fiber channel SAN. The request is encapsulated in a frame and tunneled through the IP network to a
remote server that is attached to a fiber channel SAN. The response data is tunneled back to the client.

Storage arrays and adapters used in iSCSI are SCSI disks, SCSI RAID arrays, and SCSI adapters. FCIP
tends to use SCA type SCSI disks. The FC SAN array communicates with the rest of the FC network
through optical cables terminating into FC HBAs.

FC HBAs are normally PCI-X or PCI-Express adapters and offer speeds of 1 or 2 Gbps. In theory, the 1
G, 2 G, and 10 G speeds defined for FC apply to FCIP. Bus lengths of FC apply to FCIP.

RAID
Scope
• Know the features of different Redundant Array of Independent Disks (RAID) levels.

Focused Explanation

RAID

One of the most failure-prone devices in a server is the hard disk. If a hard disk in a server fails, the
event can be catastrophic. Without special fault-tolerant mechanisms in place, complete recovery is
rarely possible, and even partial recovery from a disk failure is always time-consuming.

Fault tolerance in disk storage is achieved through a scheme known as RAID. Several RAID levels are
defined, each providing a different level of fault tolerance. All RAID levels use multiple, preferably
identical, hard disks. The disks are collectively referred to as an array. Most RAID levels store extra data,
called redundant data, on the disks. In most cases, the redundant data is generated by the RAID system
by applying mathematical operations to the original data that must be stored. The extra storage space
required for the redundant data represents overhead. In the event of the failure of a single disk in the
array, the lost data is regenerated from the redundant data stored in the array. The RAID array will
normally continue to work, at a lower level of efficiency, after a single disk failure. The failed drive must
be replaced quickly to restore normal efficiency. Recovery from the failure of more than one disk in the
array is not defined. Statistically, the chances of two disks failing at the same time are very small.
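
The mathematical operation most often used to generate the redundant data is the bitwise exclusive OR
(XOR), which is how RAID 5 computes parity. A minimal sketch of generating a parity block and
regenerating a lost stripe (the names are illustrative):

```python
def xor_parity(blocks):
    """XOR equal-sized blocks together byte for byte to form a parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

data = [b"disk0", b"disk1", b"disk2"]   # stripes held on three data disks
parity = xor_parity(data)

# If disk 1 is lost, XORing the surviving stripes with the parity
# block regenerates the lost stripe.
recovered = xor_parity([data[0], data[2], parity])
print(recovered)  # b'disk1'
```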

RAID 0

This RAID level is called striping and does not provide fault tolerance; it provides fast read and write
performance for the array. Any data to be stored is striped across two or more physical hard disks.
Because the disks work in parallel, the read and write performance is improved. RAID 0 requires two or
more disks for the array. If any drive fails and no backup exists, the data is lost.

RAID 1

This RAID level is called mirroring and offers fault tolerance but uses a very large overhead of 50%. The
write performance tends to be poor. Any stored data is saved twice, once on each of two disks. RAID 1
requires two disks for the array.

RAID 5

This RAID level is called striping with parity and offers reasonably low overhead and good performance.
Stored data is striped along with parity data across disks. RAID 5 requires three or more disks for the
array.

RAID 10

This is a combination of RAID 0 (striping) and RAID 1 (mirroring). Basically, a RAID-1 mirror is striped in
the RAID-0 fashion. Although RAID 10 (sometimes written as RAID 1+0) offers very good fault tolerance
and very good performance, it has high overhead. RAID 10 requires four disks to implement the array.

RAID 0+1

This is a RAID-0 stripe array, which is then mirrored in RAID-1 fashion. RAID 0+1 offers very high
performance but medium fault tolerance. It has high overhead and is expensive.

RAID 0+5

This is a RAID-0 array that is further written out as a RAID-5 array. The RAID-0 array is considered one
“drive” in the RAID-5 array. For example, if a RAID-0 array consists of two drives A1 and A2, then a
minimal RAID 0+5 will consist of A1 and A2 as the first “drive” for RAID 5, additional drives A3 and A4 as
the second “drive” for RAID 5, and additional drives A5 and A6 as the third “drive” for RAID 5.

RAID 5+0

This is a RAID-5 array that is further written out as a RAID-0 array. The RAID-5 array is considered one
“drive” in the RAID-0 array. For example, if a RAID-5 array consists of three drives A1, A2, and A3, then
a minimal RAID 5+0 will consist of A1, A2, and A3 as one “drive” for RAID 0 and additional drives A4,
A5, and A6 as the second “drive” for RAID 0.

RAID 5+1

This is a RAID-5 array that is mirrored in RAID-1 fashion onto another RAID-5 array. RAID 5+1 offers very
high fault tolerance at a very high cost. A minimal RAID 5+1 array setup consists of 6 drives. RAID 5+1
can keep working even if all three drives from one RAID 5 array and one drive from the second RAID-5
array have failed.

Table 1-6 shows the comparison of different RAID levels.

RAID    Minimum      Storage Capacity                  Storage           Fault Tolerance
Level   # of Disks   (N = number of disks used)        Efficiency
        Required

0       2            (Smallest Disk Size) * N          100%              None
1       2            (Smallest Disk Size) * N/2        50%               Very Good
5       3            (Smallest Disk Size) * (N-1)      100*(N-1)/N %     Good
10      4            (Smallest Disk Size) * N/2        50%               Very Good
0+1     4            (Smallest Disk Size) * N/2        50%               Good
0+5     6            (Smallest Disk Size) * (N-2)      100*(N-2)/N %     Better than RAID 5,
                                                                         worse than RAID 5+0
5+0     6            (Smallest Disk Size) * (N-1)      100*(N-1)/N %     Very Good
5+1     6            (Smallest Disk Size) * (N-2)/2    100*(N-2)/2N %    Best of all RAID levels

(For RAID 5, storage efficiency rises as the number of disks in the array increases.)

Table 1-6: Comparison of RAID Levels

In Table 1-6, the formulas for storage capacity and storage efficiency are slightly simplified for the
multiple RAID levels 10, 0+1, 0+5, and 5+0.
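
The capacity and efficiency figures for the basic RAID levels in Table 1-6 can be computed directly. A
small sketch restricted to the single-level arrays (the function names are illustrative):

```python
def raid_usable_capacity(level, n_disks, smallest_disk_gb):
    """Usable capacity in GB for the basic RAID levels of Table 1-6."""
    s, n = smallest_disk_gb, n_disks
    capacity = {
        "0": s * n,         # striping: all space holds data
        "1": s * n / 2,     # mirroring: half the space holds copies
        "5": s * (n - 1),   # striping with parity: one disk's worth of parity
        "10": s * n / 2,    # striped mirrors: half the space holds copies
    }
    return capacity[level]

def raid_storage_efficiency_pct(level, n_disks, smallest_disk_gb):
    """Storage efficiency: usable capacity as a percentage of raw capacity."""
    raw = n_disks * smallest_disk_gb
    return 100 * raid_usable_capacity(level, n_disks, smallest_disk_gb) / raw

# Four 100 GB disks in RAID 5 versus RAID 10:
print(raid_usable_capacity("5", 4, 100))          # 300 GB usable
print(raid_storage_efficiency_pct("5", 4, 100))   # 75.0 percent
print(raid_usable_capacity("10", 4, 100))         # 200.0 GB usable
print(raid_storage_efficiency_pct("10", 4, 100))  # 50.0 percent
```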

Hardware and Software RAID

Hardware RAID consists of an array of disks and a hardware RAID controller. The controller contains a
dedicated I/O processor that defines RAID levels on physical hard disks, generates redundant data, and
rebuilds data in case of failure. The I/O processor carries out RAID-related processing without burdening
the server CPU. In addition, the hardware RAID controller has an I/O controller and interface components
to which the hard disks physically connect.

In many installations, a server contains an option for providing software RAID. Such a server has
additional interfaces for hard disks provided on the motherboard. In most cases, the interfaces provided
are ATA interfaces, which keep the costs low. No RAID-specific intelligence is provided with these
interfaces: all tasks relating to array creation, generation of redundant data, recovery from a failed drive,
and so on, must be performed by software using the CPU of the server. The software is provided by the
motherboard vendor, and usually consists partly of firmware in the BIOS of the server, and partly of an
operating system-specific installable device driver. Software RAID is resource-hungry and places a load
on the server on which it runs. Additionally, a software RAID array cannot be used as a boot disk, since
the array is not usable until the computer has booted up.

RAID I/O Steering and Zero Channel RAID

RAID I/O Steering (RAIDIOS) is a low-cost method to add hardware RAID capabilities to a server. A
hardware RAID controller carries out various tasks such as configuring the arrays, generating redundant
data, rebuilding data if a drive fails, and communicating with the drives in the array. These tasks are
carried out by two specific components: the I/O controller and the I/O processor. A RAIDIOS-capable
motherboard contains an I/O controller that can be used for driving normal SCSI disks. As it stands, the
motherboard does not have the ability to serve as a hardware RAID controller. The motherboard,
however, can accept a plug-in card that provides this capability. The plug-in card provides an I/O
processor, or, where applicable, allows a motherboard-resident I/O processor to be used for the purposes
of controlling hardware RAID. This results in a low cost implementation of hardware RAID.

Modular RAID On Motherboard (MROMB) is synonymous with Zero Channel RAID (ZCR). A plug-in card
containing an I/O processor provides hardware RAID capability to a motherboard. The plug-in card has
no I/O channels, hence the term zero channel.

Server Fault Tolerance, Scalability, Availability, and Clustering
Scope
• Discuss reliability and availability provided by hot swap drives and hot plug boards, multiprocessing,
scalability, and clustering.

• Discuss the processor subsystem of a server.

• Discuss SAN and NAS.

Focused Explanation

Server Fault Tolerance

Two approaches are used to ensure that a server does not shut down and that all server software
services keep running. You can use a server that has fault-tolerant components. Such a server uses dual
instances of components that are critical to operation. In the event that one component fails, a failover
mechanism brings up the second instance of the component. You can also use several servers combined
in a fault-tolerant manner. Server clusters are the predominant way of achieving this.

Fault-tolerant components in servers include NICs, power supply units (PSUs), and memory modules.
Usually, the extra NIC or PSU units are on standby and come up only when a failover is necessary.
Monitoring systems on the server initiate the failover. Memory module fault tolerance is provided in a
different manner: special memory modules that can store redundant data are used. These are
error-correcting code (ECC) RAM modules. The ECC data is generated like RAID redundant data and
stored in extra space in the RAM. In case of single-bit errors, the ECC is used to correct the data.

Server availability is a measure of the degree to which a server is operable. Availability is increased by
using measures like adapter teaming and clustering.

Scalability, Availability, and Clustering

Server clustering comes in two main types: stateless and stateful. The goal of stateless clustering is to
provide performance scaling; it is not relevant to fault tolerance. Fault tolerance is achieved with stateful
clustering. As in every type of clustering, stateful clustering involves connecting multiple computers in
such a manner that client computers and users perceive only a single entity. Each server in the cluster is
called a node. Forms of stateful clustering are as follows:

• Active/active clustering: All nodes participate actively in the cluster. If one node fails, the
remaining nodes take on the failed node’s workload.

• Active/standby clustering: Nodes are divided into active-standby pairs. If the active node in a pair
fails, then the standby node takes over. This is an expensive solution because the standby nodes
are usually idle.

Clustering is a major method of providing reliability, availability, and scalability. Stateful clustering
provides reliability as well as availability: in spite of the failure of individual nodes, the cluster remains
operational. Stateless clustering provides scalability, where scaling up the performance merely entails
adding more nodes to the cluster.

Processor Subsystem
Scope
• Discuss the processor subsystem of a server.

Focused Explanation

A processor subsystem is a unit containing one or more processors, along with its own memory and I/O
capabilities. Different subsystems can be plugged into a host computer system, or a server, to build a
system that provides vastly enhanced throughput. The bus signaling protocol of the processor subsystem
may differ from the bus signaling protocol of the host computer. A bus conversion device on the
processor subsystem allows the two buses to cooperate.

Symmetric multiprocessing (SMP) is another way of speeding up a server, although the speed increase is
not as dramatic. SMP uses multiple CPUs under a single operating system, sharing memory and I/O
space. The multiple processors simultaneously execute software processes, thereby speeding up the
server. N-way multiprocessing refers to the number of processors sharing memory in a node. For
example, 4-way SMP uses 4 processors, and 16-way SMP uses 16 processors.

Every task that a computer carries out is composed of sub-tasks. The smallest part of a task that can be
executed is a thread. If a computer can execute several threads simultaneously, then processing speeds
up. This is called multithreading. Further, if the computer can execute several tasks simultaneously, then
again execution speeds up. This is called multitasking.

A single CPU allows a computer to carry out multitasking and multithreading, provided the operating
system has support for these. However, SMP, along with appropriate operating system support, can carry
out multitasking and multithreading in a much faster manner, and thus make the computer work faster.
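
The task/sub-task/thread relationship described above can be illustrated with a small sketch. Python's
thread pool stands in here for OS-level threads; on an SMP machine with operating system support, such
threads can be scheduled on different processors (in CPython, truly parallel CPU-bound execution
additionally requires multiple processes because of the global interpreter lock):

```python
from concurrent.futures import ThreadPoolExecutor

def subtask(n):
    """One independently schedulable unit of work: a thread's workload."""
    return n * n

# A task decomposed into eight sub-tasks, each handed to a worker thread.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(subtask, range(8)))  # map preserves input order

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```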

64-bit Server Environments

A 64-bit environment is a hardware platform, along with operating system support, that uses 64-bit
quantities as a single unit (word) of data. 64-bit hardware platforms have been released in the form of
motherboards and boxed processors; these include the Athlon 64 and Opteron processors from AMD
and the Itanium processor from Intel. Chipsets and motherboards for 64-bit computing are also
available. Microsoft released a 64-bit version of Windows Advanced Server for Intel's Itanium 2
processor in 2002.

Storage Area Network and Network Attached Storage
Scope
• Discuss Storage Area Network and Network Attached Storage.

Focused Explanation

SAN and NAS

A SAN employs a dedicated back-end network to provide storage. Servers on a LAN use an FC
switched-fabric network to connect to one or more enterprise storage devices, such as RAID arrays.
Clients on the network use their Ethernet network to access the servers. Using the newest fiber-optic
cables, the range of the SAN can stretch up to 10 km. SANs extending to 100 km or more are possible
but rare and expensive. A SAN stores block-oriented data and is most efficient for databases, where the
data is stored as blocks rather than files.

NAS uses storage devices that attach directly to the LAN. The storage devices form a pool and use a
single filesystem. Some NAS devices allow simultaneous file access from different operating systems,
such as Unix and Windows.

NAS is best suited for file access on the network and for transfers involving small block sizes. NAS uses
an existing Ethernet infrastructure and does not need an expensive dedicated network like SAN. NAS is
also more suited for burst transfers involving a large number of I/O operations per second (IOPS).

The NAS storage device, sometimes called a filer, is relatively inexpensive, and can be set up quickly. A
SAN infrastructure, on the other hand, is much more expensive. Internet service providers (ISPs) often
use NAS to provide shared storage for Web servers.

The range of NAS is limited to the range of the network it is used on, which is usually the segment length
of Ethernet.

Review Checklist: General Server Hardware Knowledge


Know the characteristics, purpose, function, limitations, and performance of system bus
architectures.

Know the characteristics of adapter fault tolerance.

Know the basic purpose and function of various types of servers.

Know server memory requirements and characteristics of types of memory.

Know differences between different SCSI specifications.

Know differences between different ATA (IDE) specifications.

Know the features of fiber channel hardware.

Know the features of iSCSI and FCIP.

Know the features of different RAID levels.

Discuss reliability and availability provided by hot swap drives and hot plug boards,
multiprocessing, scalability, and clustering.

Discuss the processor subsystem of a server.

Discuss SAN and NAS.

Installation and Configuration of Server Components

Server Installation
Scope
• Discuss pre-installation activities.

• Discuss the Hardware Compatibility List (HCL).

• Discuss installation best practices.

Focused Explanation
Installing servers involves carrying out several tasks, each a prerequisite for the next. The first set of
tasks consists of the pre-installation activities, which include planning the installation and verifying the
plan.

Plan the Installation

Planning the installation requires determining what you will achieve through the installation. After you
have determined the goal, create a project plan that lays out the solution. Any plan must clearly state
which tasks are to be performed, how long it takes to complete each task, and how much it costs to
complete each task.

Formal documents should be generated as the output of the planning process, including the project plan,
timeline, and cost analysis.

After you have created the documents, get management’s approval for the project. This approval is an
essential step in the verification process.

If a vendor is required to do any part of the work, a statement of work (SOW) must be created in which
all the deliverables are clearly defined. An approved SOW clearly states the requirements and
responsibilities of both parties.

Verify the Plan

Once the project plan, timeline, and cost analysis document are ready, verify the following details:

• The hardware and software that you are going to install are adequate and otherwise suited to the
purpose.

• The hardware you will install is compatible with the operating system (OS) and other software that you
intend to use.

• The physical and electrical infrastructure required is available or will be available at the time of
installation. For example, there should be sufficient UPS capacity, space in the server room, and
network bandwidth available for the new installation. If these are insufficient, there should be a plan to
upgrade the infrastructure.

• Network protocol and domain name compatibility is maintained, and naming conventions are not
violated.

Reviews of the project’s documents by management, a peer, and a vendor ensure higher quality and
technical accuracy. A management review enables management to bring a high-level perspective and
added value to the project. A peer review, conducted by an individual with expertise comparable to
yours, ensures the project’s documents are technically accurate. A vendor review can bring clarity to
expectations and responsibilities before the SOW approval.

One important verification step impacts all equipment purchased for the project: verify that supplied
material conforms to what you had specified.

HCL

Operating system vendors publish what is generically referred to as the Hardware Compatibility List
(HCL). The HCL indicates the hardware that is guaranteed to work with the operating system. If all the
components of a server that you are working with are listed in the HCL of the operating system, then you
should face no compatibility issues.

Installation Best Practices

Installation of internal server components requires you to take electrostatic discharge (ESD) precautions.
ESD occurs when static electricity builds up on equipment and humans; opposing charges neutralize
each other through a discharge. The voltages involved are quite high, reaching tens of thousands of
volts. These extremely high voltages do not pose a danger to humans because the current involved in a
discharge is low. However, discharge currents pose a real threat to computer circuitry because they can
permanently destroy the chips used in circuits.

ESD best practices are as follows:

• Before touching any computer circuit board, discharge your fingers by touching a grounded metal
object, such as a computer case that has a power cable plugged into a socket. The power cable of a
computer case grounds the case, and the grounded case discharges the static in the fingers.

• Use an antistatic wristband. The wristband has a strap that must be connected to an electrical
ground. It also includes a high value resistor that takes away static charges from the hands. At the
same time, the resistor prevents high currents from passing to the ground, thus protecting the user in
case there is an electrical fault in the equipment.

• Use an antistatic mat. Any equipment to be opened may be rested on an antistatic mat, which
prevents electrostatic discharges.

• Avoid the use of carpets, particularly synthetic carpets, on server room floors. Carpets cause static
build-up.

• Use antistatic sprays around equipment to be opened. Antistatic sprays suppress static buildup.

• Use antistatic bags to store or carry circuit boards. Antistatic bags prevent static buildup, thus
protecting components during transportation.

Installation Practices for Internal Components

When installing internal components, there are some general practices to follow. First, ensure that the
computer is powered down and that the power cable is disconnected. Most modern computers never
fully cut power to the motherboard, even when powered down, so dropping a screwdriver on a
motherboard is risky unless you have first disconnected the power cable.

Install internal components within the server case first. The following best practices may be considered
when installing internal components:

• While installing boards, check that bus slot compatibility is maintained. A PCI board, for instance, will
not fit into an EISA slot.

• While installing ATA drives, ensure that master/slave jumper settings are correct. For SCSI drives,
ensure the device ID is correct and that the SCSI bus is correctly terminated. The orientation of
cables must be correct.

• While installing processors, ensure compatibility of the processor with the motherboard. The
documentation that came with the motherboard will indicate the processors that can be used. While
installing processors in a multiprocessor server, ensure that the stepping level is the same for all
installed processors. The stepping level of a processor is version-related compatibility identification.
Some motherboards may provide jumpers that must be set to indicate the presence of additional
processors. The motherboard documentation indicates the required jumper settings.

• While installing memory, ensure that the type and speed of the memory module is correct. The type
will dictate whether the module will fit into the slot on the motherboard. Server motherboard
documentation will indicate memory types that may be used. It is important to ensure that all memory
modules installed in a server be matched as closely as possible to each other. First, match memory
modules in terms of the speed and the type of module. Furthermore, ensure that all installed memory
modules are from the same manufacturer.

• While routing the internal cables, ensure that airflow to the CPU and the memory modules is not
blocked.

• Where necessary, extra fans should be fitted inside the system case to provide extra cooling. A CPU
fan is used in addition to the power supply fan and mounts on top of the CPU to provide additional
cooling for the CPU.
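The processor stepping check from the list above can be automated on Linux by parsing the contents of /proc/cpuinfo. The following is a minimal sketch; the sample text is an illustrative two-processor excerpt, not output from a real system:

```python
def steppings_match(cpuinfo_text):
    """Return True if every processor in a cpuinfo listing reports
    the same stepping level (a requirement for multiprocessor servers)."""
    steppings = {
        line.split(":")[1].strip()
        for line in cpuinfo_text.splitlines()
        if line.startswith("stepping")
    }
    return len(steppings) <= 1

# Illustrative excerpt: two processors, both at stepping 7
sample = """processor : 0
stepping : 7
processor : 1
stepping : 7"""
print(steppings_match(sample))  # True: steppings are matched
```

On a real server, the same function could be fed the output of `open("/proc/cpuinfo").read()`.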

Mount the server properly in the rack, by screwing it down to prevent it from being removed. It is best to
install heavier items, such as a rack-mount UPS, at the bottom of the rack to prevent the rack from getting
top heavy.

Depending on your organizational standards, a UPS may be a part of every rack, central to the whole
server room, or central to the entire computing facility. If the installed UPS capacity is not adequate, then
additional UPSs may need to be installed. The power rating of a UPS is the maximum power that the
UPS can supply and is expressed in kilowatts (KW) or in kilovolt-amperes (KVA). The backup duration, or
run time, of the UPS is the amount of time that the UPS is able to supply the power rating. Keep in mind
that if the load on the UPS is less than the maximum power rating, then the backup duration increases.
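The relationship between load and backup duration can be sketched as a first approximation: runtime scales roughly inversely with load. Real batteries discharge nonlinearly, so treat this as an optimistic estimate; the figures used below are hypothetical:

```python
def estimated_runtime_minutes(rated_kva, rated_runtime_min, load_kva):
    """Approximate UPS runtime at a partial load, assuming a linear
    battery model: half the load gives roughly twice the runtime."""
    if load_kva <= 0 or load_kva > rated_kva:
        raise ValueError("load must be positive and within the UPS rating")
    return rated_runtime_min * (rated_kva / load_kva)

# A 10 kVA UPS rated for 15 minutes at full load, carrying a 5 kVA load:
print(estimated_runtime_minutes(10, 15, 5))  # 30.0 minutes
```

Vendor runtime charts should always take precedence over this kind of estimate when sizing a UPS.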


Some UPSs are modular and allow the addition of power modules to expand UPS capacity. The battery
backup time can also be augmented in such UPSs by adding battery modules. In general, modular UPSs
use a case called a frame that allows power modules and battery modules to be hot plugged into bays
provided in the front. In most cases, the top half of the frame is reserved for power modules. Battery
modules can be plugged in any available bay, but it is best to install batteries as low as possible within
the frame to prevent the UPS from becoming top heavy.

You should connect external peripherals, such as the keyboard, mouse, and monitor, to the appropriate
port on the server. In case you wish to use a KVM switch, first install it on the rack, and then plug the
monitor, mouse, and keyboard into the KVM switch.

Rack mount modems are modems designed to fit inside a rack-mount case called a chassis. Multiple
modems, up to 48 or so, can be fitted inside a single chassis. Central power supplies and cooling fans
within the chassis support all the modems installed. Each modem can operate independently of the
others in the rack. A chassis can contain a management module in the form of a hardware card that is
installed along with the modems. The management module allows remote management of the individual
modems.

In case you need to make cables yourself, cut the cables to the required length and crimp on the headers
using a crimping tool. You should purchase high quality hardware and cables. Inferior cables can result in
errors that are difficult to diagnose. Locating and removing such cables can be a difficult and expensive
task. You should also ensure that Ethernet segment size standards are not violated. Using a standard to color-code your cables ensures that the sequence of wires inside the headers remains the same for all cables.
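For example, the widely used TIA/EIA T568B standard fixes the wire order crimped into every RJ-45 header. A quick sketch to verify a recorded pin sequence against that standard (the function name is illustrative):

```python
# T568B wire order, pin 1 through pin 8
T568B = [
    "white-orange", "orange",
    "white-green", "blue",
    "white-blue", "green",
    "white-brown", "brown",
]

def is_t568b(wire_sequence):
    """Check a pin-1-to-pin-8 wire sequence against the T568B standard."""
    return list(wire_sequence) == T568B

print(is_t568b(T568B))  # True
```

Crimping every cable to the same standard, whether T568A or T568B, is what matters; mixing the two on one straight-through cable produces a crossover by accident.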


Ethernet, Cable Types and Connectors, Cable Management, and Rack Mount Security

Scope
• Discuss the Ethernet standards.

• Discuss cable types and connectors used.

• Discuss cable management.

• Discuss rack mount security.

Focused Explanation
Ethernet Types

Various Ethernet standards exist, characterized by different speeds and cable types. Common Ethernet
speeds are as follows:

• 10 Mbps (megabits per second) for standard Ethernet

• 100 Mbps for Fast Ethernet

• 1000 Mbps (equal to 1 Gbps) for Gigabit Ethernet

• 10 Gbps for 10-Gigabit Ethernet


Broad characteristics of different Ethernet types are indicated in Table 2-1.

Standard | Speed | Maximum Cable Length (segment) | Cable Type

10Base-FL | 10 Mbps | 2,000 m | Fiber

10Base-T | 10 Mbps | 100 m | Category 3, 4, or 5 UTP

100Base-FX | 100 Mbps | 400 m (MMF); 400 m (SMF) | MMF or SMF

100Base-TX | 100 Mbps | 100 m | Category 5 UTP

1000Base-CX | 1 Gbps | 25 m | 150-ohm STP

1000Base-LX | 1 Gbps | 5,000 m (SMF); 550 m (50-micron MMF); 440 m (62.5-micron MMF) | SMF or MMF

1000Base-SX | 1 Gbps | 550 m (50-micron MMF); 220 m (62.5-micron MMF) | MMF

1000Base-T | 1 Gbps | 100 m | Category 5, 5e, or 6 UTP

10GBase-ER | 10 Gbps | 40,000 m | SMF

10GBase-LR | 10 Gbps | 10,000 m | SMF

10GBase-SR | 10 Gbps | 65 m | MMF

Table 2-1: Characteristics of Ethernet Types
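Segment-length limits like those in Table 2-1 can be encoded in a small lookup table, which is handy when checking a cable plan. The values below are transcribed from the table (copper standards only, for brevity); the function name is illustrative:

```python
# Maximum segment lengths in meters for common copper Ethernet standards
MAX_SEGMENT_M = {
    "10Base-T": 100,
    "100Base-TX": 100,
    "1000Base-T": 100,
    "1000Base-CX": 25,
}

def segment_ok(standard, length_m):
    """Return True if a planned cable run respects the standard's limit."""
    return length_m <= MAX_SEGMENT_M[standard]

print(segment_ok("1000Base-T", 90))   # True: within the 100 m limit
print(segment_ok("1000Base-CX", 30))  # False: exceeds the 25 m limit
```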

Cables and Connector Types

Most networks use unshielded twisted pair (UTP) cables. A UTP cable contains four pairs of wires, with each wire in a pair twisted around the other. In some cases coaxial cable is used, although coaxial cable is now outdated. Shielded twisted pair (STP) cable is used in some networks; STP cable includes a shield to protect against interference.

UTP/STP Cables

In a twisted pair, two conductors are wound around each other to cancel out electromagnetic interference
(EMI), also referred to as crosstalk. The greater the number of twists, the more the crosstalk is reduced.


UTP is not surrounded by any shielding. This type of cable is commonly used in telephone lines and
computer networking. STP has an outer conductive braid similar to coaxial cables and offers the best
protection from interference. It is commonly used in Token Ring networks.

Category 3 cable: This media type is commonly used for voice and data transmission up to 16 MHz
frequencies. It supports up to 10 Mbps for Ethernet. It consists of eight copper-core wires, twisted into
four pairs with three twists per foot. Category 3 cable is used in ISDN, 10Base-T Ethernet, 4 Mbps Token
Ring, and 100VG-AnyLAN networks; it has also been used in POTS since 1993.

Category 5 cable: This media type handles up to 100 MHz frequencies. It is like Category 3 cable except
it has eight twists per foot. Category 5 cable is used in most Ethernet networks and other networks,
including FDDI, and 155 Mbps ATM.

Category 5e cable: This media type, created in 1998, supports Gigabit Ethernet or 1-Gbps networks. It is
tested to a frequency of 200 MHz. Testing is more stringent with Cat 5e than with Cat 5 and includes
additional measurements, several of which help to improve the cable's noise characteristics. It is used in
Ethernet networks, including Gigabit Ethernet and other networks and 155 Mbps ATM.

Category 6 cable / ISO Class E: This media type supports Gigabit Ethernet. It is tested to a frequency of
up to 250 MHz. This specification provides vast improvements over Category 5e, particularly in the area
of performance and immunity from external noise. It provides backward compatibility to Category 3, 5,
and 5e standards.

Fiber Optic Cables

Single-mode fiber (SMF) optic cable uses a fiber that has a single core with which to transmit data, meaning only one signal can be sent or received at a time. It has a very small core diameter (approximately 5 to 10 microns). Signal transmission is possible at very high bandwidth and over very long distances; signals can travel approximately 30 miles before distortion occurs. SMF uses laser diodes to create signals.

Multi-mode fiber (MMF) optic cable has a fiber that allows multiple signals to be simultaneously
transmitted and received. It has a larger core diameter (50, 62.5, or 100 microns) than SMF, and it
permits the use of inexpensive light-emitting diode (LED) light sources. The connector alignment and
coupling of MMF is less critical than single mode fiber. MMF’s transmission distances and bandwidth are
less than SMF due to dispersion. Signals can travel approximately 3000 feet before distortion occurs.

RJ-45 Connector

The RJ-45 connector is similar to the RJ-11 telephone cable connector but is larger and accommodates eight wires. It is commonly used for 10Base-T and 100Base-TX Ethernet connections. It is used on all types of twisted pair cable, including Category 3, 4, and 5 UTP. Figure 2-1 shows the RJ-45 and RJ-11 plugs.


Figure 2-1: RJ-45 and RJ-11 Plugs

MT-RJ Connector

MT-RJ is a small duplex connector featuring two pre-polished fiber stubs and is used to connect fiber
cables to hardware. This connector resembles the RJ-45 connector. Figure 2-2 shows the MT-RJ
connector.

Figure 2-2: MT-RJ Connector

This connector type is used by both SMF and MMF optic cables.

ST/SC Connectors

Fiber network segments require two fiber cables: one for transmitting data and the other for receiving
data. Each end of a fiber cable is fitted with an SC or ST plug that can be inserted into a network adapter,
hub, or switch.

Note: SC and ST connectors can be interconnected with the use of adapters or couplers, but it is best to use the same type of connector throughout your network.

Figure 2-3 shows the ST and SC type connectors.

Figure 2-3: ST and SC type Connectors


Fiber LC

The Fiber Local Connector (LC) is a small form factor (SFF) connector and is ideal for high-density applications. A Behind-the-Wall (BTW) version with a short connector and boot is available for ultra-compact requirements.

The LC connector has a zirconium ceramic ferrule measuring 1.25 mm in outer diameter (OD) with either a Polished Connector (PC) or Angled Polished Connector (APC) endface, providing optimum insertion and return loss. It is used on small-diameter mini-cordage (1.6 mm/2.0 mm) as well as 3.0 mm cable and is available in pre-assembled or unassembled formats. LC adapters are available in simplex and duplex configurations and fit the standard RJ-45 panel cutout. They feature self-adjusting panel latches and a choice of mounting orientations with labeled polarity. LC connectors are used in Gigabit Ethernet, video, active device termination, telecommunication networks, multimedia, industrial, and military applications. Figure 2-4 shows the
Fiber LC connector.

Figure 2-4: Fiber LC Connector

IEEE 1394 (FireWire)

FireWire connectors are not used in network cables. Rather, they exist as 4-pin, 6-pin, and 9-pin connectors that are used to communicate with IEEE 1394 ports, which are high-speed serial ports on computers. A 6-pin FireWire 400 connector can supply a maximum of 12.8 V (no load) and up to 7 W of power per port while the system is running. Figure 2-5 shows the IEEE 1394 connectors.

Figure 2-5: IEEE 1394 connectors


FireWire 800 uses a 9-pin connector and provides around 12.8 V on the power pin. FireWire devices with
6-pin or 9-pin connectors carry power from a FireWire bus. However, 4-pin FireWire connectors cannot be
powered by a FireWire bus and need to be powered separately.

An IEEE 1394 port can support up to 63 devices by daisy chaining one to the other.


Universal Serial Bus

Universal Serial Bus (USB) connectors are not used on network cables. USB is a high-speed serial port
provided on computers. USB connectors are available as type A, which are commonly plugged into
sockets on the computer, and type B, which usually plug into sockets on external devices. Figure 2-6
shows the USB connectors.

Figure 2-6: USB Connectors

A USB port can support up to 127 devices by daisy chaining one to the other. However, the signal must
be regenerated using a USB hub every four devices or so.

Cable Management

Any server requires cabling, both internally and externally. The network cables that run between the server, the wall jack, and the wiring closet are called horizontal runs. Different wiring closets are connected together using a backbone cable. The backbone cable runs
vertically between floors and horizontally across rooms. Where the backbone cable crosses a room, it
usually runs through the plenum or the air space above the false ceiling. Cables can run through
conduits, which are metallic or plastic pipes meant for routing cable. J hooks are sometimes used to
support cables as they run along walls in the plenum. J hooks are metallic hooks that are fastened to
walls to support and route exposed cables.

Ultimately, a lot of cable must enter the equipment rooms where servers are kept. Cable can be
positioned across walls supported by J hooks. Ladder trays are another means of routing large bunches
of cable around a server room. Ladder trays are elevated plastic trays approximately one foot wide.
Supported by vertical members, ladder trays form an elevated carriageway for cable runs, allowing cables
to be kept off the floor.

Raceways are a kind of conduiting system that is mounted on the surface of walls, rather than within
them. Raceway channels are fastened to walls, and the cables are placed in them. Snap-fit covers are
used to protect the cables and provide a neat finish.


Rack Mount Security

Rack mount security involves protecting the rack and the contents of the rack. The first level of security is
the door to the server room itself. The door should have some access security, such as a swipe-card
reader or some other form of security restricting entry to the server room. The lockable doors on the rack
should also be locked.

Restricting access to the server room and to the servers within the rack has a two-fold benefit: it prevents
both malicious tampering and general curiosity. Do not forget that an otherwise well-intentioned, but
curious user can wreak havoc with server settings.


Power On Self-Test and Server Room Environment


Scope
• Discuss the power on self-test.

• Discuss the server room environment and security.

Focused Explanation
Verifying Power on Sequence

After installation of components, the power on self-test (POST) may reveal errors due to configuration or
incompatibility issues. In case of errors, POST will usually display error messages on the screen, allowing
the user to intervene and fix the problem. All errors reported by POST must be addressed before allowing
a server to boot.

POST errors may be reported as a line of text displayed on the monitor or as beeps from the computer speaker. Computer BIOSes are supplied by a few vendors, including AMI, Award, IBM, and Phoenix. The exact error message varies according to the type of BIOS used. Table 2-2 lists typical POST error indications.

BIOS Type | Error Code | Problem/Solution

Most types | 1, 2, or 3 beeps | Memory-related (or possibly motherboard-related) error. Investigate the memory error first, then check for correct motherboard installation. Swap memory or motherboard.

AMI | 4 beeps | Timer error. Check for correct motherboard installation. Swap motherboard.

Most types | 5 or 7 beeps | Motherboard-related error. The motherboard is faulty. Swap motherboard.

AMI | 6 beeps | 8042 gate A-20 error. Check for correct motherboard installation and CPU installation, and verify that the keyboard is operational. Swap motherboard.

Table 2-2: POST Error Messages


BIOS Type | Error Code | Problem/Solution

AMI | 9 beeps | ROM-related motherboard error. Swap motherboard.

AMI | 10 beeps | CMOS Shutdown Register Read/Write error. Check the CMOS battery (replace if necessary). Swap motherboard.

AWARD | "BIOS ROM checksum error" message | Swap motherboard.

AWARD | "CMOS battery failed" message | Replace the CMOS battery.

AWARD | "CMOS checksum error – Defaults loaded" message | Problems with the ROM chip or CMOS battery. Checksum errors indicate that the CMOS has become corrupt; the system loads the default configuration when it encounters a checksum error. This is typically caused by a weak CMOS battery and can be corrected by replacing the CMOS battery.

AWARD | "Display switch incorrectly set" message | This is a legacy setting. The monochrome/CGA switch setting on the motherboard is incorrectly configured. Refer to the motherboard documentation, and set the switch correctly.

Phoenix | 1-3 beeps | CMOS RAM Read/Write error. Check the CMOS battery, and replace if necessary.

Phoenix | 1-1-4 or 1-2-2-3 beeps | ROM checksum error. Swap motherboard.

Phoenix | 1-2-1 beeps | Timer error.

Phoenix | 1-2-2, 1-2-3, 3-1-1, or 3-1-2 beeps | DMA error. Check for correct motherboard installation. Swap motherboard.

Table 2-2: POST Error Messages (continued)


BIOS Type | Error Code | Problem/Solution

Phoenix | 3-1-2, 3-1-4, 4-2-1, 4-2-4, or 2-2-3-1 beeps | Interrupt-related errors. Check for correct motherboard installation. Swap motherboard.

Phoenix | 4-3-4 beeps | RTC error. Replace the CMOS battery. Swap motherboard, if necessary.

IBM | Continuous beep; repeating short beeps; 1 long, 1 short beep | PSU or system board error. Check the PSU for correct operation. Check for correct motherboard installation. Swap motherboard.

IBM | 1XX message | System board errors. Check for correct motherboard installation. Swap motherboard.

Table 2-2: POST Error Messages (continued)
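A table like 2-2 lends itself to a simple lookup keyed by BIOS type and beep pattern. The sketch below transcribes a few entries from the table; the function name is illustrative:

```python
# (BIOS, beep pattern) -> likely problem, transcribed from Table 2-2
POST_BEEPS = {
    ("AMI", "4 beeps"): "Timer error",
    ("AMI", "6 beeps"): "8042 gate A-20 error",
    ("Phoenix", "1-2-1 beeps"): "Timer error",
    ("Phoenix", "4-3-4 beeps"): "RTC error",
}

def diagnose(bios, pattern):
    """Look up a beep pattern for a given BIOS vendor."""
    return POST_BEEPS.get(
        (bios, pattern),
        "Unknown code; consult the BIOS vendor documentation",
    )

print(diagnose("Phoenix", "4-3-4 beeps"))  # RTC error
```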

Some adapters have a hot-key sequence that must be pressed to enter a configuration program. For
example, pressing CTRL+A invokes the setup utility for Adaptec SCSI adapters.

All computers have a CMOS setup utility that is commonly invoked by a hotkey. Immediately after a server is switched on, there is a limited amount of time to press the hotkey. When the hotkey is pressed, the computer invokes a setup program that allows you to configure the computer. The Del key is the most common hotkey; other combinations include CTRL+ESC, F10, and F2. Some servers use a special floppy disk or a CD-ROM to invoke the setup program.

In modern servers, most of the configuration in the CMOS setup is automatically carried out whenever the
computer discovers new components.


Server Room Environment

An ideal server room environment should include the following characteristics:

• Space and cleanliness: The server room should not be overcrowded. Limited space can be stretched
out using racks or blade servers. Keeping the room clean and organized will help promote efficiency
at work. Access to the server room should be controlled. The door to the server room should have an
access control device, such as a smart card reader or at minimum a lock. Biometric devices provide
high security but at a substantial cost increase over other options. Individual racks should be kept
locked.

• Controlled temperature, humidity, and static conditions: Like most computer equipment, servers
require controlled temperature and humidity. High temperatures are detrimental to electronic
components. Over-dry conditions promote ESD. A solution is to use air conditioning so that both temperature and humidity are brought under control. To limit ESD, the server room floor should not be carpeted; any insulating material on the floor promotes static buildup. Antistatic mats and antistatic sprays should be used liberally.

• Every server within the server room contributes to the heating load of the room. The heating load
must be compensated through air conditioning so that the server room temperatures are maintained.
The unit of heat is the British Thermal Units (BTU). Summing up the amount of heat in BTU/hr given
off by each piece of equipment in the server room allows you to calculate the total cooling needs of
the room. Multiplying the BTU/hr by 0.000083578 gives the required air conditioning capacity in tons.

Alternatively, the air conditioning requirements can be approximated from the power consumption of
each piece of equipment. The power consumed by any electrical equipment is the product of the
mains supply voltage (V) and the current in amperes (A) drawn by the equipment. A server drawing
20 A consumes 20 A x 110 V = 2200 W. Multiplying the power consumption in watts of all the
equipment in the server room by 0.000285 yields the approximate air conditioning requirements in
tons.

• Availability of conditioned and uninterrupted power: The server room should use only conditioned
power. The use of surge and spike suppression equipment is recommended. There should be a UPS
of adequate ratings to supply the connected load for a specified duration. The duration of backup for
the UPS is calculated from historic power outage patterns. A standby power supply in the form of a
generator is often used.

• Fire protection: Fire poses a very high risk for a server room for several reasons. First, the equipment within a server room is very expensive. Second, the loss to the organization from the unavailability of servers affected by fire is usually very high. Some form of fire suppression equipment is normally required in a server room. Water sprinklers are usually not used due to the presence of electrical connections in the servers. Halon-based extinguishers are an option, but halon can be fatal to humans and is ozone depleting. Halon alternatives, such as FM-200, offer an environmentally friendly and non-toxic fire suppression method.
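The cooling-load conversions described in the list above can be sketched directly. The factors come from the text: one ton of air conditioning is approximately 12,000 BTU/hr, and one watt dissipates approximately 3.412 BTU/hr:

```python
def tons_from_btu(btu_per_hr):
    """Convert a heat load in BTU/hr to air-conditioning tons."""
    return btu_per_hr * 0.000083578  # approximately 1 / 12,000

def tons_from_watts(watts):
    """Approximate tons of cooling from electrical power consumption."""
    return watts * 0.000285  # watts -> BTU/hr (x3.412) -> tons (/12,000)

# The server from the text: 20 A at 110 V = 2200 W
print(round(tons_from_watts(20 * 110), 2))  # about 0.63 tons
```

Summing this figure across every piece of equipment in the room gives the total air conditioning capacity to provision.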


Review Checklist: Installation and Configuration of Server Components
Discuss pre-installation activities.

Discuss installation best practices.

Discuss the Ethernet standards.

Discuss cable types and connectors used.

Discuss cable management.

Discuss rack mount security.

Discuss the power on self-test.

Discuss server room security and environment.


Configuration


Firmware and RAID Configuration


Scope
• Discuss firmware.

• Discuss RAID configuration.

Focused Explanation

Computers contain a special class of software called firmware. This kind of software is not normally
supplied or stored on hard disk. Instead, it is stored on a special kind of memory called read-only memory
(ROM). The computer often needs to execute firmware before the operating system has loaded. At this
stage ROM is readable by the computer, but disk drives are not. Examples of such firmware include the
computer basic input/output system (BIOS), which includes diagnostic tests and hardware device-drivers
stored in ROM. It also includes firmware for certain plug-in hardware components, such as RAID
controllers, which require an operating system-independent store of device drivers.

RAID

RAID provides reliability through redundancy. In case of failure of a drive in the RAID array, the drive can
be replaced, and the data rebuilt from redundant data stored on the other drives.

Drives used in RAID arrays are as follows:

• Cold spare: a drive that is not installed anywhere but is kept on hand to be used when required.

• Warm spare: a drive that is installed in the RAID enclosure but is not powered up. In case of a single drive failure in the RAID array, the warm spare can be started up, and data rebuilding can begin.

• Hot spare: a drive that is installed in the RAID enclosure and powered up, although it is not used under normal circumstances. If a single drive fails, the RAID controller switches over smoothly to the hot spare and starts rebuilding the data on the drive.

• Hot swap: a drive that can substitute for a defective drive in a RAID array without powering down the array. The replacement process consists of pulling out the defective drive and inserting the hot swap drive in its place. The RAID controller automatically detects the new drive and starts rebuilding the data and using the drive.

• Hot plug: a drive that is installed in a RAID array to augment the array. Electrically, the hot plug drive is identical to the hot swap drive; there is no real difference between the two except the name. Just as in the case of hot swap, the RAID array does not need to be powered down for the hot plug drive to be plugged in.

• Global hot spare: a hot spare that can take the place of a member drive that fails in any logical drive within the RAID array. On failure, the global hot spare joins the logical drive, and the data rebuild starts automatically.


• Dedicated hot spare: a hot spare that can take the place of a member drive that fails in a specified logical drive within the RAID array. On any failure within the specified logical drive, the dedicated hot spare joins the logical drive, and the data rebuild starts automatically. A dedicated hot spare is also called a local hot spare.
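The difference between global and dedicated hot spares comes down to which logical drives a given spare may join. A toy model (class and names are illustrative, not any controller's API):

```python
class HotSpare:
    def __init__(self, name, dedicated_to=None):
        # dedicated_to=None models a global hot spare;
        # a logical-drive name models a dedicated (local) hot spare.
        self.name = name
        self.dedicated_to = dedicated_to

    def can_replace(self, logical_drive):
        """A global spare serves any array; a dedicated spare serves one."""
        return self.dedicated_to is None or self.dedicated_to == logical_drive

global_spare = HotSpare("spare0")                     # global hot spare
local_spare = HotSpare("spare1", dedicated_to="LD1")  # dedicated to LD1

print(global_spare.can_replace("LD2"))  # True: serves any logical drive
print(local_spare.can_replace("LD2"))   # False: dedicated to LD1 only
```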

A logical drive is an array of independent drives on a computer. Each logical drive can be a RAID array.
Logical drives configured to the same or different RAID levels are further combined into logical volumes in
a computer. For a RAID system, the operating system file system exists at the level of logical volumes or
partitions of logical volumes.

A hardware RAID controller contains cache memory to improve the read/write performance of the RAID
array. Every disk storage device, including RAID, has some latencies or delays associated with its
operation. The latencies of disk devices are usually due to mechanical and electrical characteristics of the
device. In any case, the latencies make the device slower than the transfer rate of the interface.

Cache offers a way to overcome some of the latencies of disk storage. The cache consists of RAM that buffers read and write operations. When a disk write is desired, the CPU transfers a block of data into the cache. Because the cache is fast, the write process finishes quickly. The CPU then releases the bus and turns its attention elsewhere. The cache writes its contents onto disk at leisure, without tying up the bus or the CPU. Such a caching scheme is called write-back caching.

Cache provided with RAID, however, introduces its own problems. If the RAID setup were to fail before
the cache can commit its contents to disk, then the data would stay in the cache. When the array is
powered down for repairs, the cache contents would be lost. The large amount of cache provided with
RAID implies that the data loss in such an event would be substantial. For critical data that cannot be
regenerated, RAID cache can be turned off for write operations, providing what is called write-through.
Here, the write operations for cache and disk proceed in parallel. The speed benefits of cache are
obviously not available when the cache is disabled.

To improve reliability, RAID cache can sometimes be battery backed. In case of failure, the battery
maintains power to the cache, enabling it to retain data until the RAID is powered up again.

Read operations using cache do not pose a danger of data loss, and cache is always enabled for reading.
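The write-back and write-through policies described above can be contrasted in a toy cache model, where the only difference is when the backing disk is updated. This is a simplified sketch that ignores eviction and read caching:

```python
class DiskCache:
    def __init__(self, write_back=True):
        self.write_back = write_back
        self.cache = {}  # fast RAM buffer
        self.disk = {}   # slow backing store

    def write(self, block, data):
        self.cache[block] = data      # fast: the CPU returns immediately
        if not self.write_back:
            self.disk[block] = data   # write-through: disk updated in step

    def flush(self):
        self.disk.update(self.cache)  # write-back commits at leisure

wb = DiskCache(write_back=True)
wb.write("b0", "data")
print("b0" in wb.disk)  # False: data is at risk until flush() commits it
wb.flush()
print("b0" in wb.disk)  # True
```

A write-through instance (`DiskCache(write_back=False)`) would show the block on disk immediately after `write()`, which is exactly why it is safer but slower.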

RAID Controller Cards

External RAID controllers are contained within the external housing for RAID. The communication link between the RAID enclosure and the computer is either a SCSI bus or a Fibre Channel bus. The bandwidth of the SCSI bus sometimes bottlenecks the performance of RAID, although this is less of a problem with Fibre Channel given its greater bandwidth.

Internal RAID controllers are meant to be installed within the server case. In this case, the data from the
RAID controller flows directly to the PCI bus, and bandwidth is not an issue.

RAID controller cards contain firmware routines to create arrays and define logical drives and volumes.
When a drive fails within an array, RAID keeps working, although it slows down. The controller card
contains the firmware to rebuild data after a failed drive has been replaced.


In both cases, the RAID controller allows you to create RAID partitions. The RAID controller hides the
details of RAID partitions. The operating system uses the RAID array as a single storage entity, and is
usually unaware of the underlying structure of the RAID array.

A RAID partition is operating system independent, and can be used by any operating system as long as
the operating system supports the RAID hardware. An operating system partition, in contrast, is created
by the normal operating system partitioning tool, such as FDISK. An operating system partition is specific
to the operating system and cannot necessarily be used by any other operating system.

Note: In the case of software RAID, the operating system itself creates the underlying partition structure
of the RAID. However, for normal operations, the operating system hides the underlying RAID partitions
and treats the RAID array as a single disk or partition. The RAID utility software provided with the
operating system must be used to create and destroy software RAID partitions.


UPS and External Device Installation


Scope
• Discuss UPS configuration practices.

• Discuss installation considerations for external devices.

Focused Explanation

UPS Installation Practices

A UPS, as discussed before, is a backup power supply. A server room may have one or several UPSs
supplying power to it, depending upon the size of the server room and the reliability of power required.
Before installing a server, the installed capacity of a UPS should be verified in terms of load supported
and duration of backup provided by the UPS.

To install an extra UPS, you need to perform the following steps:

• Identify a location for the UPS. The location should be within the server room, or near it, so that
power lines carrying UPS power do not have to stretch far. There should be incoming AC power
lines available to supply the UPS. The usual environmental conditions required for servers apply
to UPSs as well: controlled temperature and humidity are desirable for UPSs.

• Ensure that all equipment needing to be connected to the UPS is powered down. Plug the UPS
into the power outlet. Then attach the server and other devices requiring power to the UPS.

• Connect the signal lead from the UPS signal port to the server serial port. UPSs continually
monitor the state of their batteries and communicate this state to the server through a serial cable
connecting the UPS and the server. Vendor-supplied software loaded on the server monitors the
signals sent by the UPS. When the batteries reach certain thresholds, the software initiates an
orderly shutdown of the server. These thresholds are often configurable.

• Start up the UPS, and then start up the server. Load the vendor-supplied software utility on the
server.
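The vendor-supplied monitoring utility described in the steps above typically runs a loop like the following. The battery readings and the shutdown threshold here are hypothetical; a real utility reads them from the UPS signal port:

```python
def monitor(battery_readings, shutdown_threshold_pct=20):
    """Walk a series of battery-charge readings (percent) and report
    the point at which an orderly shutdown would be initiated."""
    for pct in battery_readings:
        if pct <= shutdown_threshold_pct:
            return f"shutdown initiated at {pct}% battery"
    return "power restored; no shutdown needed"

# Simulated outage: charge falls as the UPS carries the load
print(monitor([95, 70, 45, 22, 18]))  # shutdown initiated at 18% battery
```

As the text notes, the threshold that triggers the shutdown is usually configurable in the vendor software.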

Backup Devices and Storage Subsystem Installation Practices

Backup devices and storage subsystems are usually ATA or SCSI disk drives and tape drives. These are
installed as per practices used for ATA and SCSI devices, and the same considerations apply.

For ATA devices, ensure that master and slave settings are correct. For SCSI devices, ensure that the
ends of the SCSI bus are terminated, and that all SCSI devices have a unique ID.
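The SCSI checks above reduce to two conditions: every device on the bus has a unique ID, and both ends of the bus are terminated. A sketch (function and parameter names are illustrative):

```python
def scsi_bus_ok(device_ids, end_termination):
    """device_ids: IDs assigned on the bus (0-15 for wide SCSI).
    end_termination: (first_end, last_end) termination flags."""
    unique_ids = len(device_ids) == len(set(device_ids))
    terminated = all(end_termination)
    return unique_ids and terminated

print(scsi_bus_ok([7, 0, 1, 2], (True, True)))  # True: valid configuration
print(scsi_bus_ok([7, 0, 0, 2], (True, True)))  # False: duplicate ID 0
```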

For backup devices that use a proprietary adapter, plug the card into a compatible slot on the server
motherboard using the normal practices used for installing internal components. The documentation that
accompanies the proprietary adapter will indicate any special installation considerations.

www.transcender.com

63
CompTIA SK0-002 Study Guide

Installation Considerations for External Devices

Installing external devices requires different procedures, depending upon the nature of the bus or port
being used. Any external bus that is hot-pluggable, for instance, USB or IEEE 1394, allows you to simply
plug the device into a running server. Most other buses require you to shut down the server before you
attach the devices. This applies to ports as well. Although you can often get away with hot plugging
devices into peripheral ports, like the keyboard, mouse, printer, and serial ports, this is not a
recommended practice. Shutting down the server, connecting equipment, and restarting is the approved
procedure.

With hot-plug PCI, the slot can be powered off through a switch. You can then plug the adapter card or
device into the slot, and power up the slot through the switch. The switch, sometimes referred to as a
circuit breaker, allows you to use normal PCI devices and adapters in the PCI slot. Another type of hot-
plug PCI slot allows you to simply disconnect or connect adapters and devices from the slot. This kind of
hot-plug technology requires the use of special peripheral devices that are rated for hot insertion and
removal.

With all devices, ensure that the maximum cable lengths are not being violated. Where devices are
independently powered, ensure that they are connected to a power source and switched on.

With SCSI devices, ensure that device IDs are set correctly, and that the ends of the bus are terminated.
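The SCSI checks just described — a unique ID per device and termination at both ends of the bus only — can be expressed as a small validator. The dictionary fields and message strings below are illustrative assumptions, not part of any real configuration tool.

```python
from collections import Counter

def check_scsi_bus(devices):
    """devices: list of dicts with 'id' (0-15) and 'terminated' (bool),
    given in physical order along the bus. Returns a list of problems."""
    problems = []
    ids = [d["id"] for d in devices]
    for dev_id, count in Counter(ids).items():
        if count > 1:
            problems.append(f"SCSI ID {dev_id} used by {count} devices")
    if devices and not devices[0]["terminated"]:
        problems.append("first device on the bus is not terminated")
    if devices and not devices[-1]["terminated"]:
        problems.append("last device on the bus is not terminated")
    # Devices in the middle of the chain must NOT be terminated.
    for d in devices[1:-1]:
        if d["terminated"]:
            problems.append(f"mid-bus device with ID {d['id']} is terminated")
    return problems

bus = [
    {"id": 7, "terminated": True},   # host adapter at one end of the bus
    {"id": 3, "terminated": False},
    {"id": 3, "terminated": True},   # duplicate ID: a configuration error
]
for problem in check_scsi_bus(bus):
    print(problem)
```

Running the sketch on the sample bus reports the duplicate ID 3, which is exactly the kind of mistake that prevents a SCSI chain from working.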

As a part of a successful installation process, many devices will require drivers to be loaded after the
server has booted up. A Plug-and-Play (PnP) operating system will detect most new devices and load
drivers automatically. You may have to insert a vendor-supplied CD or floppy disk to supply the latest
drivers to the operating system. A non-PnP operating system will require you to install drivers manually by
running an installation program. This method is sometimes used on a PnP operating system as well, when the vendor recommends it. The documentation accompanying the device will indicate the best installation process.

NOS and Service Tool Installation


Scope
• Discuss NOS and service tool installation practices.

Focused Explanation

NOS Installation Practices

The common NOSs in use include Microsoft Windows Server, Novell NetWare, Unix and Linux variants, and Mac OS X.

Each operating system has its own features and the installation steps vary with each. The following must
be adhered to for an installation:

• Ensure compatibility of the NOS with the hardware platform on which you are installing. The
hardware and NOS documentation will indicate compatibility. Some NOSs may be installed only on proprietary hardware. For example:

o Solaris, a Unix variant from Sun Microsystems, is only installable on proprietary Sun
workstations.

o AIX, a Unix variant from IBM, is only installable on IBM proprietary workstations.

o Mac OS X, an NOS from Apple Computer, is only installable on proprietary Apple Macintosh computers.

o Windows, NetWare, and most variants of Linux and Unix are installable on most IBM PC
type computers. The hardware compatibility list (HCL) included in Windows
documentation is a listing of hardware that has been tested to work with Windows.
Different HCLs are provided for different variants of Windows.

• Ensure that you have the latest releases of the NOS software. The latest releases are likely to
support the latest hardware available and provide all the latest drivers required. Most NOSs
require you to purchase client or server licenses. Ensure that an adequate number of licenses
have been acquired.

• Ensure that the server on which you are installing has a CD-ROM drive because most software is
supplied on CD. Some installations may require you to boot using the CD. In that case, you need
to specify the CD-ROM as the primary boot device in the CMOS setup utility.

• Ensure that the server that you are setting up has a network card installed. You need a TCP/IP
address to assign to the NIC, along with the subnet mask and a default gateway address. If a
DHCP service exists on the network, it can be used instead to automate the TCP/IP
configuration.

• In case the NOS is being upgraded or otherwise reinstalled, make sure that the data and configuration of the server are backed up. If the reinstall or upgrade is unsuccessful, a
backup provides a convenient means of rolling back to the old setup.

After the installation is over, the tasks that you need to perform are as follows:

• Update the operating system. The operating system needs to be updated with the latest service
packs, patches, and upgrades. The installation CD, no matter how recent a release, cannot be
expected to have the very latest drivers and service packs. These are available on vendor Web
sites and must be downloaded and applied to the installation individually. As a precaution, the old
drivers should be backed up or otherwise preserved, where possible. Preserving old drivers
allows you to roll back an installation in case an upgrade causes problems.

• Before deploying the server in a business environment, test the installation. The testing may
include using the server in a simulated environment. Alternatively, the server may be tested in a
live setup but in a non-critical role.

Documenting the Installation

Documenting the changes during and after the installation is a very important step. The server installation documentation allows malfunctions to be diagnosed and corrected quickly. Keep in mind that downtime is much more critical for a server than for an ordinary computer because multiple people depend on the server.

The items to be documented include the names, versions, and installation dates of software, including the
operating system installed on the computer. The basic configuration of the computer, including the IP
settings, CMOS settings, and computer resource settings, such as IRQs and ports assigned to hardware,
should be recorded, along with all driver names and versions. All installed hardware, including plug-in
cards and their slot numbers should also be recorded. It is also important to document the details of the
composition of RAID arrays, in other words, which drives are included in which array, and which RAID
levels are used.
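The items listed above can be kept in a structured record rather than free-form notes. The record below is a hypothetical sketch — the hostname, field names, and values are invented for illustration — but it shows how software versions, IP settings, hardware slots, and RAID composition fit into one document.

```python
import json

# Hypothetical installation record covering the items listed above.
# All names and values here are invented for illustration.
server_record = {
    "hostname": "FILESRV01",
    "software": [
        {"name": "ExampleNOS", "version": "1.0", "installed": "2006-03-01"},
    ],
    "network": {
        "ip": "192.168.10.5",
        "subnet": "255.255.255.0",
        "gateway": "192.168.10.1",
    },
    "hardware": [
        {"device": "SCSI adapter", "slot": 2, "irq": 10, "driver": "example 6.4"},
    ],
    "raid": {"level": 5, "members": ["disk0", "disk1", "disk2"]},
}

# Serializing the record keeps the documentation in a form that is easy
# to store, compare after each change, and hand over to other staff.
print(json.dumps(server_record["raid"], sort_keys=True))
```

Keeping the record machine-readable makes it easy to diff two versions after an upgrade and see exactly what changed.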

Service Tools

Service tools are useful for performing management and maintenance tasks on the server. Many tools bundled on the NOS installation CD are installed as part of the NOS installation process. Other tools can be installed after the NOS installation is complete.

Common tools are as follows:

• Simple Network Management Protocol (SNMP): This protocol and service is used to collect status
information from network devices, and thus monitor the status of the devices and the status of the
network in general. Devices that are SNMP-enabled are called managed devices. Typical
managed devices include hubs, switches, routers, printers, and servers. Some managed devices
allow remote configuration changes and diagnostics through SNMP.

A piece of software called an SNMP agent runs on every managed device. A local database
called a Management Information Base (MIB) defines the nature of data collected by the agent.

Control software, called the Network Management System (NMS), runs on a central computer, which is also referred to as the network management station. The NMS polls each agent in turn, on a repeating cycle, and collects data relating to status, configuration, network statistics, and errors from each agent. The NMS can store and display the information thus collected in a useful manner.
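The NMS poll cycle can be sketched schematically. A real NMS would issue SNMP GET requests through an SNMP library; here `query_agent` is a stub, and the addresses, OID name, and values are invented for illustration.

```python
# Schematic of an NMS poll cycle. query_agent() is a stub standing in for
# a real SNMP GET; the addresses and values below are invented.

def query_agent(address, oid):
    """Stub: pretend to fetch one MIB variable from a managed device."""
    fake_mib = {
        ("10.0.0.1", "sysUpTime"): 86400,
        ("10.0.0.2", "sysUpTime"): 120,
    }
    return fake_mib.get((address, oid))

def poll_all(agents, oid):
    """Poll each agent in turn and collect results, as an NMS would."""
    results = {}
    for address in agents:
        value = query_agent(address, oid)
        # A device that does not answer is recorded, not silently skipped.
        results[address] = value if value is not None else "no response"
    return results

status = poll_all(["10.0.0.1", "10.0.0.2", "10.0.0.3"], "sysUpTime")
for addr, val in status.items():
    print(addr, val)
```

The point of the sketch is the polling pattern itself: the NMS visits every agent on each cycle and records non-responders, which is how failed devices show up at the management station.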

• Backup software: used for backing up and restoring data. Different backup applications provide
different functionality. Most allow some form of scheduling and allow backup for the entire
network from a central point.

• System monitoring agents: used for gathering data, such as CPU, memory, network and disk
usage, and logon attempts.

• Event logs: databases of events recorded by system monitoring agents. Logs can be displayed
as tables of data, and sometimes as pictorial graphs.

Server Baseline and Management


Scope
• Discuss server baseline.

• Discuss server management.

Focused Explanation

Create a Server Baseline

A server baseline is a comprehensive performance log of the server at a particular time. By recording
different server baselines at different times, you can create a performance profile of a computer. This
helps to predict the effect of loads on servers and make upgrade decisions. For example, baselines of a
server taken when five people are logged on and when 100 people are logged on can be examined. The
difference in the baselines helps you to predict the performance with 200 people logged on. In addition,
baselines also help quantify the effects of installing and running software on the server.

The first baseline of the server should be performed when the server has just been installed, but
application programs are not yet installed. This baseline is performed with no users using the server.
Another baseline should be performed after applications have been installed. Further baselines must be
performed from time to time as a part of monitoring the performance of the server and as a part of
preventive maintenance. Some of the parameters that are recorded in a baseline include CPU utilization,
memory utilization, network utilization, disk utilization, and frequency of swapping.
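The five-user versus 100-user comparison described above can be sketched numerically. The metric names and all numbers below are invented; real values would come from the operating system's monitoring tools, and the linear extrapolation is only a rough planning estimate.

```python
# Sketch of recording two baselines and extrapolating load, as described
# above. Metric names and values are invented for illustration.

def record_baseline(name, metrics, store):
    """Save a named snapshot of server metrics."""
    store[name] = dict(metrics)

def per_user_growth(store, low_name, low_users, high_name, high_users, metric):
    """Estimate how much one metric grows per additional logged-on user."""
    delta = store[high_name][metric] - store[low_name][metric]
    return delta / (high_users - low_users)

baselines = {}
record_baseline("5-users", {"cpu_pct": 8, "mem_mb": 900}, baselines)
record_baseline("100-users", {"cpu_pct": 46, "mem_mb": 2800}, baselines)

growth = per_user_growth(baselines, "5-users", 5, "100-users", 100, "cpu_pct")
# Linear extrapolation to 200 users -- a rough planning estimate only.
projected_cpu = baselines["5-users"]["cpu_pct"] + growth * (200 - 5)
print(round(projected_cpu, 1))
```

With these sample numbers the projection comes out at 86% CPU at 200 users, which would signal that an upgrade is needed before the user base doubles.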

Server Management

The goal of server management is to replace human intervention with software control of the servers in a network, reducing both human effort and the chances of error. In addition, server management provides the benefits of reduced training needs, reduced security risks, and integrated disaster recovery.

Server management includes several classes of tasks:

• System management: enables a central console to start and stop processes on a remote server.
In fact, system management should allow:

o Starting and stopping multiple processes on multiple servers using a single command at
the central console, thereby eliminating one-at-a-time commands.

o Monitoring and changing configurations of servers.

o Updating applications, and applying firmware upgrades, service packs, patches, and
hotfixes.

o Automating responses, such as restarting services and rebooting of servers.

• Fault management: enables fault notification. In case of any failure in a server, fault management
sends out a message through a message gateway. The message can be e-mail, SMS, or a
pager message that is addressed to a person or persons listed in the configuration of the fault
management system. The system could also identify danger of imminent failure in components
and send out a warning. The fault management system could provide a remediation console that
can be used to run diagnostics, such as trace route and port scans.

The concept of in-band and out-of-band management is important from the viewpoint of fault
management. In-band management transfers signals on channels that are used for normal
information transfer. Out-of-band management transfers signals on channels that are not used for
normal information transfer. Out-of-band management is useful in troubleshooting situations
because a fault may tie up the in-band channels. In the context of server management, in-band
management is usually performed using a Web browser, whereas out-of-band management is
usually performed using a command-line interface, such as a telnet window.

• Data collection: collects data on status of devices, and logs the data. Areas of interest in data
collection include availability, performance and uptime of servers, trend monitoring and reporting,
and hardware and software inventory.

• Change Management: implements a process that deals with collecting, approving, and
implementing changes in the server setup and network infrastructure. Change management
depends on rigorous documentation that attempts to record the state of servers and the network
on an ongoing basis. One key aim of server change management is to eliminate the tendency of
administrators to document changes that they have made to servers “in their heads.” This is risky because human memory is unreliable, and the details of changes are lost over time when administrators change jobs or roles. A typical change management
process may include the following tasks:

o Capturing desires, requests, and perceived need for change.

o Reviewing the changes to filter out changes that will not be implemented. A change may
be approved for implementation through management review, through peer review, or
through an individual judgment call. The review generally considers the benefits and the
risks associated with implementing the change.

o Documentation of the change including implementation plan and timelines.

Change management uses planning to eliminate requests for unnecessary changes and to
minimize undesirable side-effects of changes, such as increased user complaints that stem from
hasty implementations. Server change management also documents all changes so that future
changes can be made in a predictable manner. Additionally, server change management
provides a method of smoothly rolling back changes in case the changes do not provide desirable
results. Changes that impact security are accorded special attention in change management
processes. Many change management processes use boundary-checking. This reveals whether
any proposed changes will compromise established security policies in the organization.
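The capture, review, and implementation stages described above can be modeled as a simple state machine with a documentation trail. This is a toy sketch — the state names, fields, and allowed transitions are invented, not taken from any specific change management tool.

```python
# Toy model of a change request moving through the stages described above.
# State names and transitions are illustrative assumptions.

ALLOWED = {
    "captured": {"approved", "rejected"},   # review filters out requests
    "approved": {"implemented"},
    "implemented": {"rolled-back"},         # roll back undesirable results
}

class ChangeRequest:
    def __init__(self, summary):
        self.summary = summary
        self.state = "captured"
        self.history = ["captured"]         # the documentation trail

    def transition(self, new_state):
        """Move to a new state only along an approved path."""
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state
        self.history.append(new_state)

cr = ChangeRequest("Add second CPU to file server")
cr.transition("approved")
cr.transition("implemented")
print(cr.state, cr.history)
```

The enforced transitions capture the key idea: a change cannot be implemented without passing review first, and every step is recorded rather than kept "in someone's head."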

Server management software is available to carry out many of the tasks listed above. Unfortunately, no single server management system provides all the functionality listed, and no software exists that is completely hardware- or software-independent. Many organizations end up using a solution that ties them to a single hardware vendor and perhaps forces them to use a single operating system platform. Some industry efforts have resulted in collaborative standards, but these still fall short of the goal of managing servers in a vendor-independent fashion.

The following are some commonly used server management standards and technologies:

• Intelligent Platform Management Interface (IPMI) version 2.0: an effort by Dell, HP, Intel, and
NEC. IPMI assumes that all servers incorporate the Baseboard Management Controller (BMC),
which is a monitoring chip included in most new servers. IPMI provides an interface to the chip.

• Systems Management Architecture for Server Hardware (SMASH): proposed by the Distributed
Management Task Force (DMTF). SMASH is based on Web-Based Enterprise Management
Standard (WBEM) and is intended to be a platform-independent standard. However, WBEM itself
was released in 1998, and its age means that it predates many more modern conventions. In addition, SMASH is a standard for developers; applications are not yet
available.

• Web services for Management Extension (WMX) proposed by Microsoft: relies on Simple Object
Access Protocol (SOAP). WMX is a protocol that uses Microsoft’s web services specification and
existing security models.

Change management is a process and can be accomplished without specific software tools. However,
software tools can help in change management. Such software includes OnDemand from IBM and
Adaptive Enterprise from HP. These products are actually frameworks, consisting of both software and
consultancy services from the vendors.

Implementing the Server Management Plan

Most organizations have a server management plan in place. Your installation and configuration activities
must be carried out within the context of your organization’s server management plan.

For any changes, including installation of servers or changes in configuration, that you intend to
implement, ensure that you comply with the plan. Specifically, any changes that impact security must be
assessed carefully to manage the risks of compromised security.

For change management, the necessary activities may include creating a proposal, engaging with
management and peers in reviews and getting approvals, creating a plan, implementing the plan, and
creating the necessary documentation.

Apart from change management, the other tasks of server management may include installation of
management software, using the software for system management, fault management, and data
collection.

Review Checklist: Installation and Configuration of Server Components
Discuss firmware.

Discuss RAID configuration.

Discuss UPS installation practices.

Discuss installation considerations for external devices.

Discuss NOS and service tool installation practices.

Discuss server baseline.

Discuss server management.

Upgrading Server Components

Performing Backups and Adding Processors


Scope
• Discuss backups as a pre-upgrade best practice.

• Discuss adding processors.

• Discuss adding hard drives and memory.

• Discuss upgrading BIOS/firmware.

• Discuss upgrading adapters and peripheral devices.

• Discuss upgrading system monitoring agents, service tools and a UPS.

Focused Explanation
Perform Backups

Before performing any upgrade to a server, back up the server to avoid permanent loss of data. The data
in this context is both business data and server configuration data.

Note: Backups as a part of regular operations are discussed in the section entitled Backup in this chapter.

Most network operating systems (NOSs) allow you to save configuration data for the server on a set of
floppy disks. This emergency repair disk (ERD) enables you to perform an emergency boot and carry out
software repairs on the server in case the operating system crashes. The ERD is also referred to as a
rescue or recovery disk. Before carrying out an upgrade, update the ERD by running the operating
system-supplied utility for this purpose.

You should also consider backing up the business data on the server, especially if the server contains
important data that may not be recoverable through the regular organizational backup system.

For both the configuration data and the business data, verify that the backup is usable before
proceeding.

Add Processors

Adding processors is a common method of increasing the performance of a server. Newer processors are
released with unfailing regularity from processor manufacturers. There are two basic scenarios where a
processor is added. You can replace the current processor with a newer, faster processor, or you can add
additional processors to provide symmetric multiprocessing (SMP).

Symmetric multiprocessing is a technique where a software process is executed on multiple processors using shared memory within a single computer. To use SMP, the operating system must support it.
Common operating systems that support SMP include Unix, Linux, Mac OS X, Windows 2000 Server, and
Windows Server 2003. The version of the operating system may dictate the maximum number of
processors that it supports.

Carry out the following tasks before adding the processor:

• Verify compatibility of the processor with the motherboard. The documentation of the
motherboard will indicate whether the new processor is suitable. An obvious incompatibility is
different packaging for the new processor that could prevent the processor from being plugged
into the motherboard socket. The documentation of the processor may also reveal this and other
details that make the processor incompatible as a replacement. Some motherboards may need
jumper settings to be changed to support added processors. The motherboard documentation will
contain details of any jumper settings that may be required.

• For upgrading servers to SMP, verify:

o that the operating system supports SMP. You may need to download an operating
system patch or service pack to support SMP. In some cases, a different version of the
operating system may need to be installed. The operating system documentation will
indicate the number of processors that it can support.

o that processor speed and cache size match exactly. The processors should also be matched as closely as possible in terms of date of manufacture.

o that the stepping level of the processor is the same. The stepping level of a processor is
a minor version number indicated by an alphanumeric code. For processors of the same
type number, the stepping level changes as improvements are made over time. For small
changes, the numeric part is incremented. For more significant changes, the alphabetical
part is changed. For SMP applications, the stepping level for the processors used should
ideally be the same or at least not too far apart. The stepping level is also called the N1
stepping of a processor. Every processor has a printed or laser-etched label that
indicates the processor name, the model, the stepping level, and so on. Verifying the
stepping ensures that the stepping level of the processor to be added matches that of the
processor already installed on the motherboard. For example, current Pentium 4
processor steppings include C0, D0, and B1.

o that the server case ventilation is able to cool the computer after the new processor is
added. Processors tend to run hot, and an extra processor inside a server casing can
push temperatures up considerably. Consider putting in an extra fan, if necessary.

• Upgrade the BIOS. The Web site of the motherboard manufacturer will indicate whether a BIOS
upgrade is necessary and will provide the binary files required to flash the BIOS.
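The pre-upgrade checks above can be summarized as a compatibility function. This is a sketch only: the spec fields and messages are invented, and real values come from the processor label and the motherboard documentation.

```python
# Sketch of the compatibility checks for adding a second processor for SMP.
# The spec fields below are illustrative assumptions.

def smp_compatible(installed, candidate):
    """Return a list of mismatches; an empty list means the pair is usable."""
    issues = []
    if installed["family"] != candidate["family"]:
        issues.append("different processor families")
    if installed["speed_mhz"] != candidate["speed_mhz"]:
        issues.append("speed must match exactly")
    if installed["cache_kb"] != candidate["cache_kb"]:
        issues.append("cache size must match exactly")
    if installed["stepping"] != candidate["stepping"]:
        # Ideally identical; close steppings may work but carry risk.
        issues.append("stepping levels differ - verify with vendor")
    return issues

cpu_a = {"family": "Pentium 4", "speed_mhz": 3000, "cache_kb": 1024, "stepping": "D0"}
cpu_b = {"family": "Pentium 4", "speed_mhz": 3000, "cache_kb": 1024, "stepping": "C0"}
print(smp_compatible(cpu_a, cpu_b))
```

Here two otherwise identical processors differ only in stepping, which the check flags for verification rather than rejecting outright — matching the guidance that steppings should ideally be the same or at least close.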

Best practices for upgrading a processor are as follows:

• Obtain all software upgrades and patches in advance. Make sure you have the appropriate driver
for your NOS.

• Read all you can on frequently asked questions (FAQs), issues, and precautions for adding
processors. Internet discussion forums and vendor Web sites are a valuable source of
information.

• Test and pilot the upgrade. Before attempting a CPU upgrade on a live server, pilot the upgrade
on another computer. For this purpose, select a server that matches the hardware configuration
of the live server. While performing the pilot upgrade, document the steps. You should install an
operating system that matches the live server so that the pilot setup can be tested.

The pilot setup must be tested to ensure that all problems are discovered and sorted out before
the live server is upgraded. If possible, test out the server in a live but non-critical role. This kind of testing is sometimes more useful than testing under simulated conditions.

It is especially important to test the live server after the upgrade before deploying it in a business
environment. The testing should ensure that the upgraded server is working correctly and that
there are no detectable instabilities or errors.

• Schedule the downtime for upgrading the processor on the target server so that inconvenience to
users is minimized.

• Take ESD precautions when actually opening the case and adding the processor.

After the computer has been restarted, the NOS may need to be upgraded with the latest patches or
service packs to support the processor. Install the latest drivers and operating system updates on the
server. Two tasks remain:

• Confirm that the operating system has recognized the processor. This is done by booting the
computer and using any operating system provided utility that reports processors. In Windows, for
example, Device Manager reports the processors installed.

• Document the addition of the processor.

Adding Hard Drives and Memory


Scope
• Discuss adding hard drives.

• Discuss adding memory.

Focused Explanation
Add Hard Drives

Adding hard disks increases the storage capacity of the server. Hard disks need to be added under two
generic circumstances: when replacing a drive that has failed, or when upgrading the storage capacity of
a server.

For adding drives, there are several steps that you should complete. At the outset, consult maintenance and service logs. These will indicate problems and issues that have been faced in the past, along with their resolutions, and will help you add disks with a minimum of problems. Verify that the drives are the appropriate type. The drives to be added must use the same interface (ATA, SCSI, or SATA) as provided on the server. Cross-brand incompatibilities of drives are non-existent these days, although this was an issue for early IDE drives.

To accommodate high capacity drives, a BIOS upgrade is sometimes necessary. The firmware on the
SCSI card or the RAID controller may also need upgrading before adding disks. Vendor Web sites are a
valuable source of information and advice in this regard. The documentation that accompanies the hard
disk is also a very good source of upgrade-related information.

Ensure that the interface can accommodate the drive. For instance, each ATA interface can
accommodate only two drives. Similarly, SATA can support one drive per interface. Port multipliers can
be used if all available interfaces are in use. SCSI can accommodate 7 or 15 drives per adapter. For
RAID, the controller must support the addition of drives.

Verify the cabling and termination requirements for SCSI disk additions. Each drive on the SCSI chain
must use a unique ID. The first and last devices on the bus must be terminated. For ATA, the drives on a
single interface must be designated master or slave. Bus length constraints must not be violated.

You must also take ESD precautions while opening equipment for adding hardware. Mount the drives in
the server or the RAID enclosure. Attach the cables, and restart the server. Use operating system tools to
partition and format the drive. Document the upgrade.

To upgrade mass storage capacity in a RAID setup, drives can be either added or swapped. Swapping is
necessary when an existing drive must be substituted for maintenance reasons or for capacity
enhancement reasons. For adding drives, ensure that the RAID controller can accept the additional drive.
In case a drive has to be swapped, power off the RAID array, and remove the drive to be substituted.
Insert the new drive, and start up RAID. If the new drive is a substitute for a failed drive, the RAID
controller automatically starts rebuilding data on the drive. In other cases, you may have to integrate the
drive into the RAID solution manually, specifying how the new drive is to be used. Using utilities provided in the RAID firmware, you can make the drive part of an array. The RAID controller usually presents the RAID array to the operating system as a single disk partition or a single disk. Operating system utilities, such as partition, format, or disk management, must be used to make the RAID array usable by the operating system.

A RAID setup can be expanded or extended by adding disks. Adding a disk and creating a new RAID
partition expands the RAID storage available. The operating system can use the added capacity as a
drive or partition. In the case of extension, the operating system may show extra unallocated space
available at the end of an existing drive.

As a concluding step, update the server documentation to capture the changes that you have made.

For adding disks to a server or upgrading RAID mass storage:

• Locate the latest drivers, operating system updates, and so on prior to the installation. This ensures
that you have all the update software handy, just in case you require them.

• Review all documentation including FAQs, drive vendor instructions, and experiences of others. This
gives you a forewarning of what to expect during the installation. Of special importance are
maintenance and service logs. These will indicate issues that have been handled in the past.

• When the server or the RAID setup to be upgraded performs a business-critical role, test and pilot the
installation using a server with matching hardware. This will help iron out issues before the real server
is shut down for the upgrade. The pilot may reveal the need to upgrade the BIOS, either in the server
or in the RAID controller, for example.

• Plan the installation, and schedule the downtime to cause least inconvenience to users.

• During the upgrade process, take ESD precautions.

• Boot the server. Confirm that the additional storage is recognized by the operating system. An
appropriate operating system tool, for example Disk Management in Windows, will show all disks
detected by the operating system. The tool will allow you to carry out essential disk-related activities,
such as partitioning and formatting.

• Baseline the server.

• Document the upgrade.

Increase Memory

Adding memory is a straightforward matter. Every motherboard has a specific ceiling regarding the
amount and type of RAM it can accommodate.

You need to verify from the motherboard documentation whether or not the motherboard can accept the
upgrade in terms of the memory specifications. Important specifications are speed, capacity, and type.
The types of RAM that can be used may be SDRAM, DDR, DDR2, RDRAM, or other types described in
the Memory section in chapter 1. The types cannot be mixed. Many servers use ECC RAM. If the server
allows either RAM type to be used, non-ECC RAM should not be mixed with ECC RAM.

The speed of all the memory modules used on the motherboard should be identical. Although, in theory, RAM modules of the same specification have identical characteristics, in reality there are variations from manufacturer to manufacturer. Modules from the same manufacturer should be used throughout a server; installing memory modules from different vendors in the same server may lead to subtle and hard-to-pin-down errors.

The motherboard should also have free memory slots to accommodate the memory modules. Verify
whether the operating system can support the upgraded capacity of memory. Usually this is not an issue.
More importantly, verify whether the memory is on the HCL for the operating system because this is often
an issue. You should also verify whether a BIOS upgrade is required. The motherboard manufacturer’s
Web site will have information regarding this.
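The memory compatibility rules above — matching type, identical speeds, no ECC/non-ECC mixing, a single vendor where possible, and a free slot — can be sketched as a check. The module fields below are illustrative assumptions, not tied to any particular vendor's parts.

```python
# Sketch of the pre-upgrade memory checks described above.
# Field names and values are invented for illustration.

def memory_upgrade_issues(installed_modules, new_module, board):
    """Compare a new module against installed modules and board limits."""
    issues = []
    if new_module["type"] != board["supported_type"]:
        issues.append("module type not supported by motherboard")
    if len(installed_modules) >= board["slots"]:
        issues.append("no free memory slots")
    for mod in installed_modules:
        if mod["speed"] != new_module["speed"]:
            issues.append("speeds of all modules should be identical")
        if mod["ecc"] != new_module["ecc"]:
            issues.append("do not mix ECC and non-ECC modules")
        if mod["vendor"] != new_module["vendor"]:
            issues.append("prefer modules from a single manufacturer")
    return issues

board = {"supported_type": "DDR", "slots": 4}
installed = [{"type": "DDR", "speed": 400, "ecc": True, "vendor": "A"}]
new_mod = {"type": "DDR", "speed": 333, "ecc": False, "vendor": "A"}
print(memory_upgrade_issues(installed, new_mod, board))
```

With the sample data, the check flags the mismatched speed and the ECC/non-ECC mix — the two subtle problems that are hardest to diagnose after the fact.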

For adding or upgrading memory to a server:

• Locate the latest drivers, operating system updates and so on, prior to the installation. This ensures
that you have all the update software handy, just in case you require them.

• Review all documentation including FAQs, motherboard vendor instructions, and experiences of
others. This gives you a forewarning of what to expect during the installation.

• When the upgraded server performs a business-critical role, test and pilot the installation using a
server with matching hardware. This will help iron out issues before the real server is shut down for
the upgrade. The pilot may reveal the need to upgrade the BIOS, for example.

• Plan the installation, and schedule the downtime to cause least inconvenience to users.

• During the upgrade process, take ESD precautions.

• Boot the server. Confirm that the additional memory is recognized by the BIOS and the operating
system. Observing the POST memory check indicates whether the BIOS can detect the additional
memory. An appropriate operating system tool, for instance, the General tab of the System Properties
dialog box in Windows, will indicate the total RAM detected by the operating system. Normally, no
special actions have to be performed for the operating system to use the extra memory. In some
cases, it may be necessary to set the enhanced memory size manually in CMOS configuration.

• Baseline the server before making further changes. Some software adjustments in the operating
system, such as re-tuning the swap file size, are sometimes required. These should be carried out,
where required.
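The post-boot check described above can be reduced to simple arithmetic: the total the operating system reports should match the sum of the installed modules. This is only an illustrative sketch; the module sizes and the 2% allowance for hardware-reserved memory are assumptions, not figures from any particular server.

```python
# Sanity check after a memory upgrade: does the OS see what was installed?
# The module sizes and the 2% reserved-memory allowance are illustrative.

def memory_upgrade_ok(module_sizes_mb, detected_mb):
    """True when the OS reports (nearly) all of the installed memory."""
    installed = sum(module_sizes_mb)
    return detected_mb >= installed * 0.98   # allow ~2% reserved by hardware

modules = [2048, 2048, 1024, 1024]           # MB per module after the upgrade
```

If the detected total falls well short of the installed total, suspect an unseated module, a mixed module type, or a BIOS that needs upgrading.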


Upgrading BIOS/firmware, Adapters, and Peripheral Devices


Scope
• Discuss upgrading BIOS/firmware.

• Discuss upgrading adapters and peripheral devices.

Focused Explanation
Upgrade BIOS/firmware

BIOS upgrades may be necessary before upgrading a server’s CPU, memory, and storage components.
In addition, firmware upgrades for SCSI adapters and RAID controllers may be necessary before adding
disks. Firmware usually resides on a type of user-programmable read-only memory called flash ROM.
This type of memory is non-volatile.

Upgrades for firmware are downloadable from the manufacturer’s Web site. The supplier of the firmware
generally provides detailed instructions for flashing firmware. It is best to follow these instructions when
upgrading firmware.

In most cases, the upgrade is an executable file. When the file is run, the target firmware is upgraded.
During the time that the flash ROM is being upgraded, the power to the computer must not be interrupted.
If the power fails during the upgrade interval, the flash ROM contents are corrupted. A failed flash
upgrade results in an inoperable device, which must be sent back to the vendor for repair. Some
flashing software offers a recovery option that can be used to recover from a failed flash attempt. It
works by saving a copy of the original BIOS contents as a disk file backup. In case of a failed flash
attempt, the flashing software offers a rollback option that restores the backup copy to the flash ROM.
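The backup-and-rollback flow can be sketched as a small simulation. Real flash utilities are vendor-specific; every name and structure below is hypothetical and serves only to illustrate the save-then-restore logic.

```python
# Illustrative simulation of a flash upgrade with rollback support.
# All names here are hypothetical; real flash utilities are vendor-specific.

def flash_firmware(rom, new_image, backup_file, power_ok=True):
    """Save the original ROM contents to a backup file, then flash.

    If power is lost mid-flash, restore the saved backup (rollback)."""
    backup_file["contents"] = rom["contents"]      # back up original BIOS
    if not power_ok:                               # simulated power failure
        rom["contents"] = "corrupted"              # ROM left in a bad state
        rom["contents"] = backup_file["contents"]  # rollback from backup
        return "rolled back"
    rom["contents"] = new_image                    # successful flash
    return "upgraded"
```

The key design point mirrors the text: the backup copy is written to disk before the risky write begins, so a recovery path exists no matter when the flash fails.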

Some computer systems incorporate what is called multi-BIOS. These are computers that are provided
with switch selectable redundant BIOS storage areas. The switch selects one of these as the active
BIOS. During the upgrade process, the secondary BIOS area is switched active for flashing. In case of a
failed flash attempt, the original BIOS area with its original BIOS content can be switched back to active.
This allows a perfect rollback of the flash attempt. Aftermarket add-on kits to convert normal BIOS to
multi-BIOS are also available.

Best practices for upgrading firmware are as follows:

• Review FAQs, instructions, facts, and issues available on the Internet regarding the need for the
upgrade, including precautions and warnings. Review service and maintenance logs for the
equipment to be flash upgraded to benefit from past experience of upgrades.

• Download the upgrade from the manufacturer’s Web site.

• If the equipment to be upgraded performs a business-critical role, pilot the upgrade on substitute
equipment of matching hardware specification. This will help iron out issues before the real
equipment is upgraded.

• Plan the installation and schedule the downtime to cause least inconvenience to users.


• Upgrade the operating system if required by applying patches or hot fixes. Upgrade the firmware
using the manufacturer’s flash instructions. ESD precautions are not required because the upgrade is
a software activity. In all cases, the flash upgrade utility will report whether or not the upgrade has
been accomplished successfully. If the upgrade attempt is unsuccessful, attempt to roll back the
upgrade to the original state using the manufacturer’s instructions.

• Start the equipment, and observe whether the equipment is functional.

• Baseline the operation of the equipment or the system to which the equipment is attached, as
applicable.

• Document the upgrade. Documentation must identify the upgrade software in terms of name, version,
and timestamp.

Upgrade Adapters

Adapters require upgrading to enhance performance, or in certain cases, to provide compatibility for a
device that must connect to the adapter. Common adapters that are upgraded include NICs, SCSI cards,
RAID cards, and others. Bus compatibility was a major issue for adapter cards in the past. With the
dominance of PCI, bus compatibility is no longer an issue. The PCI bus has very good backward
compatibility. The bus throttles back to accommodate slower adapters. It is good practice to read the
documentation for the motherboard and the adapter to determine if there are likely to be issues in the
upgrade.

Keep in mind that hot-swap PCI cards can be hot swapped only if the slot supports hot swapping.

High performance adapters, such as NICs, SCSI adapters, and RAID cards, should be plugged into bus-
mastering slots. Only the first two or three slots in a hierarchical PCI bus are likely to be bus-mastering
slots. Teamed NICs should be load balanced by plugging into separate peer PCI buses, where available.

Because a RAID card is intimately connected to the array in a RAID setup, changing a RAID adapter may
make the array unreadable. This could happen because the replacement card does not support the kind
of array created by the original. One good way of ensuring compatibility is to use a RAID card of the
same make as the original. In general, for any adapter being installed, verify whether the adapter is
compatible in terms of the motherboard, bus slot, and operating system. For SCSI adapters, given the
wide variety of standards, the compatibility must be checked out carefully.

Note: Bus types and SCSI technology are discussed in the System Bus Architectures and Small
Computer System Interface and AT Attachment sections of Chapter 1, respectively.

The documentation for the adapter is the best source of information regarding compatibility. Backing up
the server data and configuration before attempting the upgrade is a good practice.


Best practices for upgrading adapters are as follows:

• Locate, and obtain the latest drivers, operating system updates, patches, and so on before
performing the upgrade. The drivers that are included with the adapter may not be the latest; the
newest drivers may be found on the manufacturer’s Web site.

• Review any documentation available for similar installation attempts in the past. Documentation
includes maintenance and service logs and should extend to FAQs and discussions on the
Internet. The manufacturer’s installation instructions are an important source of information for
precautions and compatibility issues.

• If the server where the adapter is being installed performs a business-critical role, pilot the
upgrade on a substitute server of matching hardware specification. This will help iron out issues
before the real equipment is upgraded.

• Plan the installation, and schedule the downtime to cause least inconvenience to users.

• Take ESD precautions during the upgrade. Upgrade the operating system by applying patches,
hot fixes, and so on, if it is required. If available, comply with manufacturer-recommended
installation procedures.

• Start the server, and confirm that the adapter has been recognized. For PnP hardware with a PnP
operating system, the recognition is automatic, and the operating system will load the required
drivers or prompt for user-supplied drivers. For a non-PnP operating system, the drivers must be
loaded as per instructions indicated in the adapter’s instructions.

• Baseline the server.

• Document the upgrade. The documentation should include the versions of the drivers installed.

Upgrade Internal and External Peripheral Devices

Peripheral devices are usually upgraded for performance reasons. Common peripherals that can be
upgraded include disk drives, backup devices, optical devices, and KVM devices. The main verification
step that needs to be carried out is the availability of the appropriate port or interface connector in the
server.

Note: Optical devices are disk drives that allow data storage. A CD-ROM drive is an example of an
optical drive that provides read-only capability. Optical drives that allow read-write capability include CD-
ROM Rewritable (CD-RW), various forms of Digital Versatile Disk (DVD) writers, and Magneto-optical
(MO) drives.

Many storage devices, including disk drives, optical drives, and backup devices plug into the SCSI bus.
The standard precautions for SCSI apply here. The number of devices on the SCSI bus should not
exceed 7 or 15 depending on the bus used. Both ends of the bus must be terminated, and no terminator
can be present in the middle of the bus. The maximum bus length for the SCSI version in use should not
be violated. HVD devices cannot be used on an LVD bus.
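The SCSI bus rules above can be restated as a short validation sketch. The function and its parameters are hypothetical, intended only to capture the constraints on device count and termination; it is not part of any real configuration utility.

```python
def validate_scsi_bus(devices, wide=False, terminated_at_ends=True,
                      mid_bus_terminator=False):
    """Check a SCSI bus description against the standard precautions.

    `devices` is the number of devices attached (excluding the adapter);
    a narrow bus allows 7, a wide bus 15."""
    problems = []
    limit = 15 if wide else 7
    if devices > limit:
        problems.append("too many devices")
    if not terminated_at_ends:
        problems.append("both ends must be terminated")
    if mid_bus_terminator:
        problems.append("no terminator allowed mid-bus")
    return problems
```

An empty result means the bus description passes these particular checks; cable length and HVD/LVD mixing would need additional fields in a fuller model.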


Where the storage devices are ATA, there can be no more than two devices per ATA interface bus.
the same bus must be appropriately set as master or slave. Large capacity drives may require a
motherboard BIOS firmware upgrade.

Most modern devices are PnP and will be auto-detected at boot up. However, some devices may require
you to run a vendor-supplied utility for installation. The documentation accompanying the device will
indicate the appropriate process.

Be aware of the following undesired side effects:

• Connecting a slow ATA device, such as a CD-ROM, as a slave along with a fast master disk drive
slows down both devices.

• Connecting an HVD device on an LVD SCSI bus shuts down the bus.

• Connecting an SE device on an LVD SCSI bus downgrades the bus to SE operation.

• Attempting to use incorrect cables and connectors is futile. Identify the correct cable types
required by the devices being installed, and obtain them before the upgrade.

• Installing additional equipment inside a server case may alter the cooling requirements of the
case. All devices dissipate heat, and the cables used for the installation may affect airflow
dynamics within the case. Additional cooling may have to be provided in the form of an extra fan.

Where an external device must be plugged into a peripheral port, the port must be appropriately
configured. For example, a printer may require that the parallel port be configured as an enhanced
parallel port (EPP). If the port is not configured appropriately using the CMOS setup or server setup utility
(SSU), the printer may not work.

Note: SSU is a disk-based setup utility that is used instead of the CMOS setup utility.

The following are some of the best practices for upgrading peripheral devices that you should consider:

• Locate, and obtain the latest drivers, operating system updates, patches and so on before
performing the upgrade. This ensures that you have all the update software handy, just in case
you require them. The drivers that are included with the device may not be the latest; the newest
drivers may be found on the manufacturer’s Web site.

• Review any documentation available for similar installations attempted in the past.
Documentation includes maintenance and service logs and should extend to FAQs and
discussions on the Internet. The manufacturer’s installation instructions are an important source
of information for precautions and compatibility issues.

• Where peripherals must be upgraded in a server that performs a business-critical role, test and
pilot the installation using a server with matching hardware. This will help iron out issues before
the real server is shut down for the upgrade. The pilot may reveal compatibility or performance
issues, for example.

• Plan the installation, and schedule the downtime to cause least inconvenience to users.


• During the upgrade process for internal peripherals, take ESD precautions. If available, comply
with manufacturer-recommended installation procedures.

• Start the server, and confirm that the peripheral has been recognized. For PnP hardware with a
PnP operating system, the recognition is automatic, and the operating system will load the
required drivers, or prompt for user supplied drivers. For a non-PnP operating system, the drivers
must be loaded as per instructions indicated in the instructions that accompany the peripheral.

• Baseline the server before making further changes.

• Document the upgrade. The documentation should include the versions of the drivers installed.


Upgrading System Monitoring Agents, Service Tools, and a UPS


Scope
• Discuss upgrading system monitoring agents.

• Discuss upgrading service tools.

• Discuss upgrading a UPS.

Focused Explanation
Upgrade System Monitoring Agents

Several system monitoring agents are used in server environments. SNMP is the oldest and most widely
used. Other agents include Desktop Management Interface (DMI) and Intelligent Platform Management
Interface (IPMI).

SNMP is a way to collect information about devices and the network. SNMP is used for configuring
devices and monitoring network usage, performance, and errors. SNMP has been described in the NOS
and Service Tool Installation section of Chapter 3.

DMI is used to identify hardware and software components in networked and non-networked computers.
Computers that support DMI are called managed computers. In a network, a central computer can gather
information, configure, and manage computers. For non-networked computers, you manage a computer
locally.

For hardware and software components on a managed computer, DMI can identify the component name,
manufacturer’s name, version, serial number, and date of installation. This allows network administrators
a means of configuration control and resolution of configuration errors. DMI also facilitates upgrade
decisions.

The Desktop Management Task Force (DMTF) has declared the current version of DMI, version 2.0, as in
end-of-life state; support for DMI users and implementers ended on March 31, 2005. The common
information model (CIM) is a replacement for DMI.

IPMI is a means of tracking server components using a chip that is embedded in the server. Because the
management chip operates independently of the server's main processor and operating system, IPMI
permits monitoring even when the server is unresponsive. IPMI is described in the Server Baseline and
Management section of Chapter 3.

Different implementations of server management software exist for different OSs. Organizational policy
has a great deal to do with the selection of specific server management software used in an organization.


Best practices for upgrading software components are as follows:

• Locate, and obtain the latest drivers, operating system updates, and patches before performing
the upgrade. This ensures that you have all the update software handy, just in case you require
them. Upgrading system monitoring agents is an ongoing activity. You should actively search and
locate upgrades to system management agents whenever they become available.

• Review any documentation available for similar installation attempts in the past. Documentation
includes maintenance and service logs and should extend to FAQs and discussions on the
Internet.

• Where system monitoring must be upgraded in a server that performs a business-critical role, be
prepared to roll back the upgrade if there are any problems. The data and configuration of the
server must be backed up before the upgrade is attempted.

• Plan the installation, and schedule the downtime to cause the least inconvenience to users.

• Carry out any updates on the operating system that are required. Install the upgrade to the
software using manufacturer recommended procedures.

• Start the server, and test the system monitoring agent by running it.

• Benchmark the server, before installing any additional software.

• Document the upgrade, including the version of the software installed.

Upgrade of Service Tools

Common software service tools are as follows:

• EISA configuration utility: a utility provided by the motherboard vendor for specifying the EISA
adapter being used and the slot number in which the adapter is plugged. EISA uses a crude form
of auto-configuration, which depends on a database of devices contained on a disk. The
motherboard manufacturer provides the database and a configuration utility called the EISA
configuration utility. When an EISA device is plugged into an EISA slot, the user must run the
configuration utility and select the device being installed from the list of devices in the database.
The EISA configuration utility reads the configuration details contained in the database, and
appropriately configures the device. An upgrade for this utility upgrades the database of
components that the EISA bus can auto-configure. Because EISA adapters are no longer
manufactured, upgrading the EISA utility is no longer practiced.

• Diagnostic partition: a small partition of the hard disk on the server, which is usually invisible in
normal use. The partition contains diagnostic utilities for troubleshooting configuration issues
when boot up problems occur. The utility is invoked by booting from a floppy disk or by pressing a
hot-key combination. Upgrades are available from the motherboard manufacturer’s Web site.

• SSU: a configuration utility used instead of the CMOS utility. The SSU is usually a floppy disk or
CD-ROM and must be used to boot the server. The utility is then used to adjust configuration

www.transcender.com

85
CompTIA SK0-002 Study Guide

parameters for the server. Upgrades for the SSU are available from the motherboard
manufacturer.

• RAID utility: a configuration utility used to configure RAID levels, define arrays, and so on.
Upgrades for the RAID utility are available from the RAID hardware vendor.

• External storage utility: a configuration utility for configuring external storage. Upgrades for the
external storage utility are available from the external storage hardware vendor.

• SCSI configuration utility: a firmware utility that is launched by pressing a hot-key combination on
boot up. The utility allows the SCSI adapter to be configured. Importantly, it enables loading of
the SCSI BIOS into memory at boot time. If loaded, the SCSI BIOS allows a SCSI drive to act as
a boot device.

• CMOS setup utility: a firmware utility that is launched by pressing a hot-key combination on boot
up. The CMOS setup utility allows adjustment of configuration parameters for the server. CMOS
setup is rarely upgradeable. If allowed, the upgrades are available from the motherboard
manufacturer. Server manufacturers, in general, tend to provide SSU rather than CMOS utility for
setting configuration. The SSU is a floppy or CD-ROM disk-based setup utility, in contrast to the
firmware-based CMOS setup utility: essentially they perform the same function.

Best practices for upgrading service tools are as follows:

• Locate and obtain the latest version of the service tools, operating system updates, and so on
prior to the installation. This ensures that you have all the update software handy, just in case you
require them.

• Review all documentation including FAQs, discussions, and tips on the Internet. This gives you a
forewarning of what to expect during the installation. Review service and maintenance logs for
the server to benefit from past experience of upgrades of service tools.

• When the upgraded server performs a business-critical role, test and pilot the installation using a
server with matching hardware. This will help iron out issues before the real server is shut down
for the upgrade. If piloting is not feasible, be prepared to roll back changes in case the upgrade
causes problems. Back up the server data and configuration information.

• Plan the installation, and schedule the downtime to cause the least inconvenience to users.

• Carry out any updates on the operating system if that is required. Install the software using
vendor-recommended installation steps.

• After the installation, start up the server. Run the tool to confirm that it is working correctly.

• Baseline the server before making further changes.

• Document the upgrade.


UPS Upgrade

Upgrading a UPS becomes necessary when the backup time must be increased. The backup time is the
duration that the UPS is able to maintain the rated load. At the end of this duration, the UPS shuts down
and must recharge before it can provide backup power again. The duration should be increased if the
average duration of outage has increased. Additionally, as batteries age, they slowly lose their charge-
holding capacity. As a result, the duration of backup provided by the UPS tends to drop. You may also
need to upgrade a UPS if the maximum load, in other words, the maximum power that the UPS will
supply, must be increased. If at any instant, the power drain or load increases above the maximum UPS
power rating, the UPS will overload and shut down. The load must be backed off before the UPS restarts.
Thus, if additional computers need to be powered by the UPS, the increased load may exceed the
maximum power rating for the UPS. These scenarios call for a UPS upgrade.

The electrical power consumed by any equipment is expressed in volt-amperes (VA), where 1 VA = 1 volt
(V) x 1 ampere (A). Any server that draws 3 A from a 110 V supply is drawing 110 x 3 = 330 VA. The unit
watt (W) is also used to express power, where 1 W = 1 A x 1 V. Although watts and volt-amperes appear
equivalent, VA is the more accurate measure of load for technical reasons. The actual power drain on
a UPS is calculated by summing the individual power requirements of the equipment connected to the
UPS. If five servers, each drawing 350 VA of power, are connected to a UPS, then the load on the UPS
is 5 x 350 = 1750 VA, or 1.75 kVA. Keep in mind that 1 kVA = 1000 VA. This is summarized below:

Total load on UPS (in VA) = sum of the individual loads imposed by all equipment connected to the UPS (in VA)

Batteries are rated in ampere-hours (Ah). A 12 V, 20 Ah battery can supply 20 A for 1 hour before it is
exhausted and requires recharging. The backup time of a UPS is directly related to the battery rating. A
12 V, 20 Ah battery stores 12 x 20 = 240 VAh (volt-ampere hours) of energy. A 110 V UPS supplying 2 A,
in other words delivering 220 VA, will exhaust the battery in 240 / 220 = 1.09 hours. If two hours of backup
is required, then the battery capacity should be doubled to 12 V, 40 Ah.

Consider an example that clarifies the calculations:

Your organization uses five servers rated at 370 VA each, and 15 desktop computers each drawing 220
VA. Your organization also uses miscellaneous equipment such as external RAID, routers, hubs and
switches that collectively consume 1250 VA. You have been asked to specify a UPS that will keep your
equipment running for five hours in the event of a power outage.

Step 1: Calculate the total load in your organization

5 servers each drawing 370 VA = 5 x 370 VA = 1850 VA

15 desktops each drawing 220 VA = 15 x 220 VA = 3300 VA

Misc. equipment drawing 1250 VA = 1250 VA

Total load = 6400 VA


Step 2: Calculate the Power Rating for your UPS

Total connected load (from last step) = 6400 VA

Required UPS rating = 6400 VA

In real life situations, you would add a margin of say 25% to 50% to this rating to take care of
transient overloads and other emergencies.

Step 3: Calculate the battery capacity for the UPS

The total current to be supplied by the battery (A)

= total load (VA) / mains voltage (V)

= 6400 VA / 110 V = 58.18 A

Battery capacity (Ah) = backup time (h) x current (A)

= 5 h x 58.18 A = 290.9 Ah (say 291 Ah)

Batteries are available in standard Ah capacities. You will need to select an available battery with Ah
capacity equal to or higher than 291 Ah.

Note: The voltage of the battery is specified by the UPS and cannot change. Also, the calculation
indicated is the ideal case. In real life, there are losses in the UPS that waste a part of the battery rating.
While calculating load requirements, provide safety margins for transient load requirements and growth.
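The sizing arithmetic from the worked example can be reproduced in a short script. This is the ideal case, with no safety margin and no allowance for UPS losses; the function names are illustrative, and the figures are those used in the example above.

```python
# Reproduce the UPS sizing arithmetic from the worked example (ideal case,
# no safety margin; real UPSs waste part of the battery rating as losses).

def total_load_va(loads_va):
    """Step 1: sum the VA loads of all connected equipment."""
    return sum(loads_va)

def battery_capacity_ah(load_va, mains_voltage, backup_hours):
    """Step 3: current (A) = load (VA) / mains voltage (V);
    capacity (Ah) = backup time (h) x current (A)."""
    return backup_hours * (load_va / mains_voltage)

loads = [370] * 5 + [220] * 15 + [1250]       # servers, desktops, misc.
load = total_load_va(loads)                   # 6400 VA
capacity = battery_capacity_ah(load, 110, 5)  # about 290.9 Ah

# Earlier example: a 12 V, 20 Ah battery (240 VAh) feeding a 220 VA load
hours = (12 * 20) / 220                       # about 1.09 hours of backup
```

In practice you would inflate `load` by the 25% to 50% margin suggested above before choosing the UPS rating and battery.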

In the example above, the calculation indicates a 291 Ah battery. Before upgrading the battery to 291 Ah,
consult the UPS documentation to determine whether the battery charging circuits of the UPS can cope
with a 291 Ah battery. The UPS will indicate an upper limit to the size of battery, in Ah, that it can support.
In case the UPS cannot support the charging currents required by the upgraded battery, the UPS must be
upgraded to a larger one.

Some UPSs support hot swap battery replacements. In that case, the UPS does not have to be shut
down to add or change a battery. The UPS documentation will indicate whether hot swap replacement is
supported. You should always dispose of unwanted batteries in an environmentally responsible manner.

UPSs also contain firmware. The firmware is used to send status signals to attached equipment. In case
of an impending shutdown caused by exhausted batteries, the UPS signals servers accordingly. Servers
then initiate a graceful shutdown, which prevents data corruption. UPS manufacturers also provide
software that is loaded on the server. This software responds to signals sent by the UPS firmware, and
shuts down the server. The software is referred to by various names including power management
software, UPS monitoring software, and shutdown manager. On a regular basis, UPS manufacturers
provide upgrades to UPS firmware and the power management software.


Note: The cable that carries signals from a UPS to a server is sometimes referred to as a smart cable.
The cable usually plugs into a serial port on the server.

When upgrading a UPS, consider the physical requirements of the UPS. Every UPS requires a specific
amount of real estate, whether on the floor or in the rack. A UPS upgrade may occupy more space than
the equipment it is replacing. Ensure that adequate space is available.

Best practices for upgrading a UPS are as follows:

• Locate and obtain the latest version of the UPS firmware and the power management software,
operating system updates, and so on prior to the installation. This ensures that you have all the
update software handy, just in case you require them.

• Review all documentation including FAQs, discussions, and tips on the Internet. This gives you a
forewarning of what to expect during the installation. Review service and maintenance logs for
the server to benefit from past experience of upgrades of a UPS.

• When the equipment to be powered by the upgraded UPS performs a business-critical role, test
and pilot the installation using matching loads deployed in non-critical roles. This will help iron out
issues before the live equipment is shut down for the upgrade.

• Back up the data and configuration of all equipment to be powered by the upgraded UPS.

• Plan the installation, and schedule the downtime to cause the least inconvenience to users. A
UPS shutdown is a major event in an organization. All equipment connected to the UPS needs to
be shut down along with the UPS.

• After the installation, start up the UPS and the equipment served by the UPS. Observe status
lights on the UPS to ensure that the UPS is working correctly. Load the power management
software on servers as recommended in the installation instructions for the UPS.

• A baseline is not relevant for a UPS upgrade.

• Document the upgrade.


Review Checklist: Upgrading Server Components


Discuss backups as a pre-upgrade best practice.

Discuss adding processors.

Discuss adding hard drives.

Discuss adding memory.

Discuss upgrading BIOS/firmware.

Discuss upgrading adapters and peripheral devices.

Discuss upgrading system monitoring agents.

Discuss upgrading service tools.

Discuss upgrading a UPS.



Proactive Maintenance
Scope
• Discuss types of backups.

• Discuss requirements of a baseline for comparing performance.

• Discuss physical housekeeping tasks.

• Discuss SNMP thresholds.

• Discuss a Server Management plan.

Focused Explanation
Proactive maintenance is carried out to maintain and improve network efficiency. It is distinct from
breakdown maintenance, which is carried out in response to breakdown. Proactive maintenance includes
the following main tasks:

• Perform backups to allow data recovery in the event of any kind of data loss.

• Perform baselines to record a performance yardstick against which future improvements and
degradations can be compared.

• Perform physical housekeeping to aid human activity in the server environment and also protect
servers.

• Set SNMP thresholds as a monitoring activity. Setting thresholds allows SNMP to automatically
take action when certain events occur.

• Monitor, maintain, and follow a server management plan, which is primarily a process driven
maintenance activity that includes monitoring and documenting activities as well.

Backup

All networks require some kind of backup and restore practices to ensure that data is safeguarded. Data
loss can occur due to hardware and software failure and to human error. Data backups prevent data
loss incidents from turning into major financial losses. Data backups are mostly made on tape media
because these represent the cheapest mass storage media available.

Backups can be of five types:

• Full: backs up data regardless of the state of the archive bit associated with each file. A full
backup resets the archive bit for each file after backup. This requires the most time to carry out.
Restorations are straightforward.

• Differential: backs up files that have changed since the last full backup. The files selected are
those with the archive bit set. A differential backup does not reset the archive bit after backup. A
differential backup set includes the most recent full backup and the most recent differential
backup. While restoring, the most recent full backup is restored, followed by the most recent
differential backup. Intermediate differential backups are not restored. Backup time for the
differential method is shorter than that for a full backup, but restoration takes longer.

• Incremental: backs up files that have changed since the last full backup or the last incremental
backup. The selected files are those with the archive bit set. An incremental backup resets the
archive bit after backup. An incremental backup set includes the most recent full backup and all
the subsequent incremental backups. While restoring, the most recent full backup is restored,
followed by all the subsequent incremental backups. The incremental backup tapes must be
restored in chronological order, starting with the oldest tape. Backup time is the fastest in this
method, but restoration time is the longest.

• Copy: a Microsoft term for a backup operation that backs up all selected files but does not
clear their archive bits.

• Daily: a Microsoft term for a backup operation that backs up all files changed during the
current day. Daily backups do not clear the archive bit.
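The selection and archive-bit rules for the five backup types can be summarized in a short simulation. This illustrates the rules described above rather than any real backup tool, and all names are hypothetical.

```python
# files maps file name -> archive bit (True = changed since last backup).
# This is an illustration of the selection/reset rules, not a backup tool.

def run_backup(files, kind, changed_today=()):
    """Return the files a given backup type selects, and reset archive
    bits where the type calls for it (full and incremental only)."""
    if kind in ("full", "copy"):
        selected = sorted(files)                  # everything, bit or no bit
    elif kind == "daily":
        selected = sorted(changed_today)          # today's changed files only
    else:                                         # differential / incremental
        selected = sorted(f for f, bit in files.items() if bit)
    if kind in ("full", "incremental"):           # these reset the archive bit
        for f in selected:
            files[f] = False
    return selected
```

Running a differential twice in a row selects the same files both times (the bit is left set); running an incremental twice selects files only the first time.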

Various backup strategies are used to ensure that the state of the servers can be restored as
accurately as possible.

In small organizations, a full backup may be taken every evening, whereas in a large organization, a
full backup may be taken weekly. The full backups are supplemented by daily incremental or
differential backups. It is important to periodically verify the backups through a restore and check
process.
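
The restore-and-check process can be as simple as comparing checksums of the source data and the
restored copy. A minimal sketch, with invented file names and a local copy standing in for the tape
restore:

```python
# A minimal "restore and check" sketch: hash the source data, restore a copy,
# and compare digests. File names and the temp directory are illustrative;
# real verification would restore from the actual tape.
import hashlib
import os
import shutil
import tempfile

def file_digest(path):
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "data.db")
with open(src, "wb") as fh:
    fh.write(b"payroll records")

restored = os.path.join(workdir, "data.db.restored")
shutil.copy(src, restored)              # stands in for the backup/restore cycle

if file_digest(src) == file_digest(restored):
    print("verification passed")
else:
    print("verification FAILED - investigate the backup set")
```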


Table 5-1 indicates popular strategies.

Backup Strategy: Full backups every day
Advantages: This is the most reliable strategy. All backups contain all data, and only the most
recent full backup is needed for restoration.
Disadvantages: This requires the longest time for the backup operation.

Backup Strategy: Full backup one day a week; differential backups carried out other days
Advantages: This requires less time for the restore operation than the full/incremental solution,
though more than daily full backups. Only the full backup and the most recent differential backup
are needed for restoration. Differential backups are shorter than full backups.
Disadvantages: Daily backup time is longer than with the incremental solution, because each
differential backup covers all changes since the last full backup.

Backup Strategy: Full backup one day a week; incremental backups carried out other days
Advantages: This requires the least time for the daily backup operation, because each incremental
backup covers only the changes since the previous backup.
Disadvantages: This requires the most time for the restore operation. The full backup and all
incremental backups since the last full backup are needed, restored in chronological order.

Table 5-1: Backup Strategies

Baselines

Baselines are used to identify performance issues. A baseline is a record of a server's normal
performance. After you have recorded normal performance, you can track the server's performance
over time and compare it against the baseline.

The following can be tracked for a baseline:

• Processor: The CPU usage is prime data for a baseline.
• Memory: The memory usage is useful data for a baseline.
• Disk: The disk read and write intervals and usage are important data for a baseline.
• Network: The network performance is useful in a baseline.

Establish baselines during hours of peak activity, as well as during periods of non-peak activity. About a
week of data is required to establish a baseline. The trends revealed by baselines will indicate where
performance needs to improve.
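
A baseline can be modeled as a set of recorded samples per counter. The sketch below is
illustrative; the counter name and readings are invented, and real numbers would come from OS
monitoring tools such as Performance Monitor:

```python
# A minimal baseline recorder: collect periodic samples per counter and
# summarize "normal" performance. Counter names and readings are invented.
from statistics import mean

class Baseline:
    def __init__(self):
        self.samples = {}                          # counter name -> readings

    def record(self, counter, value):
        self.samples.setdefault(counter, []).append(value)

    def summary(self, counter):
        vals = self.samples[counter]
        return {"avg": mean(vals), "peak": max(vals)}

baseline = Baseline()
for cpu in (35, 40, 90, 38, 42):                   # illustrative CPU % readings
    baseline.record("cpu_percent", cpu)

print(baseline.summary("cpu_percent"))             # average and peak of the samples
```

Comparing a later week's summary against the stored baseline reveals the trends mentioned above.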

The baselines are stored along with server maintenance and service logs. This allows the baseline data
to be scrutinized whenever server logs and maintenance logs are consulted for maintenance and upgrade
considerations.


Adjust SNMP Thresholds

Thresholds result in alerts to the network administrator. SNMP thresholds can be adjusted on an SNMP
agent to notify the SNMP network management station (NMS) whenever some network activity reaches a
particular level. For example, SNMP thresholds can be set to trigger an SNMP trap when the CPU activity
of a router crosses a certain value, indicating impending overload of the router. A threshold can even be
set to indicate an excessive number of IP datagrams crossing a particular gateway. The trap will send a
message to the NMS, which can generate an alert for the administrator.

Used with care, SNMP thresholds can provide a way of monitoring and controlling the network in an
automated fashion.

You will set a threshold whenever you wish to be informed of an abnormal level of activity in your
network. The level that is considered abnormal is decided by you and could be based on the historical
level of activity. An example of this is given below:

You have installed a managed switch on a newly created section of your network. You wish to monitor
network traffic and be alerted whenever network performance is significantly degraded by excessive
levels of certain activities. The activities that are important to you are collisions, broadcasts, and total
errors.

You will set SNMP thresholds to generate traps whenever the levels for any of these traffic parameters
cross a certain value. Because the switch is newly installed, there is no historical level of activity that you
can use as a benchmark. You would thus start with default or even best guess values for levels and refine
them over time. You can sometimes determine suitable thresholds by observing the values on a similar
piece of equipment. Values that suit one network under certain circumstances may be completely
irrelevant for other networks and circumstances.

In the previous example, you may find that you receive a large number of alerts for total errors. However,
your investigations may reveal that the total number of errors does not degrade network performance in
any discernable manner. In that case, you would probably increase, step-by-step, the threshold for total
errors. The correct level is established when you are alerted only if the total errors significantly degrade
network performance. This level will be identified through trial and error. For an established network,
historical data from baselines can identify the level.
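
The trial-and-error refinement described above can be sketched as a loop that raises the threshold
until alerts fire only for genuinely harmful activity. The counts, step size, and the "harmful"
test are all invented; a real NMS exposes its own threshold configuration:

```python
# A sketch of step-by-step threshold tuning: raise the trap threshold until
# alerts fire only for counts that actually degrade performance.

def tune_threshold(observed_counts, degrades_performance, start=100, step=50):
    threshold = start
    # While any sample would trip the trap without actually hurting the
    # network, the threshold is too noisy - raise it one step.
    while any(c > threshold and not degrades_performance(c) for c in observed_counts):
        threshold += step
    return threshold

history = [120, 180, 260, 410, 150]        # total errors seen per polling interval
harmful = lambda count: count > 400        # assume only >400 errors degrade performance
print(tune_threshold(history, harmful))    # settles at 300: traps fire only for 410
```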

Keep in mind that the trap, beyond serving to alert the administrator, can also be set to trigger an
automated response from the SNMP NMS. In the previous example, a switch port can be turned off when
excessive collisions are reported through a trap.

The exact manner in which the thresholds are specified varies and is dependent on the SNMP NMS
software used.

Physical Housekeeping

There are quite a few automatic controls that allow you to control the server environment. For example,
the temperature and humidity are automatically maintained through air conditioning. However, some
amount of physical housekeeping is also required. A clean and well-maintained environment will promote
attention to detail and will reduce stress in the workplace.

Regular cleaning schedules should be set up and maintained. Dust buildup within a server room is
harmful to equipment. Cables should be routed neatly along walls. The use of cable management
accessories will allow neat routing of cables. Storage must be provided for tools and spares.

Server Management Plan

Server management includes the tasks of system management, fault management, data collection, and
change management. These tasks have been described in the Server Baseline and Management section
of Chapter 3.

Most organizations have a server management plan in place. Within the context of this plan, monitoring
servers and other equipment on the network for errors is a part of data collection. In addition to monitoring
the equipment, the agents on managed devices can be configured to send out traps whenever certain
errors occur. The traps work even in the absence of polling, which is the method used by the NMS to
examine the MIB on the agent.

Outright failure of any device on a network is a situation that you should be immediately notified of as a
part of fault management. You should also be notified if there is a component failure within redundant or
non-redundant equipment that does not result in immediate failure of the overall equipment.

Redundant hardware, which contains multiple units of the same hardware, is designed to continue
working even if some of that hardware fails. For instance, consider a RAID-5 setup using a 3-disk array.
In the event of failure of one of the disks, the RAID continues to work, but at a reduced performance level.
However, in this state, the RAID array offers no redundancy, and an additional failure will result in total
data loss. Under these circumstances, it is important to replace the failed drive at the earliest
opportunity. This holds true for most other types of redundant hardware as well.
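
The RAID-5 example can be summarized in a small sketch. The capacity formula (one disk's worth of
space reserved for parity) is the standard RAID-5 arrangement; the disk counts and sizes are
illustrative:

```python
# A sketch of the RAID-5 behavior described above: one failure leaves the
# array degraded with no redundancy left, and a second failure loses data.

def raid5_usable_gb(disk_count, disk_size_gb):
    return (disk_count - 1) * disk_size_gb     # one disk's capacity goes to parity

def raid5_state(failed_disks):
    if failed_disks == 0:
        return "healthy (redundant)"
    if failed_disks == 1:
        return "degraded - no redundancy left; replace the drive now"
    return "failed - data loss"

print(raid5_usable_gb(3, 500))     # 1000 GB usable from a 3 x 500 GB array
print(raid5_state(1))
print(raid5_state(2))
```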

Using SNMP and configuring traps for the failure of a component on a redundant system is a good failure
notification method. The NMS should alert the administrator about the failed component and the fault-
management section of the server management plan should allow classifying these failures as high-
priority events so that the components can be replaced at the earliest opportunity.

A similar system should be implemented for essential components of servers that are not necessarily
redundant but whose failure does not cause immediate failure of the server. An example of such a
component is the CPU cooling fan that keeps the processor cool. In the event of a fan failure, the
computer may keep working, but the fan must be replaced at the earliest opportunity. The server
management plan should allow classifying such events as high priority so that these can be replaced or
repaired.


Review Checklist: Proactive Maintenance


Discuss types of backups.

Discuss requirements of a baseline for comparing performance.

Discuss SNMP thresholds.

Discuss physical housekeeping tasks.

Discuss server management and change plans.


Server Environment


Scope
• Discuss physical security issues of servers.

• Discuss environmental parameters, such as temperature and humidity, ESD, power issues, fire
suppression and floods.

Focused Explanation
The physical security of the server room is an important part of the server environment. The server room
and its contents represent a substantial investment, but the server is something that cannot be kept
locked up all the time. At the same time, the server room and servers within it are vulnerable to threats of
tampering and theft. Tampering with servers can be done with or without malicious intent. Casual users,
experimenting with a server that is left accessible, represent as big a threat as a user with malicious
intent.

Limit access to the server room. You should also have restricted access to backup tapes. The restrictions
should be governed by an organizational policy and enforced by a physical deterrent in the form of
access control hardware. The control hardware can consist of a lock on the main entrance door of the
server room with keys available to the authorized personnel only. You can also use a swipe card access
control setup to limit the access to the server room. The advantage of a swipe card system is that any
access to the server room can be logged.

Server room access should preferably be controlled through biometric authentication because biometric
authentication is the best form of security. Swipe cards are the next best access control measure, followed
by keys. Access to the server room should be restricted to only those who have a role to play within the
server room. This would include all personnel responsible for administering and maintaining servers.
Keys, swipe cards, and so on, should only be issued to such personnel.

Biometric authentication methods scan a specific biological trait, and identify details that are unique to an
individual. Biometric authentication includes recognition of voiceprints, fingerprints, retinal scans, facial
scans, and so on. Each method employs a specific sensor that reads the biological characteristic in
question. The characteristic is compared against a stored copy contained in a database. If a match is
found, the person is successfully identified. The hardware and software for fingerprint recognition is the
most inexpensive, and this is the most common biometric authentication method used.

Inside the server room, individual racks should be locked. This will reduce chances of theft, and will also
prevent access to the servers. Rack systems can be secured through a variety of lockable doors. Full size
doors are the same height as the rack, and are available in glass fronted or sheet steel varieties. Sheet
steel doors offer greater security, but glass fronted doors allow you to observe status lights of rack
mounted equipment. Smaller doors are also available, with heights specified in U to match specific
servers. These allow you to block off part of the rack. Small doors are available in both Plexiglas and
sheet steel variants. Both the front and back of a rack can be provided with lockable doors. Cable locks
and cable lock alarms are used to secure cables on notebook computers and are not used in racks.


Backup tapes that are stored onsite should be protected from environmental threats. The threats include
fire, magnetic fields, and condensation. A fire proof safe represents a secure storage area, and should be
used for storing backup tapes. Offsite storage of tapes has similar requirements. Here again, a fireproof
safe provides secure storage. Because a safe is lockable, it prevents unauthorized personnel from
physically accessing backup tapes.

Backup tapes should be disposed of carefully. Simply erasing data from tapes does not ensure complete
destruction of data from tape. If you dispose of backup tapes after simply erasing them, there is a
possibility that your data can be extracted and used in an unauthorized manner. Sanitizing ensures that
data is removed in a permanent and irrecoverable manner from backup tapes, and it involves several
approaches:

o Overwrite tapes with non-sensitive data, before erasure and disposal. This makes recovery of
sensitive data more difficult. This method is, however, not foolproof, because sensitive data may
still be recovered from tapes that have been overwritten once.

o Degauss tapes using degaussing equipment. A special electromagnet that creates a strong
magnetic field is used to destroy sensitive data on tapes. The degaussing equipment must be
used as per manufacturer recommendations if you wish to ensure that data is adequately erased.

o Overwrite tapes multiple times, before erasure and disposal. Software applications are used to
overwrite tapes with binary 1s, and then with binary 0s, and then with a random stream of 0s and
1s. This method is more effective in protecting sensitive data as compared to overwriting tapes
just once or degaussing tapes.

o Physically destroy tapes by chopping them up into small bits. This is a very effective sanitizing method,
provided that the tape is cut up finely enough. Incineration serves the same purpose, but is not
recommended due to environmental concerns. Overwriting tapes multiple times, followed by
physical chopping up of the tapes represents a comprehensive sanitizing process.
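
The multi-pass overwrite can be sketched as follows. A bytearray stands in for the tape, since real
sanitizing software writes through the tape drive:

```python
# A sketch of the multi-pass overwrite above: binary 1s, then 0s, then a
# random stream, each pass replacing the previous contents.
import os

def sanitize(tape):
    for pattern in (b"\xff", b"\x00", None):       # 1s pass, 0s pass, random pass
        data = os.urandom(len(tape)) if pattern is None else pattern * len(tape)
        tape[:] = data                             # each pass overwrites the last
    return tape

tape = bytearray(b"CONFIDENTIAL PAYROLL DATA")
sanitize(tape)
print(len(tape), "bytes overwritten in 3 passes")
```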

All security measures within a server room should be documented. Regular audit and review of security
measures within a server room is a good practice.

Environmental Parameters
Environmental parameters that must be maintained to ensure that servers operate efficiently include
temperature, humidity, electrostatic discharge (ESD), and power issues. In addition, there are other
environment-associated considerations, such as fire and flood considerations, that affect the server room
setup.

Temperature

Electronic components are apt to malfunction or fail at high temperatures. Generally, the server room
temperature should not go outside the 10-28°C (50-82°F) band. These temperatures, however, are the
boundary values beyond which your server may fail. Normally, you would attempt to maintain the
temperature in the 18-22°C (65-72°F) range. To keep servers running efficiently in an error-free manner,
temperature control measures are essential.


There are two approaches to temperature control:

o Cool the server. Most servers are provided with fans that cool the server case. Most
motherboards are provided with sensors that can monitor internal temperatures and fan speeds.
If high temperatures are detected, software on the server will either flash a warning or shut down
the server. Servers must be placed so that the air intake of the fans does not pull in air from a
locally heated air stream, such as the heated exhaust air of another server. Within a server room,
there are hot spots and cold spots. Air intakes must be located in cold spots.

o Cool the room. The server room is invariably air conditioned to control the temperature and
humidity. The allowable range of temperatures for a server room is 50-82°F. In most cases, the
temperature in the server room is actually maintained between 65-72°F. It is important to maintain
temperatures along the aisles between the racks in a server room. Unless cool air from air
conditioning vents is routed appropriately, temperatures may rise as hot air exhausted from
servers builds up along aisles. Air conditioning within a facility is often turned down to conserve
power at night and during the weekends and holidays. Extremely low air conditioning could
severely impact the server room environment. Server rooms are usually provided with their own
environmental controls, which are set to ensure that server room temperatures do not go outside
safe limits.

Humidity

The recommended range of humidity in a server room is 45-50%. This level of humidity is appropriate
even in the face of temperature fluctuations. If the humidity falls too low, there is a danger of static
discharges occurring. If the humidity rises too high, there is danger of corrosion from condensation.
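
The temperature and humidity limits above can be combined into a simple range check; the sensor
readings passed in are illustrative:

```python
# A range check built from the figures quoted above: 10-28 C allowable,
# 18-22 C preferred, 45-50% relative humidity.

def check_environment(temp_c, humidity_pct):
    alerts = []
    if not 10 <= temp_c <= 28:
        alerts.append("temperature outside allowable band")
    elif not 18 <= temp_c <= 22:
        alerts.append("temperature outside preferred band")
    if humidity_pct < 45:
        alerts.append("humidity low - ESD risk")
    elif humidity_pct > 50:
        alerts.append("humidity high - condensation risk")
    return alerts or ["environment OK"]

print(check_environment(21, 47))   # ['environment OK']
print(check_environment(26, 40))   # temperature and ESD warnings
```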

ESD

The server room environment should be controlled to minimize ESD. This will help protect expensive
equipment from ESD damage. For example, the humidity should be controlled.

Carpeting on the floor, especially synthetic carpeting, increases the risk of ESD. To prevent this, bare
floors or special static-controlled carpets could be used. Static-controlled carpets use conductive fibers
and natural yarns to provide a static-free floor covering.

Antistatic sprays should be regularly used in server rooms to control ESD.

Power Issues

Conditioned power is important for servers. Defects in AC power include power surges, sags, outages,
and brownouts.

A sag is a fall in the supply voltage for a short duration of time, whereas a surge is a rise in the supply
voltage for a short duration. A brownout is a sustained low-voltage condition in incoming electrical power.
The built-in power supply unit within a server cannot filter out these kinds of defects, and the resultant
effect on the server may be loss of data or an unexpected reboot.


All server installations should use UPSs to supply power in the event of power failure. UPSs have power-
conditioning filters built in. These filters smooth out long- and short-duration defects, including sags and
surges. If the voltage stays low for an extended duration, the UPS supplies power from batteries until the
situation improves.

UPS Types

A UPS works by storing power in rechargeable batteries. All UPSs contain a component called an
inverter. During a power outage, the low-voltage direct current (DC) supply from the batteries is converted
to 110 V AC by the inverter and supplied to computers.

Different types of UPSs are available:

o Offline (also called standby) is the least expensive type. When the AC supply is available, it is
routed directly to the output of the UPS, and the inverter stays off. In this case, the batteries keep
charging. During a power outage, the inverter supplies AC power to the output of the UPS. The
offline UPS does not provide very good regulation of utility-supplied power, although the voltage
regulation is good when the inverter is running. When faced with frequent sags, the inverter tends
to start up, and as a result, the offline UPS can quickly exhaust its batteries.

The defining characteristic of an offline UPS is that the inverter stays off whenever incoming AC
power is available. An automatic change over switch routes incoming AC power to the output of
the UPS during the periods when power is available. During a power outage, the change over
switch routes power from the inverter to the output of the UPS.

o Line-interactive is similar to an off-line UPS but is able to regulate incoming utility-supplied power.
This makes the line-interactive UPS superior in situations where there are frequent sags in
voltage. Additionally, the line-interactive UPS is more reliable and more expensive than an off-line
UPS.

The defining characteristic of most line-interactive UPSs is that the inverter always stays
connected to the output. Under normal conditions, an automatic change over switch routes
incoming AC power to the output of the UPS using the inverter. Simultaneously, a section of the
circuitry charges up the battery. During a power outage, the charging current to the battery is
stopped, and battery power feeds the output using the inverter. Because the inverter is always
on, and the voltage regulation features of the inverter are always available, the line-interactive
UPS can thus condition incoming AC power in a manner that the offline UPS cannot.

o The Online UPS is more expensive than the line-interactive UPS. The inverter of the on-line UPS
always stays on. This kind of UPS is the most reliable and provides the best power conditioning.

The defining characteristic of an online UPS (a “double-conversion online UPS”) is that incoming
AC power is converted to DC, and then routed to the output using the battery and the inverter.
When incoming AC power is available, part of the converted DC power charges the battery, and
the remaining part of the converted DC power is converted to AC by the inverter and routed to the
output of the UPS. Under these circumstances, the battery normally stays in a fully charged
condition, and the inverter works continuously.


During outages, the inverter converts DC power from the battery to AC and supplies the output.

In addition to the above, a double conversion UPS also provides an alternate route for incoming
AC power. Here, the incoming AC feeds the output directly through an automatic change over
switch. The change over switch switches to the alternate route only when the inverter develops a
fault.

All the power-conditioning features of the UPS are always available. The design of most online
UPSs provides even better power conditioning than the line-interactive UPS.

The different types of UPSs cannot be distinguished externally. The label on the UPS may indicate the
type, whether offline, line-interactive, or online. The most reliable source for the type specification of an
UPS is the documentation that accompanies the UPS.

Table 6-1 provides a comparison of the UPS types.

Type of UPS       Typical Load   Conditioning of       Cost of UPS      UPS         Inverter
                  Rating         Incoming AC Voltage                    Efficiency  Always On?
Standby           0 - 0.5        Low                   Low              Very High   No
Line Interactive  0.5 - 5        Medium                Medium           Very High   Most designs
Online            5 - 5000       High                  Medium to high   Low         Yes

Table 6-1: Characteristics of UPS Types

Given a particular application, the selection criteria for UPS type depend on load ratings and economics.
Not all types of UPSs are available in all load ratings. In addition, for a specific load rating, the online UPS
tends to be the most expensive, while the standby UPS tends to be the least expensive.
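
A rough runtime estimate also helps when matching a UPS to a load. The formula below (battery
watt-hours divided by load watts, derated for inverter efficiency) is a common back-of-envelope
approximation, not a figure from any vendor's datasheet:

```python
# A back-of-envelope UPS runtime estimate; all figures are illustrative.

def runtime_minutes(battery_wh, load_w, inverter_efficiency=0.9):
    if load_w <= 0:
        raise ValueError("load must be positive")
    return battery_wh * inverter_efficiency / load_w * 60

# Illustrative: 1200 Wh of batteries feeding a 600 W server
print(round(runtime_minutes(1200, 600)), "minutes of runtime")
```

Real runtime falls off faster than this linear model at heavy loads, which is one reason backup
generators supplement UPSs for long outages.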

Backup Generators

A UPS can supply power from batteries for only a finite duration of time. To prevent the UPS batteries
from running down completely in the face of an extended power outage, a backup generator is frequently
used. A backup generator is usually a diesel generation set that starts automatically when a power
outage lasts longer than a fixed threshold interval. The threshold interval is different for different
organizations, and is set so that the UPS batteries do not get significantly drained before the generator
starts up. Some backup generators also require a manual start.


Fire suppression

All server installations require some form of fire detection and suppression system. Commercial smoke
and fire detectors are adequate for the detection part. For fire suppression, various methods exist:

o Water-based is a conventional and trusted fire suppression method. When fire is detected,
sprinklers are turned on. Water-based systems have two shortcomings. Firstly, there is a danger
of short circuits because there are live electrical connections present. Secondly, the mess created
by water-based systems requires time to clean up. However, in some locations, such as
California, a water-based suppression system must be provided with alternative systems being
optional backups.

o Halon-based uses a gas to suppress the fire. Halon is ozone-depleting, and is declining in use as
a fire suppressant.

o FM-200-based uses a non-ozone-depleting gas, and is popular as a fire suppressant. It is more
expensive than a halon-based solution.

o CO2-based is a very effective solution that blankets fire with carbon dioxide gas, thus putting out
the fire. However, it is also potentially lethal to anyone in the room where it may be triggered.
CO2 systems need to be disabled whenever you enter rooms equipped with CO2 suppression
systems. They can be enabled for use again when no personnel are in the room.

In an emergency situation where the gas system is used, it is a one-time event. If the gas system
does not suppress the fire, it cannot be discharged a second time. For this reason, many
companies also incorporate a water system as a backup in case the gas system fails to suppress
the fire.

Gas suppression systems must be recharged once they are used. When the system triggers, all
the gas in the system is discharged, meaning the only way to use the system again is to
recharge it.

Floods

While locating a server room, it is worthwhile to explore the possibility of flood damage. In areas that pose
a potential for flood damage, it may be better to locate the server room on a higher floor. A simple
relocation may save a lot of expensive hardware and recovery effort, in case a flood does occur.


Review Checklist: Server Environment


Discuss physical security issues of servers.

Discuss environmental parameters, such as temperature, humidity, ESD, power issues, fire
suppression and floods.


Troubleshooting and Problem Determination




Scope
• Discuss problem isolation.

• Discuss use of hardware and software tools for diagnostics.

• Discuss identification of performance bottlenecks.

• Discuss processor affinity.

Focused Explanation
Performing Problem Determination

Finding a cause for a server or network-related problem could be a complex task unless you break the
process down into manageable steps. After you have determined the cause and isolated the problem,
you can find a solution for the problem.

Problem determination is the process of finding a cause for a problem. This involves confirming that a
suspected component or subcomponent is the real cause of the problem. For example, a user reports
that the network is not working. You can browse the network to confirm whether or not it is working by
double-clicking the My Network Places icon on your server console. If you are able to browse the
network, you need to determine the exact cause of the user’s problem. In this case, the problem could be
that the user is unable to log on at the login stage of starting the computer. It could also be that the user is
unable to log on because the Caps Lock key was pressed while entering the password, which makes the
password incorrect. The real problem is quite different from what was reported.

The use of questioning techniques can prove valuable in determining the problem. You can ask the user
to describe the sequence of actions that were performed just before the problem occurred. This will reveal
the symptoms, rather than the user’s interpretation of the problem.

If you are observing the problem first hand, use your senses to home in on the problem. There may be
indications that will help you in problem isolation. For example, check for any smoke, unusual smell, or
high temperature. Are any LED indicators lit up differently, perhaps blinking red rather than showing
steady green? Is the display showing some unusual status information? You should also check for any
cables that are not plugged in correctly.

You should determine whether the problem is caused by software or hardware malfunction. Software
malfunctions are normally rectified by reconfiguring the component and reloading the drivers. Hardware
errors are usually addressed by substitution of the component.

After isolating the problem, identify a person who is responsible for sorting out the problem. Many
organizations have different people who are responsible for solving different problems. If you are that
person who is responsible for the problem, then proceed to eliminate different components, one by one,
as the cause of the problem. For example, if you identify the problem as an unstable display, then
eliminate the video drivers, the monitor, and then the video card, as sources for the problem. You can do
this by reloading the drivers and observing the display. If the problem persists, then the problem is
probably not due to drivers. Substitute the monitor with a new one, and then observe the display. If the
display is still unstable, then the original monitor was obviously not at fault. You can then proceed to
substitute the video card with a new one.

For hardware errors, remove one component at a time, and observe after each change. In the previous
example, do not change both the monitor and the video card in a single step. If the observed problem
symptom goes away on replacement of the component, then the problem is solved.

Troubleshooting Considerations
There are various software tools that can help you in problem determination. If a problem does exist, the
tools also help you to identify the component that is responsible for the problem.

After you have identified the problem, you must address the problem. If the problem is caused by
malfunctioning of the hardware, then the component must be replaced. The replacement is carried out at
the level of a field replaceable unit (FRU). An FRU is the unit of replacement in an organization. Repairs
of the FRU are not attempted within the organization. Rather, the FRU is replaced, and the faulty unit can
be sent to a competent repair service.

If the problem is due to software malfunction, then the software must be updated or patches must be
installed. Failing this, the software must be replaced. If the problem is due to configuration error, then that
must be corrected.

For troubleshooting any of these cases, documentation, such as maintenance logs and service logs, is
very important. Many reported problems may have also been reported in the past. The solutions that may
have worked for these problems in the past are recorded in maintenance logs and service logs. The logs
represent a quick solution that you can adopt with confidence, knowing that they have worked in the past.

Server-generated errors can often point you in the right direction in a troubleshooting scenario. Server-
generated errors are often logged in error logs. A read-through of these can allow you to concentrate your
troubleshooting efforts in the right direction. Also important are change logs that indicate changes made
to hardware and software, including configuration. If a problem is due to a change recorded in the change
log, then it is easy to reverse the change.

Vendors of software and hardware also publish frequently asked questions (FAQs) on troubleshooting
and provide e-mail or Web support for their products. Although e-mail or Web support may involve fees,
FAQs, patches, updates, and bug-fix releases are generally free. It is a good practice to be familiar with
the Web sites of vendors of hardware or software used in your organization. It is also important to gather
all the documentation relating to hardware and software used in your organization. This will include paper
documentation, CD-ROMs, and any other documentation available.

www.transcender.com

108
CompTIA SK0-002 Study Guide

There are two general situations in troubleshooting where you could call for help:

o If the hardware or software vendors maintain help desks, call and ask for help. Usually help desks
are operated by professionals who can speedily address problems relating to their own products.

o Call for service if you have a service contract covering the kind of problem you are facing. If you are already paying for support, there is little reason to troubleshoot the problem yourself.

Tools

The software tools that are commonly used in the context of servers include tracert, ping, ipconfig, and telnet. Additionally, other tools, such as format, FDISK, defrag, and chkdsk, are used for the maintenance of disks.

The tracert Command

The tracert command traces the route to a destination network or a computer. If there is a problem
reaching the destination, tracert locates the hop where the problem has occurred. The tracert
command can trace up to 30 hops.

Note: A hop is a segment between routers traversed by a packet while attempting to reach its destination.

The syntax for the tracert command is as follows:

tracert [-d] [-h MaximumHops] [-j HostList] [-w Timeout] [TargetName]

For example, if you wish to trace the route from your computer to www.kaplanit.com, you can run the
tracert command. Figure 7-1 shows typical output for the command.
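The switches can be combined; for instance, a trace that skips reverse DNS lookups and limits the search to 15 hops might be run as follows (the host name and hop limit here are only illustrative):

```shell
# Skip reverse DNS lookups (-d) and limit the search to 15 hops (-h).
tracert -d -h 15 www.kaplanit.com
```

Skipping name resolution with -d speeds up the trace, because tracert otherwise waits for a reverse DNS lookup on every hop.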


Figure 7-1: Output for the tracert Command

Tracert is a Windows command. Unix computers provide the traceroute command instead. The
output of the traceroute command is shown in Figure 7-2.

Figure 7-2: Output for the traceroute Command


The command for Netware computers is IPTrace. Figure 7-3 shows the output of the IPTrace
command.

Figure 7-3: Output for the IPTrace Command

The ping Command

The ping command uses ICMP Echo Request messages to verify IP-level connectivity from one computer to another. The ping utility is primarily a troubleshooting tool used to verify the connectivity of a computer with the network. When the ping command is executed, the target computer on the network responds by returning ICMP Echo Reply messages. These replies confirm that the target computer is available on the network. The computer executing the ping command displays a status line on the screen for each response it receives.

The ping command has a number of parameters that can be used to verify the connectivity of a
computer. The syntax of the ping command is as follows:
ping [-t] [-a] [-n Count] [-l Size] [-f] [-i TTL] [-v TOS] [-r Count] [-s
Count] [{-j HostList | -k HostList}] [-w Timeout] [TargetName]

The ping command can test the connectivity of a computer using the computer name or the IP address
of the computer. Figure 7-4 shows a typical screen output on pinging a computer name.


Figure 7-4: Output for the ping command

There may be a situation where you are using the IP address, but you also need to know the host name
of a computer. In this situation, you can use the –a parameter with the ping command to resolve the
computer name from the IP address.

For example, if you ping an IP address with the –a parameter, the ping utility resolves and displays the host name of the computer while pinging it. The command is as follows:

ping –a 199.106.238.242

Figure 7-5 shows the results of ping –a.

Figure 7-5: Output of the ping –a Command


If you ping a computer and the computer fails to respond, an error message is displayed. The error
message can differ, depending upon the situation. For example, if the computer you are trying to ping is
outside the local network, and your gateway prevents relaying of the ICMP packets, you will get a
destination unreachable error as shown below.

Pinging kaplanit.com [199.106.238.242] with 32 bytes of data:


Reply from 172.17.60.1: Destination net unreachable
Reply from 172.17.60.1: Destination net unreachable
Reply from 172.17.60.1: Destination net unreachable
Reply from 172.17.60.1: Destination net unreachable

The IP address 172.17.60.1 is the IP address of the gateway that blocks the ICMP packets from moving
out of the network.

If you ping an IP address that is not being used, the error “Request timed out” appears. This error
message will also appear if the computer using the IP address is turned off or offline.

In another situation, you might attempt to ping a computer by a host name that does not exist. In this
case, ping returns the error message “Unknown host <computer name>.”

You can use the ping command continuously if you suspect that there is an intermittent problem. For example, a computer may be responding on the network but may be giving intermittent timeouts. You can use the –t parameter to ping continuously. The command is ping <computer name> -t.

The target computer is pinged continuously until you press Ctrl+C to terminate the command. You can also increase the time that ping waits for each response. The command to increase the timeout interval is as follows:

ping <computer name> -w <timeout>
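On a Windows server, the ping variations discussed above can be summarized as follows. The computer name, address, and timeout value are illustrative, and these switches are specific to the Windows version of ping:

```shell
ping server01              # basic connectivity test by name
ping -a 199.106.238.242    # resolve and display the host name for the address
ping -t server01           # ping continuously until Ctrl+C is pressed
ping -w 5000 server01      # wait up to 5000 ms for each reply
```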

The ipconfig Command

The ipconfig command displays the current TCP/IP configuration for all network interfaces on the computer. When you use the ipconfig command with the /all switch, the complete TCP/IP configuration for each adapter is displayed, along with the associated DHCP and DNS information. If you use the ipconfig command with the /release and /renew switches, the DHCP-assigned configuration for the local computer is released and then renewed. Figure 7-6 shows the output of the ipconfig /all command.


Figure 7-6: Output of the ipconfig /all Command

The ipconfig command can use several parameters that provide different information on the network
interfaces within the computer.

The two ipconfig switches used to troubleshoot network problems related to DNS on a local computer
are as follows:

• /flushdns: purges the DNS resolver cache on the local computer, which may contain old or stale entries. To delete the cached entries, use the /flushdns switch with the ipconfig command.

• /registerdns: registers the DNS name and the IP address of a computer in DNS. If a
computer fails to perform a DNS registration, you can use this switch to manually update DNS
entries.

The ipconfig command is a valid command on Windows NT, 2000, 2003, and XP.
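Putting these switches together, a typical DNS troubleshooting sequence on a Windows server might look like the following sketch:

```shell
ipconfig /all          # review the full TCP/IP configuration for each adapter
ipconfig /flushdns     # purge stale entries from the local DNS resolver cache
ipconfig /registerdns  # re-register the computer's name and address in DNS
ipconfig /release      # release the DHCP-assigned configuration
ipconfig /renew        # obtain a fresh lease from the DHCP server
```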


The ifconfig Command

The ifconfig command is a Unix and Linux command, and can be used on computers running Unix, Linux, and Mac OS X Server (which has a BSD Unix core). The ifconfig command is used to configure a network interface, usually at startup, and can also be used to check the configuration at any time.

The syntax of the ifconfig command is as follows:

ifconfig [interface]

or

ifconfig interface [aftype] options | address ...

There are many parameters that you can use with the ifconfig command. Some of the important
parameters are as follows:

• interface: specifies the interface as a driver name followed by a number, for example, eth0.

• address: specifies the IP address for the interface.

• netmask addr: specifies the netmask. If this is not specified, then the default netmask based on the IP address class is used.

Novell Netware uses the TCP/IP console (TCPCON) for setting, viewing, and managing TCP/IP. The console is menu-driven. The settings that you can change through TCPCON include the TCP/IP address for a particular NIC, the DNS address to be used, and protocols and their settings, if any.

The telnet Command

The telnet command can be used on Windows or Unix/Linux computers. It allows you to connect to a
remote computer on the network. On successfully connecting to a computer, your computer console
mirrors that of the remote computer as if you were working on the remote computer itself. Because you
are constrained only by permissions, you can run any command or utility on the remote computer exactly
as if you were locally logged in.
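For example, to open a remote console on a server named server01 (an illustrative name):

```shell
telnet server01      # connect to the default telnet port (23)
telnet server01 25   # telnet can also test other TCP ports, such as SMTP
```

The second form is a quick way to check whether a particular service is listening on a remote server.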

FDISK

Some versions of Windows include the FDISK utility, which is used for hard disk partitioning. This utility is
no longer available with Windows 2000 and later versions. FDISK is a command line utility that enables
you to create one primary partition and one extended partition on a hard drive. The primary partition can
be used as a boot partition. The extended partition could be further subdivided into logical drives, each of
which behaves like a disk partition under Windows. Logical drives cannot act as a bootable partition.

Windows 2000, 2003, and XP use the Disk Management utility, which enables you to create up to four primary partitions per drive. Figure 7-7 shows the Disk Management utility.


Figure 7-7: The Disk Management Utility

Basic Hard Disk Tools

Most operating systems include a basic set of hard disk tools for performing maintenance operations.
Common hard disk tools are as follows:

o Format: a utility that installs a file system on a disk partition. The file system makes a disk usable
by an operating system.

o Defrag: a utility that is used to defragment a filesystem. With time, files on a drive no longer
occupy neat contiguous areas. Rather, files get broken up into pieces, with different pieces stored
on different areas on disk. In other words, files get fragmented on the disk. A defragmenter, or a
defrag utility, collects the pieces of a file, and writes them back together in contiguous blocks.

o Chkdsk: a utility that corrects logical and physical errors on disks. With use, some storage areas on a hard disk become permanently damaged. This is normal, and the defect is known as a physical error. The check disk utility detects such errors and flags the areas as damaged in control structures on the disk. The operating system then considers the damaged areas unusable and does not use them in subsequent disk operations.

Check disk also corrects logical errors on disks. A logical error is a collective term for errors such as orphaned clusters and cross-linked files. An orphaned cluster is an area of the disk that is flagged “in use” when it actually is not in use. Cross-linked files are multiple files claiming use of the same area of the disk.
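On a Windows server, these tools are run from the command line; typical invocations (the drive letters are illustrative) are:

```shell
format D: /FS:NTFS   # install an NTFS file system on drive D:
defrag C:            # defragment drive C: (command-line defragmenter in XP/2003)
chkdsk C: /F         # check drive C: and fix logical errors
chkdsk C: /R         # locate bad sectors and recover readable data (implies /F)
```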


Performing an Orderly Shutdown

Whenever a computer needs to be turned off, it must be shut down using a formal procedure, rather than just switching it off. The computer must save data and state information to disk so that it can subsequently boot up in a stable and predictable manner. The shutdown procedure is different for each NOS.


Table 7-1 indicates the shutdown procedures for different NOSs.

NOS Shutdown Procedure

Windows Click Start ->Shut Down…

The Shut Down Windows dialog box is displayed.

Select Shut down from the drop down control, and then click OK.

Unix/Linux Type shutdown –t 180 "Server being shutdown. Please save files.", and press <Enter>. The optional switch –t 180 instructs the operating system to wait 180 seconds before shutting down. The quoted string is an optional message from you that will be displayed on users’ monitors.

You can type shutdown 5:00, and press <Enter>. This will shut down the server at 5 o’clock.

You can type shutdown –r, and press <Enter>. This will reboot the server.

You can type init 0, and press <Enter>. This will halt the server.

You can type init 6, and press <Enter>. This will reboot the server.

Netware Press Alt+Esc keys.

The Netware console screen is displayed.

Type down, and press <Enter>.

The following applies to Netware 6.5. At the Netware Console screen,

type down –f, and press <Enter>. The server shuts down immediately without a prompt.

Type restart server, and press <Enter>. The server is shut down and restarted without exiting to DOS.

Type restart server –f, and press <Enter>. The server shuts down immediately without a prompt, and restarts without exiting to DOS.

Type restart server –ns, and press <Enter>. The server restarts without using the startup.ncf
file.

Type restart server –na, and press <Enter>. The server restarts without using the
autoexec.ncf file.

Type reset server, and press <Enter>. The server is shut down, and the computer warm boots.

Type reset server –f, and press <Enter>. The server is shut down immediately without a
prompt, and the computer warm boots.


Mac OS X Press the Power key.

Table 7-1: Orderly Shutdown Procedures

Identify Bottlenecks

Many performance issues in servers can be traced to bottlenecks. In some cases, the component requires an upgrade. For example, a SCSI card being used for data transfer is a bottleneck due to its inherent maximum speed rating. The only course of action here is to switch to a faster version of SCSI, provided that the receiving equipment is compatible with the upgrade. On the other hand, if a traffic bottleneck is due to other devices attached to the SCSI bus, the solution would be to remove some of the connected devices.

One of the first things that you should determine is the function and role of the server. If you do not know
what is required of the server, you cannot determine what the load requirements will be.

In most cases, bottlenecks are corrected through load balancing, shutting down certain services, or rescheduling some tasks. The usual method of identifying a bottleneck is to run a performance monitoring tool and compare the results against an older baseline.

The following objects are normally measured and recorded using performance monitoring tools:

• Processor utilization: This object reflects the utilization of the processor by various software
applications and the operating system kernel. The usage and changes of usage over time are
some of the aspects measured.

• Page file: This object reflects usage of the page file that serves as virtual memory. Virtual memory is a method of using disk space to mimic RAM. Areas of RAM used by a background process are swapped, or written, to the page file on disk by the virtual memory manager (VMM) component of the operating system. The RAM thus freed is used by active applications. When the background process is activated again, the RAM contents are restored by swapping back from the page file. The extent of page file use and faults in swapping are some of the aspects measured.

• Disk utilization: This object reflects the usage of hard disk storage. The extent of use, along with
usage speeds, are some of the aspects measured.

• Memory utilization: This object reflects the usage of RAM. The extent of use and the changes in usage over time are some of the aspects measured.

• Network utilization: This object reflects the usage of the network. The traffic on the network and
the errors are some of the aspects measured.

Within each object, individual counters are defined, each representing a measurable aspect of the object.

In Windows, the performance monitoring tool is the Performance snap-in.


Figure 7-8 shows the Performance snap-in.

Figure 7-8: Performance Snap-in

Figure 7-8 indicates curves of two processor-related counters and one memory-related counter. The
trends for appropriate counters that are measured by the tool can be logged in files and can be used as
the baseline.
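Counters can also be captured from the command line with the typeperf utility included with Windows XP and Windows Server 2003. The following is only a sketch of how a two-hour baseline log might be gathered; the counter selection and file name are illustrative:

```shell
REM Sample three common bottleneck counters every 60 seconds,
REM 120 samples (two hours), writing the results to a CSV log file.
typeperf "\Processor(_Total)\% Processor Time" ^
         "\Memory\Pages/sec" ^
         "\PhysicalDisk(_Total)\% Disk Time" ^
         -si 60 -sc 120 -o baseline.csv
```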

A comparison of the counters logged under suspected bottlenecked conditions and the reference
baselines can reveal whether, and which, objects are the bottleneck.

Consider a server where you encounter the following readings for objects:

Processor utilization: 85%

Disk utilization: 23%

Memory utilization: 56%

These readings are the average readings over a two-hour period. During this period, about half the total
number of users on the network were accessing the server. There are extended periods during the
workday when almost all users simultaneously access the server. Looking at the readings, it is
reasonable to assume that the CPU usage will be a bottleneck in performance when all users
simultaneously access the server. This assumption should actually be confirmed by monitoring the CPU
load under peak usage conditions.

If the assumption is supported by observation, a solution for the bottleneck will have to be found. The solution may involve using a server cluster, switching to SMP, or upgrading to a faster processor. CPU load can also be reduced by relocating some services running on the server to another server.


Consider another server that shows the following readings:

Page faults per second: 100,000

Processor utilization: 68%

Disk utilization: 75%

A high reading for page faults per second is an indication that the server is low on physical memory. In
this scenario, additional memory should probably be installed, provided the page fault reading remains
high most of the time. The high disk utilization reading in this scenario is usually fixed when more memory
is added.

In cases where the hard drives are actually the cause of performance problems, the problems can be
fixed by freeing up space on the drive or by upgrading storage space.

Processor Affinity

In SMP, it is generally preferable to run a particular thread on a specific processor because the processor
cache may contain thread-related information, which will speed up the processing. Processor affinity is a
number that indicates the preferred CPU for execution of the software process or thread. The indicated
CPU is then preferentially used to execute the thread.
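In Windows, affinity can be set interactively through Task Manager (right-click a process and choose Set Affinity). On Linux, the taskset utility serves the same purpose; the program name and process ID below are illustrative:

```shell
taskset -c 0 ./myapp     # launch myapp bound to CPU 0
taskset -p 1234          # display the affinity mask of process 1234
taskset -c 0,1 -p 1234   # re-bind process 1234 to CPUs 0 and 1
```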


Review Checklist: Troubleshooting and Problem Determination


Discuss problem isolation.

Discuss use of hardware and software tools for diagnostics.

Discuss identification of performance bottlenecks.

Discuss processor affinity.


Disaster Recovery
Scope
• Discuss a recovery plan, including the need for a recovery plan.

• Discuss types of fault tolerance.

• Identify backup hardware devices.

• Identify backup and restore schemes.

• Discuss tape rotation schemes.

• Discuss hot and cold sites.

Focused Explanation

Businesses depend critically on the availability of their data. Non-availability of data, even for a small
period, can prove extremely expensive. Consider a scenario where the hard disk of a mission-critical
server fails but a backup of the data is available. In this case, the data is safe because the organization
has backed up the data. However, installing a new disk and restoring data from backup tapes will take
time. The organization will suffer a loss due to the time taken to get the server up and running again.
Consider a different scenario where the backup for the data is not available. This makes the situation
extremely expensive for the organization. Consider yet another scenario where data has been lost due to
a natural disaster, but the data backup is available. In this case, until the server and the facility are
replaced, the data cannot go back online.

The above are different aspects of a disaster. Each aspect requires a different approach and different countermeasures. For organizations, it is important to plan disaster recovery measures.

In fact, it is unusual to find an organization that does not have a disaster recovery plan. If a plan exists,
locate and read it. This will leave you prepared to carry out recovery processes in the event of a disaster.

Disaster Recovery
The two extremes in the spectrum of disaster recovery are as follows:

• A hardware failure or some other disaster destroys some data that needs to be available. The
challenge is to restore the data and get it online as quickly as possible. The disaster recovery
method may include using a backup/restore method that enables the quickest restore time.

• A natural disaster wipes out a facility where servers housing the data are located. The challenge
is to make the data available online as quickly as possible. The disaster recovery method may be
to invest in advance in a duplicate site that is ready to go online within minutes of shutdown of the
main site.

The planning for any kind of disaster recovery ranging between these extremes needs to balance the
criticality of the data, the down time that is acceptable in the event of a disaster, and the expense of the


solution that will address the problem. A disaster recovery plan can consist of one or more of the following
methods: fault tolerance, backup and restore plan, offsite storage, hot and cold spares, and hot and cold
sites.

Fault Tolerance

Fault tolerance is the ability to continue working in the event of a failure of a single component. Fault tolerance is a desirable attribute of any server component, including the WAN links for a network, power, storage, and even an entire server. With network servers, fault tolerance is a very important feature because a failed server can result in a huge loss of data. In fact, one reason servers are more costly than desktop computers is their increased reliability and fault-tolerant features.

Link Fault Tolerance

Failure of the WAN links leads to a halt in communication and data transfer. Fault tolerance for WAN links
is generally provided through redundancy. Multiple connections working in parallel are the most common
solution. If one connection fails, the remaining connections will support the load. This normally leads to a
drop in the data transfer capacity until the failed link is active again. However, this is usually an
acceptable alternative to a completely failed communication link.

Consider an example, where an organization chooses to have a fractional T1 line for connecting to the
Internet, with two ISDN connections supplementing the main link. The network is configured for load
balancing the T1 and the ISDN connections. In case of failure of the T1 line, the ISDN connections would
ensure network connectivity. Users would detect a slowdown in data transfer rate, but the link to the
Internet would continue to work. When the T1 line is functional again, the load would be rebalanced.

Power Fault Tolerance

As far as the infrastructure is concerned, a fault-tolerant power supply is a most desirable feature. If the power supply to a server is interrupted even momentarily, the server reboots, and the services on the server are not available until the reboot completes. This is considered a major fault because many server applications should not be shut down at all. Most server installations are provided with a power supply backup in the form of a UPS. In addition, some servers are provided with a hot spare internal power supply unit (PSU). A hot spare PSU is a redundant PSU that is on stand-by to take over if the main PSU fails.

Server Fault Tolerance

One way of ensuring that all server software services keep running is to ensure that the server keeps
running without shutting down. To ensure this, you can use two approaches. You can use a server that
has fault tolerant components. Such a server uses dual instances of components that are critical to the
operation. In the event that one component fails, a failover mechanism brings up the second instance of
the component. Another approach is to use several servers combined in a fault tolerant manner. Server
clusters are the predominant way of achieving this.

Fault tolerant components in servers include NICs, PSUs, memory modules, controller cards, CPUs, CPU
fans, RAID controllers, and so on. Usually the extra units are on standby and start working only when
component failure is noted and a failover is necessary. Monitoring systems on the server initiate the


failover. Memory module fault tolerance is provided in a different manner. Special memory modules that
can store redundant data are used. These are ECC RAM modules. RAM is described in Objective 1:
General Server Hardware Knowledge, ECC vs. Non ECC vs. Extended ECC.

CPU fault-tolerance is provided with special servers that provide redundant CPUs. A server with
redundant CPUs provides the same functionality as a server cluster, with some advantages as well as
some limitations. Servers with redundant CPUs contain multiple CPUs, with only one CPU operational at
a time. In case of a failure, a failover CPU takes over. The failover is faster than that provided by a
cluster. A server with redundant CPUs also requires fewer software user-licenses compared with a
cluster.

Server clustering is of two types: stateful and stateless. The goal of stateless clustering is to provide
performance scaling and load balancing. Stateless clustering is used to enhance the scalability and
availability of server applications. Stateless clustering improves a server's performance to keep up with
the increasing demands of Internet-based clients as incoming client requests in stateless clustering are
distributed across the servers. Each server runs a separate copy of the desired server application. If a
server fails, then incoming client requests are directed to other servers in the cluster. Stateless clustering is not relevant to fault tolerance. Fault tolerance is achieved in stateful clustering. As in every type of
clustering, stateful clustering involves connecting multiple computers such that client computers and
users perceive only a single entity. Each server is a node in the cluster. Forms of stateful clustering are as
follows:

• Active/active clustering: All nodes participate actively in the cluster. If one node fails, the
remaining nodes take on the workload of the failed node.

• Active/standby clustering: Nodes are divided into active-standby pairs. If the active node in a pair
fails, then the standby node takes over. This is an expensive solution because the standby nodes
are usually idle.

Storage Fault Tolerance

One of the most failure prone devices in a server is the hard disk. If a hard disk in a server fails, then the
event is quite catastrophic. Without special fault-tolerant mechanisms in place, complete recovery is
rarely possible. In any case, even partial recovery from a disk failure is always time consuming.

Fault tolerance in disk storage is achieved through a scheme known as RAID. Several RAID levels are
defined, each providing a different level of fault tolerance. In the event of failure of a single disk in the
array, the RAID controller regenerates the lost data using the redundant data stored on RAID. The RAID
controller itself is sometimes offered in redundant configuration. The redundant configuration consists of
two controllers meant to be used in a server-cluster. RAID is discussed in the RAID section in Chapter 1.


Backup and Restore

Backup has been discussed earlier in Objective 5: Proactive Maintenance, Backup.

The following types of hardware are commonly used for backing up data:

• Digital Audio Tape (DAT) drives: These are devices that use tape media including 4 mm and 8 mm
DAT tapes. Storage capacities range up to 20 GB for a DDS-4 tape, going up to 35 GB for a DAT-72
(DDS-5) tape. DAT drives are low cost, high performance, and high reliability devices.

• Stationary DAT (SDAT) drives: SDAT is an obsolete DAT standard, which used a very high-speed tape
transport to increase data rates. The technology was later used in digital linear tape (DLT).

• DLT drives: These are devices that use tape media. Capacities up to 320 GB are available in a single
tape. DLT drives are high performance but expensive devices.

• Super DLT (SDLT) drives: These are an enhancement on DLT drives, offering better performance, and
larger storage capacities per tape cartridge while remaining backward compatible to DLT. SDLT
provides storage capacities of 110 GB for SDLT 110/220 drives, 160 GB for SDLT 320 drives, and 300
GB for SDLT 600 drives.

• Linear Tape Open (LTO) drives: These are devices that use a small single reel cartridge tape media.
Capacities up to 400 GB are available in a single tape. LTO, marketed as Ultrium, attempts to be an ‘open standard,’ which implies that tapes recorded on one drive will be fully readable on another drive. Currently, Ultrium offers the fastest performance and the largest storage capacities among popular backup devices.

• Advanced Intelligent Tape (AIT) drive: The AIT tape format, promoted by a consortium that includes
SONY Corp., can store 200 GB per tape. AIT 3 offers speeds between DLT and SDLT drives. The AIT
data cartridge has a built-in memory chip that stores data describing the information stored on the
tape, and the number of times the tape has been reused. AIT drives tend to be expensive.

• Tape libraries: These are sets of tape drives. An automated mechanism loads and unloads tapes to
and from the drives. Capacities of tape libraries range from a few hundred GB to 5 Petabytes (PB). 1
PB = 1,000,000 GB. A tape library has the following advantages over a stand-alone tape drive:

o Automated tape handling: A robotic system stores and loads tapes, thus freeing the
administrator from the task of handling tapes manually. Tapes are changed automatically both
for backup and restore operations.

o Automatic bar code marking: Some tape libraries include the feature of marking tapes with
barcodes. This reduces errors in off-site storage processes.

o Scalability: A tape library can be started with a few drives and a few slots for tape storage. As
requirements increase, drives and storage slots can be added.

o Centralized control: A tape library allows centralized backup for all resources on a LAN.
Multiple drives within a tape library enable writing and reading in parallel. This increases
performance.


However, a tape library tends to be very expensive compared to a stand-alone tape drive.

• Optical Drives: These are disk drives that include magneto optical (MO) drives and CD-Rewritable drives. Capacities range from 600 MB to 5 GB per disk. Speeds of MO drives are slightly slower than those of SCSI hard disk drives. Coupled with the direct access capability typical of a disk drive, MO offers performance and convenience that other backup media, with the exception of LTO drives, cannot match. The low capacity of MO cartridges and the cost per MB of stored data are the shortcomings of this medium. The life of MO media is reportedly very high, approaching 100 years if the media are stored carefully.

Note: The capacities of the drives indicated are their native capacities, indicating the amount of
uncompressed data that each drive can store. The capacity usually doubles if data is compressed while
storing. Unfortunately, most manufacturers of drives and cartridges quote only the compressed capacity,
since it is the more impressive figure. The ‘real’ capacity of any drive or tape is the native, or
uncompressed, capacity.

Traditionally, tapes have been the backup medium of choice, due to their low cost per megabyte. Disk
drives are now being used for backup purposes. The falling cost of ATA drives in the last few years has
encouraged this trend. Tapes still retain their price advantage over disks, and also remain easier to store
offsite because they are more portable. However, for backing up data that need not go offsite, disks have
the advantage of providing random access and extremely high data rates. ATA disks in RAID
configurations are a popular choice for backing up data.

The media used in backups also have a reuse pattern associated with them. This reuse pattern is called
a media rotation scheme, often referred to as a tape rotation scheme.

Tape Rotation Scheme

The most common rotation scheme is the grandfather, father, son (GFS) rotation scheme. Originally
designed for tape backup, it works well for any hierarchical backup strategy. The basic method is to
define three sets of backups, such as daily, weekly, and monthly. The daily, or son, backups are rotated
on a daily basis, with one graduating to father status each week. The weekly, or father, backups are
rotated on a weekly basis, with one graduating to grandfather status each month. Often one or more of
the graduated backups are removed from the site for safekeeping and disaster recovery purposes.
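One simplified way to model GFS classification is sketched below. The promotion rules used here (month-end backups become grandfathers, Fridays become fathers, other weekdays are sons) are one common convention, not the only one; the exact rules vary by site.

```python
import datetime

def gfs_set(day: datetime.date) -> str:
    """Classify which GFS backup set a given date's backup belongs to,
    under one common convention: the last backup of the month is the
    'grandfather', other Friday backups graduate to 'father', and the
    remaining weekdays are 'son' backups."""
    next_day = day + datetime.timedelta(days=1)
    if next_day.month != day.month:   # last day of the month
        return "grandfather"
    if day.weekday() == 4:            # Friday
        return "father"
    return "son"

# A mid-month Friday graduates to the weekly 'father' set, while the
# last day of the month becomes a monthly 'grandfather'.
print(gfs_set(datetime.date(2006, 3, 10)))  # father
print(gfs_set(datetime.date(2006, 3, 31)))  # grandfather
```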

Restorations

It is crucial to test restoration of backed up data periodically. Testing the restoration process is as
important as the backup process.

Restoring a full backup requires using the full backup tape. Restoring an incremental backup set is more
complex. The last full backup tape must first be restored. Then all incremental backup tapes subsequent
to the full backup must be restored. This makes the incremental backup scheme slow in the restoration
process.

Restoring a differential backup set is simpler. The last full backup tape must first be restored. Then the
last differential backup tape must be restored. The differential backup scheme is faster in restore
compared to the incremental scheme, although creating the differential backup set takes longer than
creating the incremental backup set.
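The difference between the two restore sequences can be sketched as follows. This is a minimal illustration; the tape names and the function itself are hypothetical, not part of any backup product's API.

```python
def tapes_to_restore(scheme: str, full_tape: str, daily_tapes: list) -> list:
    """Return the ordered list of tapes needed for a complete restore.
    An incremental scheme needs every daily tape since the last full
    backup; a differential scheme needs only the most recent daily tape."""
    if scheme == "incremental":
        return [full_tape] + daily_tapes
    if scheme == "differential":
        return [full_tape] + daily_tapes[-1:]
    raise ValueError("unknown scheme: " + scheme)

dailies = ["mon", "tue", "wed", "thu"]
print(tapes_to_restore("incremental", "full_sunday", dailies))
# ['full_sunday', 'mon', 'tue', 'wed', 'thu']
print(tapes_to_restore("differential", "full_sunday", dailies))
# ['full_sunday', 'thu']
```

The sketch makes the trade-off visible: the incremental restore touches every daily tape, while the differential restore needs only two tapes regardless of how many days have passed.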


Offsite Storage

The best backup practices in the world will not help if backup tape sets are destroyed by the very disaster
from which you are trying to recover. To protect against such eventualities, backup sets can be stored
offsite. The storage site should be secure to ensure that your data is protected. The backups that are
stored offsite should be current enough to be useful.

Hot and Cold Spares

Hot and cold spares are reliability and disaster recovery aids. A hot spare is a drive that is preconfigured
as a spare drive and connected in a RAID array. If a single disk in the array fails, the hot spare takes over
automatically. The data on the failed drive is regenerated from the redundant data and saved onto the hot
spare, which then takes over the functions of the failed drive. Usually, when the failed drive is repaired
and replaced in the array, it is designated as a hot spare.

A cold spare is a drive that is kept in reserve for manual installation in a RAID array when a drive fails;
unlike a hot spare, it does not provide automatic failover.
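The hot-spare behavior described above can be modeled with a minimal sketch. The class and method names are illustrative assumptions, not a real RAID controller API; in a real array the rebuild onto the spare happens from the array's redundant data.

```python
class Array:
    """Minimal sketch of hot-spare failover in a RAID array."""

    def __init__(self, active, hot_spares):
        self.active = list(active)          # drives currently in the array
        self.hot_spares = list(hot_spares)  # preconfigured standby drives

    def fail(self, drive):
        """On a drive failure, promote a hot spare automatically; data is
        then rebuilt onto it from the array's redundancy."""
        self.active.remove(drive)
        if self.hot_spares:
            self.active.append(self.hot_spares.pop(0))
        return drive  # failed drive goes out for repair or replacement

array = Array(active=["d0", "d1", "d2"], hot_spares=["spare0"])
array.fail("d1")
print(array.active)      # ['d0', 'd2', 'spare0']
print(array.hot_spares)  # []
```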

Hot, Warm, and Cold Sites

Hot and cold sites are replacements for a primary site, which will be used if a disaster brings down the
primary site.

A hot site is a facility rented from a vendor that has the entire infrastructure in place to take over
operations if your site fails. The vendor provides all the equipment, including computers and servers, at
the hot site. A hot site uses expensive software to mirror the state of the original site. If the original site
goes down, the hot site can be up and running with practically no data loss.

A warm site is like a hot site, except that it does not mirror the state of the original site as closely.
Instead, data is backed up to the warm site at short intervals, which makes a warm site less expensive
than a hot site. However, any data generated between the last backup to the warm site and the
occurrence of the disaster will be lost. The cost of maintaining a warm site rises as you shorten the
backup interval.

A cold site is also a rented facility, but the vendor does not provide the hardware. It is up to you to install
the hardware to make the facility operational. The cold site also does not mirror the original site. In case
of a disaster, data must be restored on the cold site before the site can function.

An organization will use a hot, warm, or cold site depending on its requirements. A hot site is used only
for operations that are vital to the organization. A cold site takes the longest to make operational.


Testing the Disaster Recovery Plan

A disaster recovery plan consists of many processes, each of which must work successfully so that the
overall disaster recovery is successful. If any of the processes fail, then the entire disaster recovery
process will probably fail.

In order to ensure that a disaster recovery plan will work when a disaster actually strikes, you must test
the plan to validate its effectiveness.

For fault-tolerant hardware, you may wish to test the effectiveness of the failover mechanisms by
simulating hardware errors. The simulation can be done manually (for example, by switching off a server
in a cluster) or through software (for example, by disabling one NIC in a team). All standby and failover
devices should be tested to ensure that they are able to provide the services that are expected of them.

For backup and restore mechanisms, the backup sets must actually be restored and verified periodically.
These practice runs exercise all the possible points of failure in the backup and restore process. Points
of failure in a backup and restore process are as follows:

• Faulty media: Magnetic tape or other media used in backup may actually contain faults that
prevent successful restoration. Media faults may be due to manufacturing defects, wear and tear
through extended usage, incorrect storage practices, and damage caused by faulty backup
drives.

• Defective backup hardware: The tape drive that is used for the backup and restoration may have
defects that prevent correct restoration.

• Incorrectly configured software: The backup and restore software may be configured incorrectly.
This may prevent correct restoration.

• Incorrect storage practices: The backup tapes may be stored under conditions that cause data
corruption. Storage areas must be shielded from magnetic fields to the extent indicated by the
tape vendor. Temperature and humidity must also be controlled so that the tape remains usable
through its rated lifespan.

For offsite storage, the storage facility must adequately protect the tape and its data. Additionally, the
facility must deliver the backup tapes on demand to your organization within a certain period of time. The
period of time will be indicated in the service agreement that you have with the offsite storage facility.
Testing should confirm both these aspects.

For hot and cold spares, the components must work when required. Testing should ensure that the
spares provide the functionality expected of them.

For hot and cold sites, your site must be operational within the required time. The required time is the
downtime that you can tolerate as per your disaster recovery plan. Testing should ensure that site
downtime does not exceed the planned downtime.


Review Checklist: Disaster Recovery


• Discuss the recovery plan, including the need for a recovery plan.

• Discuss types of fault tolerance.

• Identify backup hardware devices.

• Identify backup and restore schemes.

• Discuss tape rotation schemes.

• Discuss hot and cold sites.


Test Taking Strategies


The Server+ credential identifies individuals who are competent in installing, administering, and
maintaining server computers and operating systems. Candidates are encouraged to have at least 18
months of server administration experience.

The Server+ exam is a proctored exam, which may be taken at a Prometric or VUE testing center.

CompTIA Certification Roadmap


The SK0-002 Server+ exam is the only exam required to achieve Server+ certification. For more
information on this exam and certification, see http://www.comptia.org/certification/server/default.aspx.

A CompTIA Server+ candidate should combine training with on-the-job experience. Many of the exam
questions are based on real-world scenarios, so hands-on experience with servers is vital.

Registering for the Exam


An exam candidate may register for the SK0-002 at one of the following sites:

http://www.vue.com

or

http://www.prometric.com

Resources
The CompTIA Server+ Preparation Guide contains study materials that you may use to prepare for this
exam. These resources include exam objectives, sample tests, and training materials. For more
information, see the Server+ Preparation Guide at
http://www.comptia.org/certification/server/prepare.aspx.

Test Day Strategies


The most important test day strategy is to be thoroughly prepared for the exam beforehand. You must
know the material. Cramming on the day of the exam is not a good strategy for any type of test,
especially certification exams.


General Tips

• Schedule your exam only after you are confident that you have mastered the subject matter.

• Schedule your exam for a time of day when you perform at your best.

• Eliminate all distractions from your testing area.

• Allow 4 hours to complete the registration and exam.

• Eat a light meal beforehand.

• Everything you do has time limitations, so do not let the pressure overwhelm you.

Server+ Specific Tips

Before starting the exam, review your short stack of reserved flash cards or personal study notes to
remind yourself about terms, topics, and syntax that are likely to appear on the exam.

Once you have entered the testing room, you can write down any useful charts that you have memorized
on the scratch paper or erasable board provided by the testing center.

Determine how much time you are allotted to answer the questions. Do not spend too much time on a
given question during your first pass through the exam.

Remember that if you take a break during the exam, the clock keeps running.

If you are disconnected during the exam, you will be able to resume where you left off.

Test Item Types


The Server+ exam contains multiple-choice items. While knowing the technical content for this exam is
the most important step you can take to pass the exam, you should nonetheless understand the
methodology of the item type. Following a strategy of how to answer each type can mean the difference
between passing and failing.

Multiple-Choice

Read each multiple-choice item with the intention of answering it before looking at the alternatives that
follow. Focusing on finding the answer without the help of the alternatives increases your concentration
and helps you read the question more clearly.

Understand that multiple-choice items with round radio buttons require a single response, while multiple-
choice items with square check boxes require one or more responses. If more than one response is
required, pay special attention to the “directive” sentence of the question (“Choose two.”), which
indicates the number of correct answers.

Use the process of elimination when you do not know the correct answer. If the question has a single
answer and four options are listed, eliminate two of those options quickly and make the decision between
the two that remain. This increases your odds to 50/50. Another helpful method is to identify a likely
false alternative and eliminate it. This elimination method is particularly helpful when the item requires
more than one answer.

When two very similar answers appear, it is likely that one of them is the correct choice. Test writers often
disguise the correct option by giving another option that looks very similar.

You can download a free demo on our Web site that mimics the types of questions that will appear on the
exam. Sample questions do not cover all the content areas on the exam.

YOUR GUIDE TO SUCCESS AND ACCOMPLISHMENT

Need to prepare for that career-accelerating IT certification exam?


With Transcender you will walk into the exam knowing you’re prepared.
IT professionals agree! Transcender has consistently been voted the industry's #1 practice
exam. Now we have a suite of products to help you with your IT certification exam preparation.

Whether you’re looking for skills assessment, learning opportunities or exam preparation
products, Transcender has a learning solution geared toward the demands of the IT
professional.

Visit www.transcender.com/products/ to discover all of the Transcender learning solutions available.

TranscenderCert™: Industry-Best Practice Exam Software

TranscenderCert™ products are our industry-best exam simulations that provide realistic simulations of
IT certification exams. Featuring built-in Flash Cards, our award-winning TranscenderCert™ exams are
known as the most comprehensive and realistic available. We even offer a Pass-the-First-Time
Guarantee.

Transcender e-Learning: Quality Online Training Courses

Transcender has partnered with global enterprise-learning leader, Thomson NETg®, to deliver high
quality e-Learning courses to you. With these courses you can be sure you're completely prepared to
conquer even the toughest IT challenge and you can feel confident that you have the extra edge you
need to pass your certification exam.

TransTrainer™: Computer-Based Training Videos

Transcender's computer-based training videos provide concise, comprehensive and step-by-step,
introductory explanations of the exam topics with on-screen demonstrations and real-world examples.
You will learn how to perform the actions covered by the exam and focus on learning the key topics
needed to pass.

Transcender Study Guides: Focused Document-Based Study Aids

Transcender's Study Guides are objective-driven and contain a variety of tools to help you focus your
study efforts. Each Study Guide contains a scope and focused explanation with definitions, in-depth
discussions and examples to help you prepare for your certification exam.

© 2005 Transcender, a Kaplan IT Company. All rights reserved. No part of this study guide may be used or
reproduced in any manner whatsoever without written permission of the copyright holder. The information contained
herein is for the personal use of the reader and may not be incorporated in any commercial programs, other books,
databases, or any kind of software without written consent of the publisher. Making copies of this study guide or any
portion for any purpose other than your own is a violation of United States Copyright laws.
Thomson and NETg are registered trademarks of The Thomson Corporation and its affiliated companies.

Transcender
500 Northridge Road, Suite 240
Atlanta, Georgia 30350
Tel. 678-277-3200, or 1-866-639-8765 Toll Free, U.S. & Canada.
