
IT management best practices for advanced network administration

Systems and network administrators wear many hats throughout the day. From initial configurations to automating routine tasks to monitoring and security, administrators must cover it all. In this class, you'll learn advanced systems and network management best practices, which encompass change control, security, task automation, email management, planning for disasters and more.

Lessons

1. Implementing network configurations and change control: Senior systems administrators are often responsible for the overall organization and controls of the systems a business operates. In this lesson, you'll explore some processes to provide change and configuration controls.

2. Implementing a network security policy: The greatest challenge in using security measures is keeping your networked systems safe from threats while maintaining the systems' functionality and productivity. In this lesson, you'll examine some ways to increase your network's security.

3. Implementing automation of administrative tasks via scripting: One thing most network administrators have in common is a lengthy list of tasks that must be performed repeatedly. In this lesson, you'll examine different types of scripting to tackle these sorts of tasks.

4. Implementing a proactive network monitoring infrastructure: In today's complex and fast-growing IT infrastructure, you must have a proactive monitoring system in place. In this lesson, you'll examine what you can do to stay informed of your systems' status and prevent service interruptions.

5. Managing email services and utilization: Email is a vital service to network users, thus it's a critical system you need to maintain and keep operating in optimal condition. In this lesson, you'll examine ways to manage, protect and secure this vital business resource.

6. Creating and implementing a data disaster recovery plan: The best way to recover from a disaster is to be prepared before it happens. In this lesson, you'll learn how to create a data recovery plan that'll have your critical systems up and running as quickly as possible after a disaster strikes.

Implementing network configurations and change control

Senior systems administrators are often responsible for the overall organization and controls of the systems a business operates. In this lesson, you'll explore some processes to provide change and configuration controls.

Welcome


The evolution of business needs in recent years has greatly changed the landscape that the systems and network administrator operates within. Whereas 10 years ago an Information Technology team may have been responsible for 6 to 12 servers and applications, for example, similar infrastructures today have swelled to 25 or more servers with larger and more demanding applications and networking requirements. How do IT professionals, bound by budgets and expectations, deal with this overwhelming load of network administration and network monitoring? One solution is to streamline your IT environment. This class addresses a wealth of best practices for administrators to complete work more efficiently, incorporate automation when possible and tighten up processes. You'll learn methods of standardization, automation, monitoring and management that will help your organization save time and money.



This class is designed for systems and network administrators in small to medium-size businesses (SMBs) with between 10 and 999 employees. The material presented in this class, however, should be useful for administrators in environments of all sizes.

Each lesson is accompanied by a brief quiz and assignment. It's to your advantage to complete these elements to reinforce important concepts and techniques presented in the lessons and to get the most out of the class.

Here's what to expect in the lessons:

Lesson 1 explores why businesses must accurately track and understand their IT systems. Standardization, change control processes and asset inventory management are solutions that keep costs down and performance up.

Lesson 2 walks you through security systems solutions and best practices. You'll examine the principle of least privilege, in addition to access control systems and security groups. Testing and auditing techniques are also covered in detail.

Lesson 3 covers task automation via scripting. You'll examine three of the most commonly used scripting languages on the Windows platform: Visual Basic Scripting Edition (VBScript), PowerShell and KiXtart.

Lesson 4 dives into the more advanced concepts of systems monitoring, which give you insight into how your systems are actually performing.

Lesson 5 examines the management of email services. You'll examine basic administrative processes and take a look at spam and virus control.

Lesson 6 wraps up the class with a review of disaster recovery planning and implementation. You'll start with an overview of backup fundamentals, and then move into implementation of a sound backup and restore plan for your organization.

This class focuses on systems and network administration of a Windows environment. Therefore, services and applications such as Microsoft Active Directory, Microsoft Exchange Server and Microsoft SQL Server are the primary focus on the software side. On the hardware side, you'll learn about rack-mounted servers and bladed solutions, modular storage array enclosures and similar equipment.

Let's get started with the topics in Lesson 1.

The case for standardization

With the explosion of security and access control requirements, and the reliance on email as a business-critical communications tool, server counts have skyrocketed. The environment that supports and works with servers (power, cooling, network connections, storage and backup devices) has also grown more complex. Because administrators are responsible for keeping the infrastructure operational to meet employee and business needs, standardization is necessary to reduce inadvertent errors and maintain flexibility.


Understanding standardization

Standardization refers to deploying servers and related systems into a production environment with the same initial configuration, which usually includes the operating system, antivirus solution, patches and updates.


In addition, standardization requires adaptability to changing business needs or technology solutions. For example, it's impractical to implement a standard for building servers one year and expect it to be applicable 5 years later. As hardware, software and the overall needs of the business change, your standards must also change. Changes don't invalidate or eliminate the need for standards in the first place. You need to be rigid in your implementation of existing standards and flexible in your ability to change the standards as the current situation dictates.


A standardization plan should include guidelines for performing system builds in a repeatable manner. The tendency in many organizations, however, is to build new systems as they come in for deployment, with little to no standardization. This lack of standardization can be resolved with prior planning and training.

Prior planning

There's a common saying, "Fail to plan, plan to fail." This is certainly true when it comes to standardization of business practices and implementation procedures, especially with system builds and implementations.

Lack of standardization can cause a variety of problems, from minor to substantial. It might result in a system without all of the proper configuration changes, or one that isn't fully patched and therefore prone to viruses and worms. Other possible outcomes include a system that performs its function but not optimally, or a system with random, untraceable service interruptions that seem to correct themselves without any administrator intervention. Although you can't overcome all of these issues by implementing a standards-based build and deployment process, you can resolve most of them. At worst, you might have three or four different versions of your standard over a period of time versus every server system deployed differently from the others.

Training

Proper training ensures everyone on the IT team has the required minimum level of knowledge to perform their functions properly. Training also provides the opportunity for question-and-answer sessions among IT personnel, the outcome of which can precipitate necessary changes to build guidelines.

Attaining standardization

One method of achieving standardization is to focus on three aspects of systems management:

Standardized build guidelines
Change control processes
Asset inventory management

Collectively, they give you a complete picture of what you have, where it is and why it's configured a certain way.

The remainder of this lesson addresses each of these components in detail, beginning with standardized build guidelines.

Documenting and implementing system build guidelines

Organizations often have difficulty creating a server system build guide, usually because they don't know where to start or who to involve. Those issues are easily solved: start by holding a meeting with everyone responsible for building, deploying and supporting the server systems to discuss the current processes that each person follows. Assign a person the task of creating build guidelines, and then have that person gather information from meeting participants and begin creating detailed steps that must be completed.

Determining which information to include in build guidelines


Although each company's build guidelines differ, most guidelines should include the same general information as described in the following sections.

Naming conventions

Use standard naming conventions to control names and as a means of quickly identifying a server's function. For example, a server name of MEMDC31 indicates it's an Active Directory domain controller (DC) located in the Memphis (MEM) data center and assigned "31" as a unique identifier. If the server's name is YODA, you can't determine its function or location from the name alone.
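If your names follow a strict pattern, scripts can decode them automatically. Here's a minimal PowerShell sketch of that idea; the three-letter site code, alphabetic role code and numeric suffix are assumptions based on the MEMDC31 example, not a prescribed format:

# Decode a server name assumed to follow the SITE + ROLE + NUMBER pattern.
$serverName = "MEMDC31"
if ($serverName -match '^(?<site>[A-Z]{3})(?<role>[A-Z]+)(?<id>\d+)$') {
    "Site: $($Matches['site'])  Role: $($Matches['role'])  ID: $($Matches['id'])"
}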

Default operating system

Your choice of server operating system is largely dependent on the server's ultimate role. However, selecting a default operating system helps to standardize a server environment, and makes licensing easier and more cost-effective. For example, if you can meet your business needs using Windows Server 2003 Standard Edition, you don't need to spend additional money deploying Windows Server 2003 Enterprise Edition on every server. The right server operating system also depends on your hardware needs. For example, 32-bit systems have a random access memory (RAM) limit of 4 gigabytes (GB), but 64-bit systems enable you to escape the 4 GB RAM limit. Determine your hardware requirements first to narrow your operating system choices.

Default components and features

A best practice is to install only necessary components on a system initially, and then install additional components based on a server's role. For example, don't install Microsoft Internet Information Services (IIS) on every server routinely; install it just on web or .NET servers. Whatever combination of components and features you decide are foundational, make it a standard and implement it across all of your initial server builds. Then modify each configuration with specific applications.
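One way to verify that a new build matches the standard is to compare it against a reference system. The following PowerShell sketch records the services running on a reference build and flags differences on a new server; the file path is hypothetical, and the same idea applies to installed components and features:

# On the reference system (run once): record the standard set of running services.
$baselineFile = "\\fileserver\it\BaselineServices.txt"
Get-Service | Where-Object { $_.Status -eq "Running" } |
    Select-Object -ExpandProperty Name | Set-Content $baselineFile

# On a newly built server: flag running services that aren't in the standard set.
$baseline = Get-Content $baselineFile
Get-Service | Where-Object { $_.Status -eq "Running" -and $baseline -notcontains $_.Name } |
    Select-Object Name, DisplayName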

Server labeling and cable management

Proper labeling lets you quickly determine key pieces of information about a server system. At a minimum, label every server (front and back) with its name, asset tag and serial number. Other items that are helpful to include are the static Internet Protocol (IP) address and the primary contact's name and phone number.

Similarly, cable labeling is highly useful and becomes increasingly important as you fill entire server racks. Label both ends of all cables running to the server for easy identification and to prevent accidental pulling of the wrong cable. In addition, proper cable installation keeps each rack orderly, so include expectations for cable installation and management in your system build guidelines. Best practices for installing and managing cables in a server rack are:

Use cable management arms to reduce stress on cables and keep them organized.
Don't use more cable than needed, especially network cables. For example, if you need a 30-foot network cable between your server and switch, don't use a 50-foot cable. The extra cabling has to go somewhere, and it'll make a mess wherever it ends up.

Responsibility for review and approval of server systems

Enforcing a mandatory review and approval phase in the build and deployment of every server system can prevent errors from making their way into your production environment. At a minimum, require at least one review of each new server system by an administrator who didn't participate in the build, prior to placing the server into production.

Compiling and distributing guidelines

Once you've gathered all of the necessary information and established standards and procedures, you need to fully document them in a way that's easy to follow. Many organizations prepare a Microsoft Word or Microsoft Excel document with a step-by-step build and deployment process. The level of detail in build guideline documents varies from one business sector to another. For example, military guidelines are generally highly specific and leave no room for individual decision making. In other industries, the guidelines are simply a list of items that must be completed, enabling systems administrators to use their own discretion in certain circumstances. Ensure the final product is easily accessible to all administrators and technicians who need to use it. You could distribute hard copies to each administrator annually, and keep a "live" version on a network server.

Reviewing and updating build guidelines

As mentioned, system build guidelines will change over time as hardware, software and business needs change. Once you've established and distributed a set of build guidelines, someone must be responsible for:

Evaluating the guidelines and making required changes
Ensuring they're followed
Ensuring that all impacted systems administrators are aware of the most recent version

Assign this task to a specific person or group, set up a timeline for reviews and make it an item to discuss at regular operations meetings. Now that you know how to create and manage system build guidelines, the next logical step is to examine change control.

Implementing a change control process

Most server systems don't remain in the same state throughout their lifecycle. In fact, general-use systems shouldn't remain static, even if the only changes are the installation of routine patches and updates for the operating system and applications. Some server systems are installed, put into production and rarely touched again. These systems tend to be isolated (off the network) and dedicated to a specific business application. Examples include systems that control and operate specialized industrial machinery, and biological or chemical instruments.

Just as you've taken steps to ensure your server systems are built and deployed from a standardized approach, you need to develop a process to track and approve changes. This is referred to as a change control process.

Understanding the change control process

Change control processes vary widely between organizations; however, they all share a few common features:

A change control process ensures that all responsible and impacted parties are aware of changes to be made and why.
Changes are tracked and recorded in a central location.
Changes must be approved by someone.
Changes must be implemented by someone.

Change control consists of two basic files: the change control log and the change request form. You can create and maintain a change control log using an Excel file and a change request form created in Word.

The change request form should be completed by the person requesting the change, which isn't always a systems administrator. Consider the possibility in which a developer in your organization wants to include a new menu option in an application or a new input field on a web form. Changes will need to be made to the underlying application code, and thus the developer would be the logical employee to complete and route the change request form for approval.

Change request forms should contain the following basic information, at a minimum:

Server system name
System function (that is, the application or service it provides)
A description of the change along with full documentation to support the change
The business impact during the change, such as the application being unavailable in whole or part
The estimated length of time to complete the change
The date and time when the change will be made
A testing plan to validate the change was made successfully
The procedure for backing out the change if the change is unsuccessful
Name and signature of the person requesting the change
Name and signature of the person who will make the change
Name and signature of the person approving the change
Name and signature of the person who tests and validates the success of the change

The same information should also be present in the change control log. However, the change control log should contain a record of all changes made, whereas the change request form is used for a single change.
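If you keep the log as a CSV file rather than an Excel workbook, entries can even be appended from a script. This sketch assumes PowerShell 3.0 or later (for Export-Csv -Append) and a hypothetical log path; the column names are only an example drawn from the fields above:

# Append one change record to a shared change control log.
$logPath = "\\fileserver\it\ChangeControlLog.csv"
[PSCustomObject]@{
    Date        = Get-Date -Format "yyyy-MM-dd HH:mm"
    Server      = "MEMDC31"
    Change      = "Applied Service Pack 2"
    RequestedBy = "J. Smith"
    ApprovedBy  = "T. Jones"
    MadeBy      = "A. Admin"
} | Export-Csv $logPath -Append -NoTypeInformation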

Participants


You must involve many employees within the business to successfully implement a change control process. The following list represents some of the more common groups involved:

Systems administrators: They submit their own change requests (for example, to apply a service pack to an application), and they complete most requests because they have the proper level of administrative permissions on the affected systems.

Application developers and analysts: They submit change requests and provide the relevant application code update to a systems administrator.

Training and documentation specialists: They might be involved if a change will impact system operations or the end-user experience. For example, if a developer wants to include a new menu option in a business application, someone needs to educate the end users about the change and how to use it.

Managers and team leaders: They often provide signatures of approval on change request forms and ensure that each request has been logged in the change control log.

Directors and higher level management: They generally review the change control log and discuss past and future changes (and their associated outages) during regularly scheduled meetings.

These groups of employees collectively make up a "change control board" or "change control group." Each group is critical to the overall change control process.

Using a change control process can be tedious. However, to know the status of each server system at every stage throughout its lifecycle, and to help quickly diagnose sudden system instability issues, you need to use a change control process along with system build guidelines.

Implementing an asset inventory process

The final piece of the systems management triangle is the asset inventory process. The next section covers this topic in detail.

Asset inventory management is a formalized process by which you track physical assets, such as server and storage systems, from the time they arrive at your facility until they're transitioned through disposal or resale. You can purchase an asset inventory software application to track assets, or use a Microsoft Access database or Excel file.



Some organizations combine build guidelines with their asset inventory solution to provide a single source of information about each piece of equipment from cradle to grave. As part of entering an asset into the inventory tracking solution, you should complete a series of signoffs indicating the build steps completed. This information is then available to anyone in systems support, now and in the future.


Whichever approach you choose, you should include the following items in your asset inventory:

Server system name and IP address
Server system asset tag, which is usually assigned by financial services for depreciation tracking
Server system serial number and part number for easier warranty verification during support calls
Server system model, such as ProLiant DL380 G5 or ProLiant BL460c
System priority, such as:
  Tier 1: Any problem that requires a 2-hour response
  Tier 2: Any problem that requires an 8-hour response
  Tier 3: Any problem that requires next-business-day response
System usage, such as production, development, testing, disaster recovery, retired or not in use
Contact name and phone number for the responsible systems administrator or group, and application administrator or group
Server system purchase date, warranty start date and warranty end date
Server system retired date (to be entered when a system is retired)
Server system disposal date (to be entered when a system is physically disposed of via resale or a disposal process)
Application(s) installed and function
Building where the server system is located and the specific server rack
Enclosure the server or storage blade is installed in, if applicable
Power distribution unit circuit number that feeds the server or server rack, enabling you to determine which systems might be impacted when a power circuit fails in the data center
Version and service pack level of operating system (for licensing)
Operating system and other application licensing status (for licensing renewal and compliance)
Date system was put into production, and the name of the administrator putting the system into production
Name of administrator or manager who reviewed the system prior to release into production
Backup status and configuration: which application provides backups and how often they're performed. This should be entered for the operating system and any application-specific backups, such as Exchange Server.
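To make the idea concrete, here's a sketch of a single inventory record kept in a CSV file. It assumes PowerShell 3.0 or later, and every value shown (paths, names, tags) is hypothetical; a database or dedicated inventory application scales better than a flat file:

# Append one asset record to a shared inventory file.
$inventory = "\\fileserver\it\AssetInventory.csv"
[PSCustomObject]@{
    Name         = "MEMDC31"
    IPAddress    = "10.1.10.31"
    AssetTag     = "IT-004521"
    SerialNumber = "USE912N45K"
    Model        = "ProLiant DL380 G5"
    Priority     = "Tier 1"
    Usage        = "Production"
    Contact      = "Directory services team, x4410"
    WarrantyEnd  = "2011-06-30"
    OS           = "Windows Server 2003 Standard SP2"
} | Export-Csv $inventory -Append -NoTypeInformation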


Before implementing an asset inventory management solution, talk to all interested parties to ensure that all desired data fields will be included. As part of the initial implementation, train all employees who will use the system to ensure everyone is up to speed at the same time.


Moving on

In this lesson, you learned methods of standardizing server systems, and the importance of build guidelines, change control processes, and a properly maintained asset inventory management system. In Lesson 2, you'll learn security basics and best practices. Before moving on, complete the assignment and take the quiz for this lesson.

Assignment #1

In this assignment, you'll research standardization information and then begin creating build guideline documentation. Follow these steps:

Researching standardization

1. Using a web browser, search for understanding inventory configuration IT asset management and read two of the top five hits.
2. Visit the following websites and browse the information:

Wikipedia.com: Configuration management
Informit.com: What is Configuration Management?

Compile notes about significant features and details as you visit each website.

Creating standards and documentation

Analyze the system build procedures in your organization, and then create draft guidelines for standardized processes. If you already have build guidelines in place, review them for structure and accuracy.

Quiz #1

Question 1: When planning a standardized system build guideline, which items should you include? (Check all that apply.)
A) Base components and features to be installed
B) The default operating system
C) Type of data protection (backup and restore) to be performed
D) Server serial numbers

Question 2: True or False: If you have system build guidelines in place, you can eliminate the need for a server system review prior to deploying it into a production environment.
A) True
B) False

Question 3: Who should be involved in implementing a change control process? (Check all that apply.)
A) Managers and directors
B) Systems administrators
C) Application developers
D) Training and documentation specialists

Question 4: An asset inventory management process can provide which types of information? (Check all that apply.)
A) Who made the last change to the server system using the change control process
B) Server system warranty expiration dates
C) Who should be contacted if a problem or question about the server system arises
D) Naming conventions

Implementing a network security policy

The greatest challenge in using security measures is keeping your networked systems safe from threats while maintaining the systems' functionality and productivity. In this lesson, you'll examine some ways to increase your network's security.

Welcome back. In Lesson 1, you learned methods of standardizing servers and related systems, and the importance of build guidelines, change control processes and a properly maintained asset inventory. In this lesson, you'll learn security basics and best practices. The first topic you'll explore is risk, and how to categorize the security risk involved when an account is compromised. Security best practice is more than a laundry list of dos and don'ts. It's a conceptual structure that organizes the tasks necessary to establish, maintain and modify the procedures involved in the security of your network and server systems.

Understanding security risks

Every company should have a security plan in place that outlines best practices, expectations, and details about the level of security required for systems and networks. Before you begin configuring security, however, you need to understand the levels of risk represented at different access points to the network. Although a risk analysis doesn't uncover every vulnerable point or account for every conceivable method of attack, it can help you pinpoint likely entry points, assign a risk level and suggest an appropriate means of defense.


Categorizing risk

You can categorize risk levels in a variety of ways. For purposes of this class, risk is categorized at levels 1, 2 and 3 (with 1 being the highest risk and 3 the lowest) and described as follows:

Risk level 1: Systems and data accessed or damaged at this level would result in a catastrophic loss of productivity and security to the organization. The lost data and services would take an excessive amount of time and effort to recover and restore, or might be impossible to restore, and services and equipment require major repair or complete replacement.

Risk level 2: Systems and data accessed or damaged at this level would represent a moderate impact on productivity and security. Any lost or damaged data or compromised systems require an intermediate amount of time and effort to restore and recover. The damage would be significant but not catastrophic.

Risk level 3: Systems and data accessed or damaged at this level would be fairly easy to restore. If these areas were attacked, or if data were lost, it wouldn't represent a significant loss of productivity, business security or profit. For example, consider a situation in which data saved to a file-sharing server over the last 24 hours was corrupted. This is the equivalent of someone accessing a folder of electronic marketing materials from several vendors.

The definition of "excessive amount of time" varies depending on how much income an organization loses for each hour of system or network unavailability. For example, how much would it cost Amazon.com if its database systems were unavailable for an hour or two?

The most vulnerable elements within a network are network services, resources such as Domain Name System (DNS) or authentication, and data.

Considering the impact of disruption

It's also important to consider the level of impact that disruption of network services would have on the business environment. Risk levels correspond to how much the business depends on access to these services or applications. Here are some examples of typical network services and applications:

Email is usually considered a level 1 risk by an organization's executives and users. In many businesses, email has supplanted the telephone as the primary means of business interaction, internally and externally.
The compromise of a firewall, router or core layer network switch is considered a level 1 breach.
Data and database servers can be risk level 1 or 2, depending on how significant that information, and access to that information, is to the customer.
Server systems that provide Dynamic Host Configuration Protocol (DHCP) or DNS are a good example of a level 2 risk. Although the temporary loss of those services would make a significant impact, they can be recovered in a reasonable amount of time.

Now that you understand security risks, let's take a look at methods to enforce stronger security in your IT environment. The next section covers one of the essential security measures every company should use: applying the principle of least privilege.

Applying the principle of least privilege

The principle of least privilege is used to restrict access to data and systems. It requires that an administrator grant the minimum level of permissions to each user that still enables her to perform her job efficiently and correctly. There are no predefined guidelines you must follow, and the principle applies to ordinary users and systems administrators alike. As an example, consider a data center operator whose primary function is to interact with your company's data protection application. She needs the ability to log on to servers locally at the console and interact with the backup and restore application, but she doesn't need to create new users or install applications. You implement the principle of least privilege by assigning her Active Directory domain account to the local Backup Operators group on the server systems on which she performs this function. Limiting user permissions and access levels is a best practice for preventing potential disasters.
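As a sketch of that example in PowerShell, the following adds the operator's domain account to the local Backup Operators group on one server. The CONTOSO domain and dcoperator account names are hypothetical; you'd run this (or loop it over a server list) on each server she supports:

# Add a domain account to the local Backup Operators group via ADSI.
$group = [ADSI]"WinNT://$env:COMPUTERNAME/Backup Operators,group"
$group.Add("WinNT://CONTOSO/dcoperator")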



An administrator, when attempting to make implementation easier, might incorrectly add this data center operator's account to the Domain Admins security group, granting her full access to the entire network. Or, the administrator might add the operator's account to the local Administrators group. Although the second configuration is more secure, it doesn't prevent her from making inadvertent changes to the systems she's logging onto. By using the principle of least privilege, thereby keeping the data center operator in the local Backup Operators group on the required systems, you effectively prevent any accidental or internal changes from occurring. Alternatively, consider the scenario of your own network account. Are you a member of the Domain Admins security group? Perhaps you even have Enterprise Admin and Schema Admin permissions, depending on the organization of your IT environment. If you have these elevated permissions, are they assigned to a specialty account that's different from your ordinary user account? If you perform routine, non-administrative tasks using a highly privileged account, you're not following the principle of least privilege. Systems administrators should have two separate network accounts in Active Directory:

An ordinary user account: Use this account to log on to your workstation, check your email and use the internet.

A privileged account: This account holds membership in privileged security groups. Use this account when creating new user accounts, modifying a server's configuration and performing other security and management-related tasks.

Because malware runs in the context of the currently logged on user, it'll be less effective in making changes to a computer system if the logged on user has little or no elevated privileges. A compromised network account is only as valuable to the attacker as the level of permissions and privileges it has. By ensuring your accounts with elevated permissions are protected with strong passwords, and by not using the account for everyday work, you can help to protect that account from being discovered and compromised. Next, learn how to plan and configure access controls to enforce security.

Enforcing access controls to servers and data

As discussed previously, a systems administrator is responsible for determining the level of access various users and departments have to devices, services and applications on a network. The principle of least privilege should be your guide when making these decisions; however, you must plan, configure and enforce the access controls according to business needs.

Analyzing and planning access controls

You must first analyze your environment to determine what you have to secure and who should have access to it. Just as you wouldn't make a data center operator a member of the Domain Admins group, you shouldn't grant a user or group the ability to modify business data if they only need to read it. Consult the employee or department that "owns" the data to accurately determine who needs a particular level of access. If you use NT File System (NTFS) and/or Windows share permissions to control access to data and applications, you should be aware of how they're implemented. Consider the inverse triangle depicted in Figure 2-1, which represents a data access hierarchy for a fictional shared folder, T:\FinanceDepartment\.

As a systems administrator, you should consider what's best for the business from the perspective of keeping server and network systems operating at peak condition and providing uninterrupted access to their data and services.

Understanding parent-child permission inheritance

On a Windows file share, by default, files and folders below the root (children) are automatically set to inherit the permissions from above (parent). Additionally, you must have access to the parent folder to access data contained within a child folder unless you have a direct path via a mapping or Universal Naming Convention (UNC) path to access the lower-level child folder. Using the model of a building as a rough analogy, it makes sense to allow everyone into the lobby of the building. You give only some people access to the fifth floor and an even smaller group access to a specific room on the fifth floor.

The use of Windows share permissions isn't recommended as the only means of controlling access to data and services available on your network. Windows share permissions have two drawbacks: they only apply when access is over the network (they don't offer local access protection), and they aren't as granular or fine-tuned as NTFS permissions. A best practice is to use a combination of NTFS and Windows share permissions.

Figure 2-1: Assigning data access must take into account the hierarchy of the data.

To effectively apply access controls to the data contained within that folder, allow the most access at the top of the folder structure. In this example, let's assume that all members of the finance department have read/write access at the T:\FinanceDepartment\ level, and that all finance department employees are also part of a security group named Finance Department. You should use the Finance Department group to grant read/write access.

Moving one level deeper, to the T:\FinanceDepartment\BudgetData\ folder, perhaps only a certain group of employees within the finance department need to make changes to the data or create new data. In that case, you would need to perform three actions:

1. Create a new security group containing the specific employees who need read/write access to the data. Use a descriptive name for the group, such as FIN-BudgetData. The name indicates the group is being used to secure a folder named BudgetData somewhere in the finance department's data store.

2. Turn off inheritance of permissions at the T:\FinanceDepartment\BudgetData\ folder level, being sure to copy the existing permissions rather than removing them. A dialog box should appear, prompting you to make a choice.

3. Reduce the permissions for the Finance Department security group, allowing only read access to the data. After that, add the newly created FIN-BudgetData security group to the folder's access list and grant that group read/write permissions.
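Steps 2 and 3 can also be scripted. Here's a PowerShell sketch of the same actions, assuming the T: drive and a hypothetical CONTOSO domain holding the groups from the example; in practice, many administrators perform these steps in the folder's Properties dialog box instead:

# Step 2: turn off inheritance on the folder, copying the inherited entries.
$folder = "T:\FinanceDepartment\BudgetData"
$acl = Get-Acl $folder
$acl.SetAccessRuleProtection($true, $true)

# Step 3: reduce the Finance Department group to read access...
$read = New-Object System.Security.AccessControl.FileSystemAccessRule("CONTOSO\Finance Department", "ReadAndExecute", "ContainerInherit,ObjectInherit", "None", "Allow")
$acl.RemoveAccessRuleAll($read)   # drop any existing entries for the group
$acl.AddAccessRule($read)

# ...and grant the new FIN-BudgetData group read/write (Modify) access.
$write = New-Object System.Security.AccessControl.FileSystemAccessRule("CONTOSO\FIN-BudgetData", "Modify", "ContainerInherit,ObjectInherit", "None", "Allow")
$acl.AddAccessRule($write)
Set-Acl $folder $acl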

In three simple steps, you lock down a child folder (and all of its child folders and files) so that only a certain group of employees within the finance department can make changes or add new data. You can restrict access even further to protect historical data. For example, let's assume you keep annual financial data in folders such as the following:

T:\FinanceDepartment\BudgetData\Closed\FY07: Fiscal year closed
T:\FinanceDepartment\BudgetData\Closed\FY08: Fiscal year closed
T:\FinanceDepartment\BudgetData\FY09: Fiscal year open

The FIN-BudgetData group already has read/write access to the data in the FY09 folder. To prevent inadvertent changes to the data from fiscal years that are closed, allow only read-only access to the Closed folder.

By starting with the least restrictive access at the top of the folder hierarchy and working down to progressively restrictive levels of access, you can accomplish your goals through careful and judicious use of Windows security groups and NTFS permissions assigned to those security groups. You can often apply the same process and model to user access to network services. With the "how-to" information under your belt, let's move on to learning about information security groups and the types of tasks they perform.

Creating an information security group

You might have an information security analyst or manager as part of your IT staff; however, network security is everyone's responsibility. If you have a security chief, that person should create an information security group whose task is to review significant issues regarding network security and create relevant policies and procedures.

An information security group could be composed of your entire IT staff or a subgroup of your staff, depending on the size of your department. The senior systems administrator should be a regular member of that team or may choose to receive reports after every meeting. The information security group, if it's a subset of your overall staff, should interface regularly with the entire IT department to advise them of the current security status of the network and address any questions that arise.


Exploring information security group responsibilities


The information security group is responsible for:


Setting and updating network security policy for users and administrators
Establishing, monitoring and testing network security procedures

This group is your first line of defense if a security incident occurs. If your staff is relatively small, you'll likely wear the security chief hat, and your entire staff will comprise your security group. Although it's desirable to prevent any network intrusion, it's ambitious to assume you can stop them all. Regardless, your information security group should develop the most effective preventive measures available for your network. Basic proactive tasks of your information security group include:

Changing passwords on all network devices and server systems regularly
Limiting access to network devices to necessary personnel only
Setting and changing firewall and Simple Network Management Protocol (SNMP) configurations
Creating and managing access control lists (ACLs) on network equipment, such as routers and switches, as well as server folder structures
Evaluating and installing updated software, particularly security patches

You should always test any software upgrades or updates in a test environment that's isolated from your production network. After you observe the response of a patch or hotfix and eliminate potential problems, you can update your entire network.

If you suspect that an unauthorized member of the IT staff or another department has obtained access to a device, change the password immediately and conduct a review of how the breach occurred. Even a relatively well-meaning person can inadvertently configure a switch or router incorrectly, bringing portions of the network to its knees.

Reacting and responding to intrusions

Security monitoring is similar to network monitoring. Whereas regular network monitoring detects significant changes in operations, security monitoring detects changes that may indicate a breach of network security.

The key to a reactive security measure is quick detection of and response to intrusions. The response is to recover the lost data or service as quickly as possible, and determine the point of entry and correct the situation that allowed unauthorized access. The affected device or systems might have to be shut down to prevent further access until the problem is corrected. You must notify the relevant managers and legal staff, and in some cases, law enforcement. Determine just how much damage was done by reviewing all records of the event, including logs, active user accounts and "sniffer" traces. Log files and other records may contain information about the current incident and a history of similar attacks not previously detected. You might have to limit user accounts or even temporarily disable internet access or access to a specific application or service, depending on the exact nature of the attack and the likelihood of data loss. Also, even if you believe only one area of the network was affected, review all other systems and look for signs of intrusion. Your information security group should respond first and be available 24/7 until the problem is resolved.

Testing and auditing security

Once you've formed a security team and assigned security tasks, you'll need to know how to test any security measures you've put in place. The next section provides techniques for testing and auditing your organization's IT security.

Once you implement security measures, you need to test them to determine whether they respond as predicted. You should, at a minimum, test network security after you initially establish your system and anytime you make a change. It's better to run tests regularly, even if you haven't recently modified the system. Subtle changes to the network may have occurred over time and had an effect on security.


One important consideration is who will perform security testing. The natural candidates are in your internal information security group. Because they designed and implemented the system, they're well positioned to know what to test. The advantages of using your own group are to save time and money, and to leverage internal (trusted) staff to do the job. A disadvantage is that your team might only test for attacks they anticipate; because of their position, they don't have an outsider's perspective of the network. There are a variety of techniques companies use to test network security; the following sections describe the most common methods.


Vulnerability scanning

This is an advanced form of port scanning that not only scans ports and hosts but also identifies the associated vulnerabilities. This type of scan also attempts to provide a remedy for the detected vulnerability rather than having a technician or administrator interpret the results as they would when performing a standard port scan.

Penetration testing

This form of testing attempts to bypass or otherwise breach the security measures you've put in place, and it can detect previously unknown access points to the network that could be exploited by an attacker. A penetration test is time- and labor-intensive, and great care should be taken to ensure the test doesn't inadvertently cause real damage to systems or data. Perform penetration testing only after considerable planning and approval by senior staff.

Alternatively, you could hire an outside security consultant to perform a security audit on your network. They'll attempt to penetrate your network as effectively as they can and might find vulnerable areas you wouldn't ordinarily consider. Once you receive their report, you can make the recommended changes, providing for a higher level of security. Sometimes, an outside intruder called an ethical hacker might breach some part of your system. When you detect and question an ethical hacker, the person might explain that the purpose of the attack was only to reveal your network's vulnerabilities. Unless you hired this person as a consultant to perform this type of testing, generally consider their actions to be unwanted and illegal.

Although hiring an outside security consultant can be expensive, consider the potential costs of an undetected security hole that results in an intrusion in which all customer records are deleted.

Virus detection

This is a test most often performed on email servers or servers that specifically scan for viruses as traffic enters a network. A complete test of this system isn't always possible because new malicious software is constantly being developed or modified and released into the wild. In a virus detection test, you introduce select viruses to the server to determine if they're detected and isolated, and to ensure the system continues to function. Never conduct this type of test in your production environment. You should perform virus detection testing only in a test environment that mimics your actual email server or firewall. An unsuccessful test on your production system in which the virus goes undetected can result in an infection of your entire system.
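A safe way to run such a test is with the industry-standard EICAR test file: a harmless text string that antivirus products agree to detect as if it were a virus. Here's a sketch, again only for an isolated test system; the share path is hypothetical:

# Write the EICAR antivirus test string to a file the scanner should catch.
$eicar = 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*'
Set-Content -Path "C:\TestShare\eicar.txt" -Value $eicar
# A working scanner should quarantine or delete the file almost immediately.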

File-integrity testing

File-integrity checkers examine files and databases to determine whether unauthorized changes have been made, which can indicate an intrusion or data corruption. Checkers calculate and save a checksum for every file in the system in its database, and checksums can be regularly recalculated to determine whether an unauthorized change has occurred. To effectively use this tool, you must first establish a baseline for the data, which must be secure up to that point. If you establish a baseline for the integrity checker on data that's been compromised, subsequent test results won't be reliable.
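Here's a minimal sketch of the baseline-and-recheck cycle in PowerShell; it assumes version 4.0 or later (for Get-FileHash) and hypothetical paths, whereas dedicated file-integrity products add secure baseline storage and scheduled comparisons:

# Record a baseline of checksums (run once, while the data is known good).
$dataPath = "D:\ImportantData"
$baseline = "C:\Baselines\ImportantData.csv"
Get-ChildItem $dataPath -Recurse -File | Get-FileHash -Algorithm SHA256 |
    Export-Csv $baseline -NoTypeInformation

# Later: recalculate and list files whose hashes no longer match the baseline.
$current = Get-ChildItem $dataPath -Recurse -File | Get-FileHash -Algorithm SHA256
Compare-Object (Import-Csv $baseline) $current -Property Path, Hash |
    Where-Object { $_.SideIndicator -eq "=>" } | Select-Object Path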

Intrusion detection

Intrusion detection (ID) is a method of testing and monitoring attempts to breach security based on changes in network activity. The changes you monitor are those usually associated with a network attack, as opposed to other changes related to general network performance. Intrusion detection can be host- or network-based. To use host-based ID, install ID software onto the device you want to monitor and then use log files or auditing agents to collect and review data, looking for possible intrusions. Network-based ID monitors traffic on the network-segment level rather than an individual device, looking for patterns that indicate a security breach.

Implementing strong passwords

One strong security measure is to implement and enforce strong password policies on your network. You should have a written policy regarding how to set strong passwords, and then train users how to follow it. In addition, Active Directory provides basic support for strong password policies, which prompts users to create appropriate passwords and change them regularly. If you want more options and multiple password policies, consider a product such as Specops Password Policy from Special Operations Software.

You can also use one or more password-cracking programs on your network to detect users who have set weak passwords. A password-cracking program also verifies that users with sensitive access to network devices and servers have set their passwords to a sufficient complexity that'll prevent them from being easily discovered. Password cracking, however, makes many users uncomfortable, so consider it the last means by which to enforce strong passwords on your network.

Moving on

In this lesson, you learned basic guidelines and steps you can take to increase and validate the security of your network. In Lesson 3, you'll see how automation of tasks via scripting can make you more efficient and provide predictable, repeatable results. Before moving on, complete the assignment and take the quiz for this lesson.

Assignment #2

There are several applications available that help you increase systems and network security. Some are free whereas others are fee-based. To learn more about analyzing systems and network security, and to build a security toolkit, explore the following:

Microsoft Baseline Security Analyzer: This popular and straightforward tool enables you to scan your systems to detect security problems.
Security Configuration Manager Tool Set: This suite of security tools includes security templates, a configuration and analysis utility and much more.
Top 100 Network Security Tools: Use this website to download the most commonly used free and mostly open source security tools, such as Nessus, Wireshark, Snort and more.
SANS Institute free tools list: The experts at SANS provide a wealth of free, downloadable security tools.

Quiz #2

Question 1: True or False: Under the principle of least privilege, all users should be able to perform their jobs using a standard set of permissions and privileges.
A) True
B) False

Question 2: Which of the following are legitimate reactive security measures? (Check all that apply.)
A) Notifying the relevant managers and legal staff
B) Disconnecting your network from the internet
C) Contacting law enforcement agencies to report the attack
D) Shutting down a compromised server or system

Question 3: True or False: You can monitor an entire network segment by installing intrusion detection software on an individual host.
A) True
B) False

Question 4: Regarding access controls, why is it better to configure less-restrictive permissions higher in the folder hierarchy and more restrictive, if needed, lower in the folder hierarchy?
A) You can't enable security groups higher in the hierarchy.
B) You can't enable restrictive permissions at a high level in a folder hierarchy.
C) Files and folders inherit permissions from the parent folder above them; therefore, this method enables you to easily configure the permissions you need.
D) Files and folders don't inherit permissions from the parent folder above them; therefore, this method enables you to easily configure the permissions you need.

Implementing automation of administrative tasks via scripting

One thing most network administrators have in common is a lengthy list of tasks that must be performed repeatedly. In this lesson, you'll examine different types of scripting to tackle these sorts of tasks.

Introducing scripting

Welcome back to the class. Whether an organization is large or small, many systems and network administrators see their workloads increase each year regardless of the number of other administrators on staff. This is usually due to:

The installation of additional server systems to maintain
The increasing complexity of the applications and services these systems provide

Two examples of increasing application complexity over time are Microsoft Active Directory (as it evolved from Windows NT Directory Services) and Microsoft Exchange Server (as it evolved through various versions). Many techniques are available to properly manage these and other complex applications. This lesson covers one of the most beneficial: scripting. Scripting enables administrators to work more efficiently and save time. In addition, tasks completed through the use of a properly written script are repeatable, eliminating common errors due to the human factor. However, scripting is a powerful tool that must be implemented correctly to produce the desired results.

You can open a script with any text editor, such as Notepad. It's easy to see what's going on inside a script if you understand the language.

What's in a script?

Scripts, in their simplest form, are flat text files that contain a series of commands to be executed in a predefined order, much like batch files in MS-DOS and Windows 9x. This lesson examines the three most commonly used scripting languages in the Windows networking arena: Visual Basic Scripting Edition, Windows PowerShell and KiXtart. Each scripting language has unique characteristics, and thus each is suited to some tasks more than others. You'll explore common usages for each scripting language; from there you can examine your environment and find some tasks you'd like to start scripting.

Don't test scripts in a production environment; it's safer to try new scripts in a test environment whenever possible. If you must run an unproven script in your production environment, ensure it contains no obvious errors, and create a special user account in Active Directory to test all scripts.

As mentioned, scripting is only one method of managing complex environments. The next section helps you decide when to use scripting or another technique.

Knowing when to script and when not to script

Too much of a good thing can turn bad, and scripting is no exception. Although you can try to script nearly every task you must perform, you should determine whether it's practical. The guidelines used to make that determination are unique to every organization, but the following questions can help you make the right decision.

How often will I perform this task?

If you perform the same type of task daily or multiple times a day, the task is a good candidate for scripting. For example, in a larger network environment, creating new Active Directory user accounts and disabling Active Directory accounts are amenable to scripting.

How complex is the task to be performed?

Because one of the main benefits of a properly written script is the repeatability of the steps being performed, complex tasks tend to lend themselves to scripts. Complexity, in this sense, refers to the number of steps to be performed to complete the task and the level of difficulty of each step.

An example of a set of complex tasks is to create a large Active Directory site and subnet arrangement for a multinational company with numerous geographic locations. The general steps would involve:

1. Create many IP subnets.
2. Assign them to their corresponding Active Directory sites.
3. Create the required site links.

The script to accomplish these tasks would contain relatively few lines of code and a few arrays to store input data. Although the tasks can be performed manually, it's more efficient and accurate to script the process. And although this type of script might be used only once, it would result in an Active Directory site and subnet configuration that's implemented correctly, without the need to track down a random typo in the Active Directory Sites and Services console.

With every script you write, you'll learn more about scripting. Additionally, you'll often get more insight into the application or service for which you're writing the script.
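For comparison, here's what the core of such a script might look like in PowerShell, assuming the ActiveDirectory module from Windows Server 2012 or later; the site names and subnets are hypothetical, and a VBScript version would do the same work through the ADSI interfaces:

# Create sites, attach their subnets and link the sites for replication.
Import-Module ActiveDirectory
$sites = @{ "Memphis" = "10.1.0.0/16"; "Denver" = "10.2.0.0/16" }
foreach ($site in $sites.Keys) {
    New-ADReplicationSite -Name $site
    New-ADReplicationSubnet -Name $sites[$site] -Site $site
}
New-ADReplicationSiteLink -Name "Memphis-Denver" -SitesIncluded "Memphis", "Denver"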

Are the steps to be completed always the same?

As a good example, when you create new Active Directory user accounts in your organization, you're completing the same steps for every account created. The names and departments change, and their group memberships may be different, but you're performing the same general steps repeatedly with different input values. You can use scripting in this situation to quickly create accounts, and then devote your attention to more pressing needs.

How much time does it take to perform the task manually?

How often the task will be performed helps answer this question. Consider the example of finding and unlocking an Active Directory user account. The entire task takes about 30 seconds if the Active Directory Users and Computers console is already open, and about a minute if you need to open the console first. Is this task worthy of scripting? If you unlock only a few user accounts each week, you probably shouldn't invest the time to script the task. However, if you're providing a tool for help desk personnel who might need to unlock 40 user accounts a day, a small script to unlock a user account based on the sAMAccountName value (the login name) would be valuable.

How long does it take to write and test the script?

You generally shouldn't invest the time to write and test a script if one or more of the following are true:

The net gain is small.
The steps aren't repeatable.
The script won't ensure accuracy when performing complex tasks.

In every situation, remember that script writing is a method of saving time and effort, and of ensuring the accuracy of tasks. However, you should never devote so much time to script writing that your regularly assigned tasks and duties fall behind schedule.

Understanding VBScript basics

Now that you're armed with some criteria to help you determine when or when not to script, you're ready to examine the three scripting languages you're most likely to work with, starting with Visual Basic Scripting Edition. Visual Basic Scripting Edition (VBScript) is the ideal starting point for scripting projects and is readily available: it has shipped with every version of Windows since Windows 98. VBScript is a robust, mature scripting language with thousands of websites and dozens of books available for reference.

Creating a simple script with VBScript

Many administrators are unsure where to start with their first script, so let's use a simple example:

WScript.Echo "Hello World!"
WScript.Echo "Brought to you by VBScript."

To create and run this script:

1. Copy it to Notepad or any plain text editor.
2. Save it as hello.vbs.
3. Open a command window. In Windows Vista, for example, click Start, type Run in the Start Search text box, click Run in the Programs list that appears, type cmd in the Open text box and then press Enter.
4. In the command window that appears, navigate to the folder in which you saved the .vbs file.
5. Type cscript hello.vbs.

Figure 3-1 shows the results of the hello.vbs script.

Throughout this class, you're provided with sample scripts to download. Right-click the hyperlink to save the file to your computer. Be careful to save your script files with the correct extension, .vbs in this case.

Figure 3-1: The results of the hello.vbs script.

You can execute a VBScript in two different ways. As you've already seen, you can execute a script from the command interpreter using the cscript command, which invokes the console-based script host. For scripts that require more than one input or produce more than one output to the screen, the cscript method is usually best. Alternatively, you can execute a VBScript by double-clicking it to invoke the wscript.exe executable, which is the Windows-based script host. When you use wscript.exe, all input and output is displayed via standard Windows dialog boxes; if your script produces a lot of output, having to click through each dialog box separately can be annoying. So far so good, but you'll probably want to do something more constructive with your VBScripts.
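If you run console-style scripts often, you can make the console host the default so double-clicked scripts no longer produce a dialog box for every line of output. A one-time change, run from a command prompt:

cscript //H:CScript

The //H switch sets the default script host; you can switch back with cscript //H:WScript.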

Creating a more advanced script with VBScript

Let's look at a script named UnlockAccount.vbs that unlocks a user account (the first two lines are actually one line that wraps, as denoted by the space and underscore):

Set objUserAccount = GetObject _
("LDAP://cn=Sample,ou=Sales,dc=mycorp,dc=com")
objUserAccount.IsAccountLocked = False
objUserAccount.SetInfo

At first glance, this bit of script code can be difficult to decipher, so let's break down each of the lines into its separate steps.

The first line (remember, it wraps) tells the Windows Script Host (the automation environment in which your VBScript scripts are executed) that you want to make a Lightweight Directory Access Protocol (LDAP) bind to the user account object for an employee named Sample. The Sample employee account is located in the Sales organizational unit of the mycorp.com Active Directory domain. As with all LDAP binding strings, you work from right to left to get from the highest level of the LDAP tree to the lowest level: in this case, the actual user object you want to unlock.

The second line sets the locked-out status to False, which means the account is to be unlocked. The third line is the hammer that hits the nail, in a sense; it takes the changes you've made out of a cached workspace and commits them back to Active Directory. Until you execute the command in the third line of the script, no changes are actually made to the user account.

The script in this example isn't all that useful because the employee information is hard-coded into the script. You need a script that does the following when run by a help desk employee:

1. Prompts for the sAMAccountName of the employee in question
2. Searches Active Directory based on the input provided
3. Takes action on the results of that search

The script example shown in Figure 3-2, called PromptUnlockAccount.vbs, accomplishes these goals. This isn't something you'd be able to create on your own when you're first getting started with VBScript, so don't despair!

Figure 3-2: Advanced script that prompts for a user account name.

If you divide the script into four main sections, you can analyze what the script is doing and how it works to accomplish the desired result.

The first section shown in Figure 3-2, lines 1 through 3, sets some initial conditions and provides constants that assign a value to a human-friendly variable name. The second section, lines 5 and 6, requests input from the user and stores that input as a variable. The third section, lines 8 through 24, creates and executes a directory search against Active Directory to locate the user account with the sAMAccountName provided. The last section, lines 26 through 37, unlocks the specified account, provided that the account name entered previously is valid and found.

You can see how the if/then statement in this script breaks up the processing order, potentially ending the script before it reaches the end of the script file. If/then statements are powerful logic operators you can use anytime you need your script to make a decision on how to continue its execution. Note that this example script doesn't check whether the account is locked or not; that would require a much longer script.

Exploring PowerShell fundamentals

With our examination of VBScript behind us, let's move on and take a brief look at PowerShell. PowerShell is a relatively new scripting language that was known as Monad for several years during its development. Initially introduced as the primary means of scripting for Microsoft Exchange Server 2007, PowerShell is now being phased in as the preferred scripting language for all new Microsoft products. Other examples include Microsoft Windows Vista, Microsoft Windows Server 2008, Microsoft System Center Operations Manager and System Center Data Protection Manager, among others.

You can download PowerShell Version 1.0 from the Microsoft TechNet website. If you're installing it on Windows XP Service Pack 2 (SP2) or Windows Server 2003, you need to first download and install .NET Framework 2.0. For Windows Server 2008, you can install PowerShell 1.0 from the Server Manager.

Comparing PowerShell to other scripting languages


PowerShell is uniquely different from VBScript in two key ways:

All commands have a common implementation method and name.
Application developers who are internal to Microsoft and at third-party companies can freely produce extensions to PowerShell that provide PowerShell management for their applications.

As an example, Exchange Server 2007 includes the Exchange Management Shell (EMS), a specialized instance of PowerShell that can natively interact with Exchange. These application-specific extensions of PowerShell are accomplished by using special snap-ins to the base implementation of PowerShell. In that sense, the EMS simply calls the Exchange Server 2007 snap-ins as part of starting the EMS itself. These application-specific plug-ins don't take away from PowerShell or change its base functionality in any way; they simply act to extend the capabilities of PowerShell.

PowerShell can open a whole new world of scripting possibilities for you. Working with PowerShell, though, is different from working with VBScript and KiXtart (which you'll examine on the next lesson page). The two primary differences are covered in the following sections.

Running commands interactively

You can run your PowerShell commands interactively from within the PowerShell command interpreter. You can, but don't have to, save your commands in a PowerShell script file first. All commands, which are called cmdlets, consist of standard verb-noun pairs.

The following code is a basic PowerShell script. You can run the commands directly from the PowerShell command interpreter, or save them in a script file with a .ps1 extension:

Write-Output "Hello World!"
Write-Output "Brought to you by Windows PowerShell."

The results of the script, saved as hello.ps1, are shown in Figure 3-3.

Figure 3-3: Results of the hello.ps1 script.

Understanding cmdlet syntax

Each PowerShell cmdlet follows a consistent verb-noun pair pattern, such as Write-Output. The Help file in PowerShell describes how individual cmdlets work. You can get help on any cmdlet by using the Get-Help cmdlet, such as Get-Help Write-Output.
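A few examples of exploring cmdlets this way, all standard PowerShell commands:

Get-Help Write-Output            # basic help for one cmdlet
Get-Help Write-Output -Detailed  # adds parameter descriptions and examples
Get-Command -Verb Get            # list every cmdlet that uses the Get verb

Because every cmdlet follows the same verb-noun pattern, Get-Command is a quick way to guess your way to the cmdlet you need.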

By default, PowerShell's execution policy is Restricted, which prevents the execution of any script files. You can change this setting with the Set-ExecutionPolicy RemoteSigned command, which specifies that scripts created on the local computer can run without restriction but scripts from other sources must be digitally signed.
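For example, you might check the current policy and then relax it for locally written scripts. Both are standard PowerShell commands; changing the policy typically requires administrative rights:

Get-ExecutionPolicy               # shows the current policy; Restricted by default
Set-ExecutionPolicy RemoteSigned  # allow local scripts; require signatures on downloaded ones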

Examining more advanced PowerShell scripts

The following PowerShell code determines the free space available on all of the hard drives in the local computer. The backtick symbol (`) is the line-continuation character in PowerShell, so this command is actually a single line although it appears to be three lines long:

Get-WmiObject win32_logicaldisk -filter `
"drivetype = 3" `
| % { $_.deviceid; $_.freespace/1GB }

In this script (or command, if entered directly into the PowerShell command interpreter), the first action performed is a WMI query that gets a listing of local hard drives in the computer. The results of that query are then piped, via the pipe character (|), to the second part of the script, which outputs each drive letter and its free space in gigabytes. Running the script on your computer should generate output similar to the following:

C:
70.8117752075195
K:
417.009613037109

You can use PowerShell to accomplish almost any task you want to automate, although version 1.0 does have some weak points. For example, PowerShell scripting of Active Directory can be difficult, often requiring you to use native Active Directory Services Interface (ADSI) or .NET coding techniques to accomplish your desired outcome. You can take the default approach, using only the native capabilities available to you and thus ensuring your PowerShell scripts can run without any add-ins, or you can download and use PowerShell extensions, such as ActiveRoles from Quest Software, to make your Active Directory scripting easier.
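To see how later additions close this gap, here's a minimal sketch of the earlier account-unlock example rewritten with the ActiveDirectory module that ships with newer Windows Server versions (it isn't available in PowerShell 1.0 without add-ins); the prompt-search-act flow mirrors PromptUnlockAccount.vbs:

Import-Module ActiveDirectory

# Prompt for a login name, find the matching account, and unlock it.
$sam  = Read-Host "Enter the sAMAccountName to unlock"
$user = Get-ADUser -Filter "sAMAccountName -eq '$sam'"

if ($user) {
    Unlock-ADAccount -Identity $user     # commit the unlock to Active Directory
    Write-Output "Account $sam unlocked."
} else {
    Write-Output "No account found for $sam."
}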

As an example of what PowerShell can accomplish natively, consider the problem of setting the NTFS ownership attribute on a user's home folder and its contents. To accurately enforce disk space quota settings or perform accurate Storage Resource Management (SRM) reporting on the user's disk space utilization, the correct ownership attribute must be set. Although you could manually set the ownership information on each user's home folder in your organization, using a script is a better solution. The Set-HomeFolderOwners.ps1 script, shown in Figure 3-4, sets the home folder ownership information for you with a simple input file that provides a list of share paths in UNC format, such as \\server1\departmentA.

The Set-HomeFolderOwners.ps1 script assumes the home folder name is the same as the sAMAccountName (login name) for each user. This is the default in many organizations. Don't attempt to run this script without modifications if your organization uses a different convention.

Figure 3-4: The Set-HomeFolderOwners.ps1 script.

If you divide the script into five main sections, you can analyze what the script is doing and how it works to accomplish the desired result: The first section shown in Figure 3-4, lines 1 through 4, reads in the input file with the list of department share paths. The second section, line 6, creates an array of child folder paths based on the input folder path given. The third section, lines 7 through 24, creates another array of child folder names. The array contains only the name of the individual folders that would correspond to the users' home folders in the department share. The fourth section, lines 26 through 50, assigns the NTFS ownership attribute to the home folder and its contents. It uses the name of the folder itself as the sAMAccountName (user name) to assign as the owner. The fifth section, line 53, loops back to the top of the script to begin work again on the next department share path listed in the input file.
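As a simplified illustration of the core operation in the fourth section, the following sketch sets the owner of a single home folder. The server, domain and folder names are hypothetical, and assigning ownership this way requires sufficient privileges on the file server:

# Minimal sketch: make the user whose login name matches the folder name
# the NTFS owner of that folder. All paths and names are hypothetical.
$folder = "\\server1\departmentA\user42"
$sam    = Split-Path -Path $folder -Leaf     # folder name doubles as the login name

$acl   = Get-Acl -Path $folder
$owner = New-Object System.Security.Principal.NTAccount("YourNetBIOSDomain", $sam)
$acl.SetOwner($owner)                        # stage the new owner on the ACL object
Set-Acl -Path $folder -AclObject $acl        # commit the change to the folder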

Note that Set-HomeFolderOwners.ps1 doesn't check whether the Active Directory user account matches the child folder name. For example, if the full path is \\server1\departmentA\user42, the script doesn't check whether user42 is a valid sAMAccountName. If the folder name doesn't match a valid sAMAccountName, the script issues an error report and continues processing.

Now that you've seen some PowerShell examples, let's take a brief look at KiXtart scripting.

Examining KiXtart essentials

KiXtart is the oldest of the three scripting languages discussed in this lesson, having gotten its start in 1991. At that time, logon scripting in Windows LAN Manager networks didn't exist, so a Microsoft engineer created KiXtart as a side project to provide a solution. Almost 20 years and many revisions later, KiXtart is more popular than ever and remains the easiest and most preferred scripting language for creating logon scripts. One of the primary reasons to implement logon scripts in your organization is to standardize network mapped drives on workstations at the time of logon.

You can download KiXtart from the KiXtart website. Once downloaded, place the extracted files in your domain's NETLOGON share, keeping the folder structure intact.

KiX scripts, which have a file extension of .kix, are usually called by a batch file (.bat) during the logon process. The following script example is a logon batch file named CallKiX.bat, which calls a KiX script (MapDrives.kix, shown next) to map network drives:

@echo off
REM Remove shares
net use /d * /y
REM Assign Shares
\\YourNetBIOSDomain\NETLOGON\KixLocation\kix32.exe MapDrives.kix
REM Set Time
net time /Domain:YourNetBIOSDomain /set /y
REM Quit!
EXIT

The following KiX script, named MapDrives.kix, actually maps the drives. It runs quickly and makes decisions based on Active Directory group memberships, which can't be done this easily in a batch file:

;;; Map S drive for everyone.
use S: /delete
use S: "\\server02\ShareFolder"

;;; Map department shares by security groups
if ingroup ("YourNetBIOSDomain\FIN - BudgetData")
use T: /delete
use T: "\\server01\FIN$\BudgetData"
EndIf

if ingroup ("YourNetBIOSDomain\HR - Training")
use Q: /delete
use Q: "\\server03\HR$\TrainingDocuments"
EndIf

Although you can use a VBScript file as a logon script, using KiXtart is recommended. On a small percentage of Windows workstations in production, changes sometimes occur that render the workstation incapable of properly executing VBScript. If you use KiX scripting for your logon scripts, they're far more likely to execute properly regardless of what's changed on the workstations.

The next KiX example, named GetDiskSpace.kix, quickly tells you how many gigabytes of free disk space exist on drive C in your computer. The possibilities to modify and extend the script are bounded only by your imagination and needs:

$SpaceResult = GetDiskSpace( "C:\" )
$SpaceResult = $SpaceResult/1024
MESSAGEBOX ("$SpaceResult", "Free Space (GB)", 64)

Figure 3-5 shows the output you can expect from this KiX script.

Figure 3-5: A simple KiX script.

Moving on

In this lesson, you learned the basics of three common scripting languages used by network administrators responsible for Windows server systems. Additionally, you examined some actual scripts you can use in your organization or as the basis for writing your own. In Lesson 4, you'll get an introduction to network monitoring and why it's important to your organization. Before moving on, complete the assignment and take the quiz for this lesson.

Assignment #3

There are thousands of scripting resources available to you on the internet and in printed format. Review these resources now, and bookmark the web addresses for future reference:

Microsoft TechNet Script Center: Offers numerous resources, including multiple script repositories for VBScript, PowerShell and other scripting languages
Microsoft TechNet Script Center Script Repository: A direct link to the repository
KiXtart.org: The official home of the KiXtart scripting language
Admin Script Editor (ASE) Script Library: A searchable site of VBScript, PowerShell and KiXtart scripts, among others
The Script Library: Another source of free example scripts

(Optional) After browsing these scripting resources, write a simple script that accomplishes something you need done on your network.

Quiz #3

Question 1: Which methods of execution are available for VBScript files? (Check all that apply.)
A) wscript.exe
B) vscript.exe
C) sscript
D) cscript

Question 2: True or False: You can execute Windows PowerShell cmdlets in a script or individually in the PowerShell command interpreter.
A) True
B) False

Question 3: Of the scripting languages examined in this lesson, which are included by default on a Windows Server 2003 server system?
A) VBScript
B) KiXtart
C) PowerShell
D) None of the above

Question 4: Of the scripting languages examined in this lesson, which one always uses a standard verb-noun pair for all commands?
A) VBScript
B) KiXtart
C) PowerShell
D) None of the above

Implementing a proactive network monitoring infrastructure

In today's complex and fast-growing IT infrastructure, you must have a proactive monitoring system in place. In this lesson, you'll examine what you can do to stay informed of your systems' status and prevent service interruptions.

Welcome back to the class. You might be familiar with the popular management quote, "If you can't measure it, you can't manage it." As it relates to systems and network administration, a better derivative might be, "If you can't monitor it, you can't manage it." The core of proactive management is the ability to monitor your systems to understand how they're performing and to know when they're experiencing issues you must address.

Why monitoring is critical

Many SMB environments have dozens or hundreds of server systems, server appliances and other network-related hardware devices. Each type of system serves a different purpose and thus provides different services to your end users. In the past, when SMBs generally ran fewer systems, you could visually inspect the IT environment each day to check for amber lights or other symptoms of problems. That simplicity no longer exists except in the smallest installations.

If a system isn't performing well because of overutilization, or if a required service is no longer responding to client requests, you can't provide what your end users need to perform their jobs. For example, a database server with too many client connections affects the performance and usability of the applications that depend on it. If the Exchange Store service on your Microsoft Exchange Server email system stops, your end users will be unable to access their mailboxes. These types of issues are more easily solved with proactive than with reactive techniques.

However, you occasionally have to respond to a situation in reactive mode, which usually requires immediate attention. Consider one of the most common warranty service items: the failed server hard drive. In today's server systems, it's common to protect important or critical data with redundant arrays of disks so that a single hard drive failure doesn't bring down the entire system. Wouldn't it be most efficient to know immediately when a hard drive fails, so you can replace it as soon as possible and potentially avoid the failure of another hard drive in the same array? The same principle applies to a power supply or fan, both critical components in the proper cooling process for a server system. Being proactive and working to head off small problems before they become large problems is what monitoring is all about. You can use proactive monitoring to indicate when you need to be reactive to keep a system running in optimal condition.

Now that you understand why proactive monitoring is critical to the success of your IT environment, the next area to examine is how you'll determine what to monitor. Deciding what, exactly, to monitor isn't always an easy task. Some administrators want to monitor everything that occurs on every system, whereas other administrators monitor only the most critical items or, sometimes, the items that are easiest to monitor. Often, the best solution falls somewhere in the middle. Just as it's not practical or efficient to monitor everything, it makes little sense to monitor only a small subset of items.

Which systems and processes to monitor

You need to monitor enough items to get an accurate picture of what's going on in your network environment. Monitoring too little prevents you from seeing what you need to see, and often leads to reactive measures. Monitoring too much results in information overload, which leads to alerts being disregarded without the required investigation. The middle ground varies within each organization. As you'll learn on the next lesson page, what you monitor, and how you monitor it, is sometimes controlled by your monitoring solution. However, the following two sections list items that most businesses should monitor. The lists are not all-inclusive, nor do they apply to every business type and size, so pick and choose what you need or add items not listed here.

Basic monitoring

These items are the basic foundation of information about your network and are commonly monitored:

System up/down status via PING (applies to server systems, network components such as routers and other devices on the network)
Service status (running or stopped)
Disk free space (always a good thing to keep track of)

Specialized monitoring

These items should be monitored in most environments, the exceptions being very small environments or those in which the cost and staff required would present a hardship:

Hardware status: Includes disk failures and power supply failures
Application performance: Includes transactions occurring at the desired rate or below
Process performance: Includes all steps in a complex multi-server process occurring as required
System changes: Includes changes being made to a monitored system
Scheduled job/task status: Includes jobs completing or not completing

How to implement monitoring

Now that you've got a general idea of what you need to monitor, move on to the next section to pick up tips for implementing a monitoring solution. When you start implementing a proactive monitoring solution, it's helpful to break up monitoring into three different groups:

General-purpose network monitoring
Hardware-specific monitoring
Application-specific monitoring

This division of monitoring makes the most sense to many organizations due to the nature of the items to be monitored and the type of monitoring solutions available. The following sections examine each of these monitoring types. Although this lesson splits monitoring into three distinct sections, you can often monitor multiple areas with a single monitoring application. As an example, most hardware monitoring applications provide general information about the status of network hardware; that is, whether it's responding to a PING request or not. Don't feel compelled to obtain multiple monitoring applications unless you have a valid need and implementation path for them. Similarly, don't waste time and effort with a single monitoring application when it's obvious you need more than one.
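Even before you select a monitoring product, you can script spot checks of the basic items listed earlier. Here's a minimal PowerShell sketch, assuming PowerShell 2.0 or later for Test-Connection; the server and service names are hypothetical:

# Minimal sketch: basic up/down, service and disk checks for one server.
$server = "server01"                          # hypothetical server name

# System up/down status via PING
if (Test-Connection -ComputerName $server -Count 2 -Quiet) {
    Write-Output "$server responds to ping"
} else {
    Write-Output "$server is DOWN"
}

# Service status (running or stopped)
$svc = Get-WmiObject Win32_Service -ComputerName $server -Filter "Name='Spooler'"
Write-Output "Spooler service is $($svc.State)"

# Disk free space on fixed disks
Get-WmiObject Win32_LogicalDisk -ComputerName $server -Filter "DriveType = 3" |
    ForEach-Object { Write-Output ("{0} {1:N1} GB free" -f $_.DeviceID, ($_.FreeSpace / 1GB)) }

Scheduled to run periodically, even a simple check like this gives you the green-or-red picture described in the next section.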

HP Systems Insight Manager (SIM)

HP SIM is a free download from HP. You can use SIM to monitor and manage all types of HP server systems running the Microsoft Windows or HP-UX operating systems.

General-purpose network monitoring

Almost every organization performs general-purpose network monitoring; it's usually the least expensive and easiest form of monitoring to perform. Although more complex monitoring solutions are available, such as hardware- and application-specific monitoring, general network monitoring still has a valuable place in the organization.

Consider an organization with an operations office that's staffed around the clock. In this office, a systems operator is responsible for answering the help desk phone after hours, monitoring the status of backup jobs overnight and monitoring a large liquid crystal display (LCD) that shows the status of the equipment in the data center. The application providing the display on the LCD makes it easy for the systems operator to determine whether a system is OK (displayed in green) or is experiencing an issue (displayed in red). When a system stays in a red condition for more than 15 minutes, the systems operator calls the responsible administrator to troubleshoot and resolve the problem. A general network monitoring application can easily provide this simple and easy-to-understand picture of what's going on in the data center.

Network administrators can also benefit from general network monitoring as a quick-and-easy tool to get the status of their equipment in the data center. Sometimes it's easier to look at a simple dashboard and see green, or the absence of red, and know that things are okay. Two popular general-purpose network monitoring applications are:

SolarWinds ipMonitor
Big Brother Software

Each of these general-purpose network monitoring applications can monitor multiple types of equipment, including server systems and network devices, such as switches and routers. Additionally, they have role-based access, dashboards and various views for easy problem determination, and fully featured alerting via email and other methods. These monitoring applications can provide monitoring of service status and disk space, as well as events that are written to the event logs. Most organizations start their monitoring efforts with one of these or a similar application. Even if you begin with hardware- or application-specific monitoring in your organization, don't forget about general-purpose network monitoring. These products are usually inexpensive and provide a wealth of usable information to IT department employees at all technical and skill levels. Some of the hardware and application monitoring solutions require complex configurations, and produce complex, detailed alerts that aren't always suitable for every IT staff member.

Hardware-specific monitoring

Hardware-specific monitoring is sometimes overlooked by organizations but is important in gaining an overall picture of your network environment. Most major server system vendors provide a license to use their specific hardware-monitoring application when you purchase a server system. When selecting a server vendor, closely review and compare their monitoring applications to ensure you'll get the functionality you need.

Some hardware monitoring applications enable you to monitor vendor A's server systems in vendor B's monitoring application. If you have the time and resources available, it's usually best to maintain a hardware monitoring application from each of your hardware vendors. Another option is to purchase an independent third-party hardware monitoring application that can monitor multiple vendors' equipment.

The following are desirable features of a comprehensive hardware-monitoring application:

Native hardware monitoring capabilities for multiple hardware types, including servers and storage systems
Consistent look and feel across different hardware platforms and operating systems
Automatic discovery of eligible systems
Warranty and contract support management
Role-based security
Version control to help keep server systems' firmware and other applications up to date
Automated alert notification when a critical situation occurs, such as failure of a hardware component
Flexible fault and event handling that allows for multiple actions to be taken when an alert condition occurs, such as sending email or running a script
Secure communication between management station and managed systems
Plug-ins for other components, such as storage management or image deployment
Easy, wizard-based installation that helps you make the right installation and configuration decisions

Application-specific monitoring

Application-specific monitoring enables you to monitor the operation, health and availability of applications on your network. Examples of applications that organizations should closely monitor include email, databases, Active Directory, Research in Motion Blackberry Enterprise Server and Microsoft IIS, among others. Monitoring of applications, however, is significantly more complex than monitoring hardware or the network. You have to know which key processes, services and indicators to monitor, and how to compare them against acceptable values. Although many applications have some rudimentary monitoring capabilities built in, you might have to purchase a more advanced application to perform the appropriate level of application monitoring.


Microsoft System Center Operations Manager 2007

On Microsoft Windows networks, a good application monitoring product is Microsoft System Center Operations Manager, which is part of the System Center suite. After installation, you can configure Operations Manager to monitor specific applications by importing management packs, which can be freely downloaded from the Microsoft website.

Some of the applications Operations Manager is capable of monitoring (with free management packs) include:

Windows operating systems
Exchange Server, Microsoft SQL Server and IIS
Active Directory, DNS, DHCP and Windows Internet Name Service (WINS)
Microsoft Office SharePoint Server and Windows Terminal Services
Microsoft Data Protection Manager
Windows Group Policy
Major vendors' server and storage systems

Operations Manager includes the following specific features, among many others:

Automated alert notification when a monitor exceeds a threshold
Role-based security
Multi-tiered architecture that suits any size organization
Monitoring definitions that can be tuned and overridden, as required
General network monitoring functionality, such as system up or down status
Extendable through the installation of free and for-a-fee management packs for hardware and software (Microsoft and third-party)

Operations Manager combines all three monitoring types into a single interface and alerting system, which is critical in a growing environment.


You can get more information about System Center Operations Manager 2007 on the Microsoft website.

Tools4Ever Monitor Magic


Another excellent, although less featured and less costly, application monitoring tool is Monitor Magic from Tools4Ever. Like Operations Manager, Monitor Magic provides general monitoring, hardware monitoring (in limited fashion) and application-specific monitoring (limited to a handful of applications). For a small organization on a tight budget, Monitor Magic is definitely worth looking into. Monitor Magic includes the following features, among others:

Automated alert notification when a monitor exceeds a threshold
General network monitoring functionality, such as system up or down status
Limited support for Linux and Unix systems and other network components

You can get more information about Monitor Magic on the Tools4Ever website.

There are many other sources of free or low-cost application monitoring solutions available on the internet. Some of the better free ones are Windows Sysinternals and tools by SolarWinds.

Moving on

In this lesson, you learned about system and network monitoring. In Lesson 5, you'll learn how to manage and maintain your organization's critical email system. Before moving on, complete the assignment and take the quiz for this lesson.

Assignment #4

Lesson 4 examined several types of monitoring applications for SMB environments. For this assignment:

1. Review your network's general operations to determine the areas in which you can improve your monitoring capabilities.
2. Research and compare the solutions described in Lesson 4 with other solutions you learn about on the internet, and then decide which solution or solutions you should implement.

Quiz #4

Question 1: True or False: Proactive monitoring can help you determine when a server system isn't providing the required level of service to customers.
A) True
B) False

Question 2: Other than basic performance metrics and status indicators, which of the following are recommended that you monitor in your IT environment? (Check all that apply.)
A) Hardware status
B) Application performance
C) Process performance
D) System changes
E) Scheduled job/task status

Question 3: Which type of monitoring solution generally provides an easy-to-read dashboard view that's ideal for most network personnel to use?
A) General-purpose network monitoring
B) Hardware-specific monitoring
C) Application-specific monitoring
D) None of the above

Question 4: True or False: If you implement an application-specific monitoring solution, you don't need a hardware-specific monitoring solution.
A) True
B) False

Managing email services and utilization

Email is a vital service to network users; thus, it's a critical system you need to maintain and keep operating in optimal condition. In this lesson, you'll examine ways to manage, protect and secure this vital business resource.

Welcome back. In Lesson 4, you learned how to implement a proactive network monitoring infrastructure, which includes real-time system status notifications that help prevent service interruptions. This lesson shifts the focus to email systems, offering best practices for management and protection.

Exploring regulatory controls that impact email systems

Many vital issues are involved in managing and maintaining an email system, including a myriad of tasks an IT department must perform to ensure continual end-user access to email. Management of your email system isn't just a matter of technical complexity but of regulatory law as well. Depending on your industry sector, your business might be subject to one or more federal and/or state regulations that impact the operation and use of your email system. Some of the more common federal regulations are described in this section.

17 CFR 240.17a-4

17 CFR 240.17a-4, commonly referred to as SEC Rule 17a-4, falls under the Securities Exchange Act of 1934. The rule was amended in 1997 to address the storage and retention of electronic records, such as email, by stock exchange members, brokers and dealers. Under the stipulations of this federal law, records must be maintained for three or six years, depending on the record type, with the most recent two years of records kept "easily accessible."

You can view the full text of 17 CFR 240.17a-4, and the related 17 CFR 240.17a-3, which defines record types, by visiting the U.S. Government Printing Office website. In addition, this PDF document provided by Kahn Consulting is a good overview of 17 CFR 240.17a-4.

Don't use the information presented in this class as your only resource when planning your organization's regulatory controls. The regulatory information provided here is for reference only and illustrates the impact on IT departments.

The Sarbanes-Oxley Act of 2002

The Sarbanes-Oxley Act of 2002, also known as Public Law 107-204 or SOX, was enacted in the United States in July 2002 as a direct response to an alarming number of major corporate accounting scandals that shook the U.S. financial markets. Some of the fraudulent companies that were highly visible in the news included Enron, WorldCom and Tyco International. The losses from these scandals cost investors billions of dollars and greatly affected the public's confidence in the securities markets as well as big business in general.

After the initial enthusiasm and excitement wore off, companies were left to figure out exactly what the Act meant to them. This proved to be a challenge, due largely to the vague language of Section 404(a) of the Act, as follows:

(a) Rules Required.--The Commission shall prescribe rules requiring each annual report required by section 13(a) or 15(d) of the Securities Exchange Act of 1934 (15 U.S.C. 78m or 78o(d)) to contain an internal control report, which shall--
(1) state the responsibility of management for establishing and maintaining an adequate internal control structure and procedures for financial reporting; and
(2) contain an assessment, as of the end of the most recent fiscal year of the issuer, of the effectiveness of the internal control structure and procedures of the issuer for financial reporting.

In summary, the Act imposes strict accounting, reporting and certification rules regarding the financial details of a publicly traded company. The requirements of Section 404(a) of the Act are fully felt by IT departments large and small, given the widespread use of email and other electronic media systems as business tools for communication.

You can view the full text of the Sarbanes-Oxley Act of 2002 by visiting the U.S. Government Printing Office website. Scroll down until you locate "Pub.L. 107-204." In addition, Secure Computing offers an article that provides some insight into how Section 404(a) of the Sarbanes-Oxley Act of 2002 impacts email specifically.

The Health Insurance Portability and Accountability Act of 1996

The Health Insurance Portability and Accountability Act of 1996, also known as Public Law 104-191 or HIPAA, helps protect health insurance coverage for workers and their families and standardizes the use of electronic record keeping in the U.S. medical community. The Act has two main parts that work in conjunction but address specific aspects of insurance, healthcare and protected health information:

Title I: Addresses the availability of group and some individual health insurance plans, and might best be known for increasing the insurability of many Americans with pre-existing conditions, especially during job loss or change

Title II: Directly impacts IT departments by governing the use and transfer of healthcare information, known as protected health information (PHI), and specifically electronic protected health information (EPHI)


The language in HIPAA that addresses security of EPHI covers the technical capabilities of record systems used to maintain health information, the costs of security measures, training for persons who have access to health information and the need for audit trails. In addition, HIPAA legislates safeguards that ensure "the integrity and confidentiality of the information" by protecting against threats or hazards, and unauthorized use or disclosure. 45 CFR 164.502, issued by the U.S. Department of Health and Human Services and revised in October 2007, specifies minimum reasonable efforts that healthcare organizations and their IT departments must take to protect health information. The provisions of HIPAA stretch far beyond email systems; however, given the relative ease of transmitting information via email, HIPAA directly impacts email systems in healthcare organizations.

You can view the full text of HIPAA and 45 CFR 164.502 on the U.S. Government Printing Office website. This PDF document details how the University of California enacted controls in response to the requirements of the Security Rule in Title II of HIPAA.

Examining technical issues that impact email systems

Now that you understand the essentials of key regulatory issues that impact email, let's examine some of the technical issues faced by systems and network administrators.

The essential technical requirement of any email system is to enable users to send and receive email; however, storing, deleting, archiving and re-accessing stored or archived messages are also important. The following sections briefly describe issues involved with email system management.

Storing email

Even in SMB environments, the demand for email storage capacity is considerable. The typical volume of email storage is now in the gigabyte to terabyte range, depending on the size of your organization and the retention requirements in place. Unfortunately, greater storage space means greater expense, and you face the usual conflict of need versus cost. Because you don't have an infinite amount of space available to your email system to store messages, how do you manage this problem?
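Getting a handle on where the space goes is a good first step. If you run Exchange Server 2007, for example, the Exchange Management Shell mentioned in Lesson 3 can report the largest mailboxes; a quick sketch with a hypothetical server name:

# Minimal sketch: list the ten largest mailboxes on one Exchange 2007 server.
Get-MailboxStatistics -Server "MAILSERVER01" |
    Sort-Object TotalItemSize -Descending |
    Select-Object -First 10 DisplayName, TotalItemSize, ItemCount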

Deleting email

Email has a habit of multiplying at an alarming rate. Although business email represents formal corporate records, the content can be anything from a sales proposal to the latest joke buzzing around the internet. In other words, not every message in a business email system needs to be retained. A common solution to a burgeoning number of email messages is to limit the amount of space each end user may use within the email system, and to send users warning messages as they near their allotted limit. This enables users to select which emails they no longer need and to clean out their folders.
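In Exchange Server 2007, for example, you can set these limits per mailbox from the Exchange Management Shell; a minimal sketch with hypothetical account and quota values:

# Minimal sketch: warn at 450 MB and block sending at 500 MB for one mailbox.
Set-Mailbox -Identity "jsmith" `
    -UseDatabaseQuotaDefaults $false `
    -IssueWarningQuota 450MB `
    -ProhibitSendQuota 500MB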

You might encounter a few users who, for various reasons, either refuse to delete unneeded emails or just neglect to tackle the job. To counter this problem, you can create a policy that states that emails in accounts that are full will be deleted automatically or manually by IT staff. Consult with your HR and legal departments, as well as other key administrators, before enacting a policy that allows for email deletion by anyone other than the end user. Because emails are official records, randomly deleting such records could cause business and legal complications for the company and the end user.

If your company decides not to implement a deletion policy, at the least, inform your users that once the limit is reached, emails that are sent to them will bounce back to the sender, which can cause confusion, delays and frustration. Your company's clients and other associates can get a particularly negative impression of your company if they must endure repeated email bounce-backs.

Archiving email

Rather than deleting emails to make room on the server, you can require that users archive older emails when their storage space is nearing the limit. Microsoft Outlook, for example, offers this option to end users. Archiving via Outlook removes selected email messages from the email system and stores them on the user's hard drive in .pst format. This option is usually voluntary though, and you might encounter some users who decide not to archive their email. Archiving email to local files, such as Outlook's .pst files, can be counterproductive if you intend to implement a companywide email archiving solution. If email archiving is to be used effectively, you need all archived emails in the single archiving system rather than scattered over hundreds or thousands of workstations and network file shares.

Another option is for you to configure your email system to transfer all emails older than a certain date to an email archiving system to free up space within the email system. If certain emails need to be retrieved, they can be restored to the email system from the archiving system.

Ultimately, the most reasonable solution to your storage space dilemma is to combine the deletion of unneeded emails and archival of required messages.

Accessing stored or archived email

Storage and access are two sides of the same coin. It's difficult to address one without mentioning the other. Your users have to be able to retrieve their stored emails from the system to read and manipulate their messages. This isn't the same as a user's ability to check for new mail.

There are times when it's vital for a user to locate and open a particular email that's already been read. You must have the ability to organize and control how stored emails are accessed, particularly in an important business transaction or in response to a subpoena or court order. Failing to do so can result in significant cost to your company and damage to its public image, causing further financial impact. As the number of stored emails grows, and the places in which they're stored become more widespread, locating a particular email can be difficult. Email retrieval really comes down to two options: leaving the email in the email system or moving some of the email to an archiving system.

Using your email system

The simplest solution, in one respect, is to store all company email in your email system. End users can access their email, regardless of age, and the information is searchable by header or body content, making individual messages easy to locate. Although this might appear simple and easy, the drawback is that your email system must always have adequate storage space, and the amount required grows substantially every year. Even if users judiciously delete unneeded emails, this isn't always a practical solution.

Using an archive system

Instead of having emails archived as .pst files on the user's local hard drive or spread across your file servers, you can use a dedicated archive system for this purpose. Using an archive system enables you to keep all older emails in a single, searchable location for easy access, and enables the IT department rather than the end user to be in control of storage and retrieval. Not only does .pst usage complicate any companywide archiving effort, accessing .pst files over the network can negatively impact your file servers. The Ask the Performance Team blog entry, Network Stored PST files ... don't do it!, offers some insights on this subject. Now that you've reviewed some of the general technical concerns around email systems, it's time to look at some specific security steps you can take. Virus and spam controls are covered next.

Implementing spam and virus controls


Preventing spam

Outside of virus-infected attachments, spam is perhaps your biggest email-related problem. Spam has gone beyond the nuisance "junk mail" stage and developed into a full-blown intrusion. Unwanted email traffic reduces your bandwidth, needlessly clogs your mail server and wastes the time of your staff and end users. Plus, if your mail server gets turned into a spam relay (described later in this section), you've got a real dilemma. There are a number of ways to minimize, or even prevent, the spam your network receives. The following sections describe some of the more common methods.

Educating the end user

The first, best step is policy-based rather than technical. An email policy should address the conditions under which end users may disclose their company email addresses. You can significantly reduce the amount of spam received by your mail server by restricting users from posting their email addresses on public websites. To facilitate this, encourage users to use a secondary or personal email address rather than their primary business email address when engaging in communications that aren't strictly business-related. Create a policy whereby users aren't allowed to respond to spam, especially by clicking a link that claims it'll be used to remove them from a spammer's mailing list. This is a common method used by spammers to confirm that an email address is valid. Once an address is validated, the spammer can easily overload your mail server.

Filtering spam at the mail server

You can use various software applications to filter email at the point it enters your system, routing any mail identified as spam to a separate destination. Spam filters usually examine the header and body for keywords or terms that commonly appear in spam. Because no spam filter is perfect, some spam will still get through. Also, some mail identified as spam might actually be legitimate. For this reason, don't configure your spam filter to automatically delete mail tagged as spam. Review any mail marked as spam to make sure no legitimate mail has made it into the spam folder.
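To illustrate the keyword approach (a toy sketch, not a production filter), here's a PowerShell fragment that flags a message subject against a list of terms; the terms and subject are hypothetical:

# Toy sketch: flag a message if its subject contains a known spam term.
$spamTerms = @("winner", "free money", "act now")          # hypothetical term list
$subject   = "You are a WINNER - claim your free money"    # hypothetical subject

$isSpam = $false
foreach ($term in $spamTerms) {
    if ($subject -match [regex]::Escape($term)) {          # -match ignores case by default
        $isSpam = $true
        break
    }
}

if ($isSpam) { Write-Output "Route to spam folder" } else { Write-Output "Deliver to inbox" }

Real filters layer many such tests (plus scoring, black lists and heuristics), which is why false positives still occur.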

Filtering spam at the client

You can also filter mail using the mail client on each individual PC in your organization. Filtering at the client rather than the server, however, can result in more work for your IT staff, depending on how many computers you're responsible for. When it's reasonable to filter at the client level, you can use Outlook's built-in spam filter feature, or install a third-party spam filtering program to accomplish the same purpose.

Creating a black list

Some organizations keep lists of known spammers, and you can access one of these lists and use it to filter out any mail from them. This solution is best to employ on your mail server or a gateway device, such as a firewall, to prevent spam from entering your system.

Conducting reverse DNS lookups

This solution isn't quite as effective as it once was. In the past, spammers frequently used spoofed or invalid IP addresses that didn't match the domain name they were accessing. Using reverse DNS lookups, if your mail server received mail from an IP address that didn't match the domain name, the mail would be tagged as spam. However, spammers use spoofed IP addresses less frequently now. Also, this method can result in some false positives, marking legitimate mail as spam.

Preventing spam relaying

Keeping your email system from being used as a spam relay is a critical job. An open relay is an email system that, intentionally or otherwise, enables anyone to send email through it. Companies and organizations that lack the technical expertise can inadvertently allow their email systems to become open relays, enabling spammers to take full advantage of them. Preventing email relaying is important for all email systems, both inside your firewall and external to it, but it's critical for external systems because they interact directly with the internet. You may have softer controls for internal email systems. A relay uses your bandwidth and degrades your server's performance. Plus, if you're identified (however falsely) as a spammer, you'll be added to a black list. That means mail originating at your mail server will be identified as spam by at least some businesses, preventing or at least delaying email communications with your partners and customers, which can impede commerce. Here are some strategies you can implement to protect your mail server:

Limit relaying: Restrict your mail server relay service to use only specific IP addresses or, even better, require authentication.

Change default passwords: Even if you require authentication, if your postmaster account's password is set at the default, it won't be long before a spammer figures it out and freely sends spam through your server. Change the default password, rename the account or even disable it to prevent it from being used against you.

Set a time-out for failed Simple Mail Transfer Protocol (SMTP) commands: Spammers try to use invalid SMTP commands to gain control of mail servers. If you enable spammers to issue an unlimited number of commands, they can eventually compromise your server. Most mail server software has a feature that, when configured, drops the connection after a certain number of failed commands.

Block known spammer IP addresses: This is the same concept as creating a black list. You can identify the IP addresses from which spam originates and block those addresses at your firewall. You might also consider blocking a range of addresses because spammers use numerous IP addresses within a single range or multiple ranges.

Monitor your mail server: Periodically monitor traffic to and from your mail server using a packet sniffer, such as Snort. You can then determine whether your server has been compromised or an attempt is underway to turn it into a spam relay.

Keep up with security patches: Vulnerabilities are occasionally discovered in software, and spammers along with others can exploit those vulnerabilities and waltz right through your security. Make sure your mail server is patched with the latest updates.

Stopping email-borne viruses and malware

Email-borne viruses are still rampant on the internet, so you need to protect your email system, its users and the network as a whole. Because email is a revolving door into your network, it's a likely place for viruses and other forms of malware to enter your protected internal network. Some steps you can take to combat the threat of email-borne viruses and malware include:

Implement an antivirus solution: Select an antivirus product that's certified to work with your email system by the vendor who created your email system. Failure to do so can result in unpredictable system performance and can render your antivirus controls ineffective.

Implement multiple levels of protection: Implement antivirus scanning at your internet SMTP gateway, usually located in your demilitarized zone (DMZ) network. Also implement antivirus scanning on your internal email systems to provide multiple levels of protection.

Ensure that required services and processes are running: Use monitoring of some sort, as discussed in Lesson 4, to ensure that critical antivirus services and processes are running.

Ensure that threat definitions are regularly updated: Without up-to-date threat definitions, antivirus software will be of little use.

Ensure that workstations have antivirus software installed: As part of a defense-in-depth approach, ensure that all workstations have the appropriate version of antivirus software installed and that it's up to date.

Educate users about threats and proper responses: Users who are educated about what to do, and more importantly what not to do, when they receive a suspicious email or file become a valuable part of your overall antivirus protection strategy.

Protecting email systems

Now that you've explored some of the basics of email system spam and virus controls, you're ready to learn methods of email system security. Read on.

Securing your email system, in general, includes some of the same practices applied to preventing spam, as well as the general network security practices described in Lesson 2. Some email-specific security issues and procedures are covered in this section.

Defining and enforcing email use policies

Educating your end users and enforcing pertinent email rules and guidelines are essential to securing

your email system. As mentioned previously, your email policy should include rules for deleting and archiving emails, and acceptable use policies such as not opening email attachments and not replying to spam. In addition, your email policy should prohibit: Every policy should include a privacy and confidentiality clause stating that information contained in organizational emails is privileged and belongs to the company. This includes trade secrets or any other information that, if released, would result in security being compromised and loss of profits.

Unprofessional language, including racist and sexist remarks, profanity or any other offensive language Circulation of jokes, chain letters or other similar material Use of email for a personal business or other individual gain, while still allowing occasional personal usage that doesn't interfere or conflict with the organization and the employee's job function

Configuring email systems for security

Some of the additional methods you can use to protect your email systems are:

- Keep all email system security patches current to prevent a malicious person from exploiting a known vulnerability.
- Configure your firewall to detect distributed denial of service (DDoS) attacks to prevent a hacker or cracker from bringing down your email system.
- Keep email systems separate from other server systems, such as web servers and file sharing servers. In small organizations, it seems to make sense to use one physical server to provide multiple services to the network. However, if an intruder compromises one service, they potentially have access to all services running on the server.

Protecting remote access to email

Remote users, such as those who travel for business or telecommute, need to access their emails. However, this access adds another layer of server and network vulnerability. Implement these methods to protect remote access to email (a small verification sketch follows the list):

- Implement encrypted web mail options for users, such as Exchange Server's Outlook Web Access (OWA) functionality.
- Ensure that all email traffic is encrypted using the strongest form of encryption available.
- Limit who can connect to your email systems by requiring remote users to connect to the internal network using virtual private networking (VPN).
- Require valid email users to authenticate to the network before accessing their emails.
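To spot-check that your gateway refuses to relay for outsiders, you can probe it with Python's standard smtplib. This is a hedged sketch, not a substitute for a full audit; the host name and the two addresses are placeholders you'd replace with your own gateway and with sender/recipient domains that are external to it.

import smtplib

HOST = "mail.example.com"   # placeholder: your gateway's name or IP

with smtplib.SMTP(HOST, 25, timeout=10) as smtp:
    smtp.ehlo()
    smtp.mail("probe@example.net")                 # outside sender
    code, resp = smtp.rcpt("victim@example.org")   # outside recipient
    # A 250/251 reply here means the server agreed to relay for strangers.
    if code in (250, 251):
        print("WARNING: server accepted an outside relay request")
    else:
        print("Relay refused as expected:", code,
              resp.decode(errors="replace"))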

Testing your email security methods

Test any security method you want to implement before rolling it out to the production environment. You can't afford to assume your security system will work, no matter how well you think you've designed it. Several testing methods were covered in Lesson 2, including vulnerability scanning, penetration testing, virus detection, intrusion detection and password cracking. You can also perform integrity checks on emails to ensure a message you've received is the same one that was sent to you.
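A simple way to illustrate the integrity-check idea is to compare cryptographic fingerprints of the sent and received message bodies. In practice you'd rely on signed email (S/MIME or PGP); this minimal sketch only demonstrates the hashing principle, and the message bytes are made up for the example.

import hashlib

def fingerprint(message_bytes):
    # SHA-256 of the exact bytes; any alteration changes the digest.
    return hashlib.sha256(message_bytes).hexdigest()

sent = b"Quarterly results attached."
received = b"Quarterly results attached."
print("unaltered" if fingerprint(sent) == fingerprint(received)
      else "tampered")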

Moving on

In this lesson, you explored email management, covering mail storage and access issues, minimizing spam and securing your email systems. In Lesson 6, you'll learn about creating and implementing a data disaster recovery plan for your business. Before moving on, complete the assignment and take the quiz for this lesson.

Assignment #5

Lesson 5 refers to three U.S. laws or regulations that impact the use and operation of an email system:

- 17 CFR 240.17a-4, commonly referred to as SEC Rule 17a-4
- Sarbanes-Oxley Act
- HIPAA

Use a search engine to visit the links provided in Lesson 5 to determine how these regulations affect the position of a senior network administrator. Consider these questions:

1. Does this regulation affect all businesses or only certain types?
2. How, in general, are emails to be stored?
3. Which facets of this regulation are significant to your business?

Keep notes while reviewing information about the regulations that impact your organization and cite your source(s). Use this information to refine your organization's email policy, if necessary.

Quiz #5
Question 1: Which of the following require tougher accounting, reporting and certification rules around the financial details of a publicly traded company in the United States?
A) SEC Rule 17a-4
B) HIPAA
C) Sarbanes-Oxley Act
D) SEC Rule 17a-3

Question 2: Which of the following are effective ways to minimize or prevent the receipt of spam mail by your end users? (Check all that apply.)
A) Prohibit end users from replying to spam mail.
B) Limit open relay on the mail server.
C) Filter spam on the mail server.
D) Filter spam on the client.

Question 3: True or False: You don't need to invest in multiple layers of antivirus controls because the predominant threat to email systems today is from spam.
A) True
B) False

Question 4: Which of the following are problems that can plague an email server? (Check all that apply.)
A) DDoS attacks
B) Viruses
C) Spyware
D) Exploitation of a known vulnerability

Creating and implementing a data disaster recovery plan

The best way to recover from a disaster is to be prepared before it happens. In this lesson, you'll learn how to create a data recovery plan that'll have your critical systems up and running as quickly as possible after a disaster strikes.

Welcome back. Lesson 5 helped you understand email management, storage and access issues, and how to protect your email systems. This lesson, the last in this class, helps you understand data protection. You'll start by reviewing the essentials of data backups, and then move on to creating and implementing a data disaster recovery plan for your business. Accidents and disasters will happen periodically, and you're responsible for ensuring your systems and network are ready when they occur. You can read a newspaper or watch TV on any given day to learn about the latest natural disaster, incident of data theft or some other problem that could affect your IT environment.

Exploring data backup fundamentals

The main purpose of backing up data is to have the ability to recover it later, if necessary. Your goal should be to make data restoration as easy, quick and repeatable as possible, which requires careful planning. Before moving into the more advanced topics of this lesson, let's spend some time reviewing core concepts of backup and recovery.

Regular backup

There's a reason tape backup is so popular: it's affordable, reliable, scalable for any size business, and can be programmed for scheduled unattended backup, making it the ideal choice for your business.

HP DAT tape drives

Basic backup types

Most commercially available data protection software applications rely on four basic types of backup operations. Although some of the larger and more complex applications blend them together to create additional options, the four basic backup operations are:

- Full: Backs up every file and folder in the configured backup scope to storage media, such as tape or disk. Full backups usually take the longest amount of time to complete but result in shorter recovery times. The status of the archive bit isn't considered when a full backup processes the file system during its run; however, a full backup usually resets the archive bit on files that are backed up.
- Incremental: Backs up files that have changed since the last backup of any type. Incremental backup jobs tend to complete more quickly because they're smaller than full backups. However, restoring incremental backups can take significantly longer depending on how many days it's been since the last full backup.
- Differential: Backs up files that have changed since the last full backup. Differential backups take longer than incremental backups as the number of days since the last full backup increases. However, data restoration takes less time because you only need the last full backup and the last differential backup. A differential backup doesn't clear the archive bit on files, which results in the same files being backed up each time the differential backup runs.
- Copy: Similar to a full backup, but the archive bit isn't changed when a copy backup is performed. This enables you to run a special copy backup at any time without disturbing your normal backup tape rotation. A copy backup isn't usually part of a data protection schedule. Instead, you'd ordinarily perform a copy backup immediately before and after a planned maintenance event that makes changes to a server system (for example, the installation of a service pack or other operating system updates).

This is a brief review of backup types; it assumes anyone at the advanced networking stage knows the details that have been omitted.

A fifth type of backup, which is the most comprehensive, is the continuous backup. Files are constantly backed up as changes are made, usually to a storage system over the network or the internet. Historically, only mission-critical data, such as a customer ordering system, was deemed necessary for continuous backup. However, with the price of continuous backup solutions becoming very affordable, some businesses are switching to continuous backups for all business data.
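To make the incremental idea concrete, here's a minimal Python sketch that selects files changed since a recorded timestamp. Commercial products track the archive bit or a file-system change journal instead of modification times, and the directory and marker file names here are placeholders.

import os
import time

STATE_FILE = "last_backup_time.txt"   # placeholder marker file
BACKUP_ROOT = r"D:\data"              # placeholder directory to protect

def files_changed_since(root, since):
    # Walk the tree and keep files whose modification time is newer
    # than the last recorded run, as an incremental job would.
    changed = []
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > since:
                changed.append(path)
    return changed

try:
    with open(STATE_FILE) as f:
        last_run = float(f.read())
except FileNotFoundError:
    last_run = 0.0   # no marker yet: select everything, like a full backup

for path in files_changed_since(BACKUP_ROOT, last_run):
    print("would back up:", path)

with open(STATE_FILE, "w") as f:
    f.write(str(time.time()))   # the next incremental starts from here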

Backup considerations

The following sections address key considerations when planning a backup strategy.

How much time do you have to perform the backup?

Several factors affect the amount of time required to perform a backup:

- The amount of data you need to back up
- The type of backup media you select (LTO-4 versus disk-to-disk-to-tape, for example)
- The type of backup (full, incremental, differential, copy or continuous)

How much time do you have to restore data?

After a disaster strikes or an employee accidentally deletes an important file, how much time do you have to recover the data? This value, in hours, is referred to as the recovery time objective (RTO). Your ability to meet the RTO will be contingent on several factors, including:

- The amount of data to be restored: Do you have to restore a single file or an entire data store?
- The number of restores to be performed: Do you have to restore one full backup plus three incrementals, or just one differential?
- The type of backup media in use: Are you restoring from physical media or over the network? Newer tape media such as LTO-4 work with high-speed drives that greatly reduce the amount of time required to restore data.
- The location of the backup files: Is the data on media physically located in an offsite storage location?

How much data is acceptable to lose if you need to restore?

It's difficult to protect every change to every document on every server system at all times because backup jobs generally run on a set schedule, most commonly once per day. If an employee creates or changes a document after the backup has run and then damages or loses that document before the next scheduled backup, the work might not be recoverable.

The recovery point objective (RPO) is a measure of the acceptable amount of data you can lose, in hours. For example, your organization might define its RPO as 2 hours, which means you must restore data to within 2 hours of any disaster striking. Data created or modified within that 2-hour range is considered an "acceptable loss." The RPO helps you determine how often you need to perform backups. Your RPO might require you to always perform a full backup or a continuous backup instead of a full backup and incremental backups, especially for databases such as Microsoft SQL Server, Exchange Server and Oracle. A continuous backup ensures recovery to any specified point in time.
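As a back-of-the-envelope illustration, the sketch below turns RPO and RTO targets plus a measured restore rate into a quick schedule check. All four figures are assumptions standing in for your own measurements.

rpo_hours = 2              # assumed acceptable data loss
rto_hours = 4              # assumed acceptable downtime
data_gb = 200              # assumed size of the restore set
restore_gb_per_hour = 80   # assumed measured restore throughput

# The backup interval can't exceed the RPO, or losses exceed the target.
print("Back up at least every", rpo_hours, "hours")

estimated_restore_hours = data_gb / restore_gb_per_hour
verdict = "meets" if estimated_restore_hours <= rto_hours else "misses"
print("Estimated restore: %.1f hours (%s the RTO)"
      % (estimated_restore_hours, verdict))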

What type of media can you afford to use?

As mentioned previously, different media types have different speeds and capacities. The fastest backup media type is disk, used in disk-to-disk backup systems. Many companies use the disk-to-disk backup method, and then put the backup data on tape media for long-term storage as well as remote offsite storage. Regarding tape media, LTO-3 and LTO-4 are popular types used today. LTO-4 media is newer than LTO-3, is faster, has more capacity and offers 256-bit Advanced Encryption Standard (AES) encryption of the data on the tape when used with a compatible tape drive.

Backup schedules

When planning your data protection solution, you also need to determine which type(s) of backup scheme, or schedule, to use. Implementing multiple backup schemes introduces additional complexity into your plan, but it's sometimes unavoidable if you run a heterogeneous environment. There are two basic schedule types, although there are many more variations of these two types.

The type of media you use to hold your backups impacts your RTO by enabling restores to occur more quickly, and your RPO by enabling you to perform backups more often.

Five-tape system

The five-tape system is one of the simplest backup schedules. You run a full backup every Friday night, allowing it to run into Saturday to complete if required. You then create incremental, differential or full backups Monday through Thursday, depending on how long the backup job runs. Because this schedule gives you only one week of backup history, it isn't suitable for mission-critical systems. It's recommended you increase the number of backup tapes to 20 or 25, giving you four to five weeks of backup history.

Grandfather-father-son

The grandfather-father-son (GFS) backup system is also known as the monthly-weekly-daily system and uses three different sets of backup tapes:

- Use daily tapes, Monday through Thursday, to perform differential or incremental backups, although full backups can be performed on smaller systems.
- Use a weekly tape each Friday to perform a full backup of the system.
- Use 12 monthly tapes, on the last Friday of each month, to perform a full backup of the system.

You can expand the GFS system in many ways; the most common is to perform daily backups on Friday as well, and weekly backups on Saturdays except for the last Saturday, when the monthly backup is performed. Using the GFS system gives you a full year of backup history at the specified points in time.
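If you script your own rotation, a date can be classified into the GFS sets programmatically. This minimal sketch assumes the schedule described above: monthly on the last Friday, weekly on other Fridays, daily Monday through Thursday.

from datetime import date, timedelta

def gfs_tape(d):
    # Friday is weekday 4; it's the month's last Friday if the
    # following Friday falls in the next month.
    if d.weekday() == 4:
        if (d + timedelta(days=7)).month != d.month:
            return "monthly (full)"
        return "weekly (full)"
    if d.weekday() < 4:   # Monday through Thursday
        return "daily (incremental or differential)"
    return "no scheduled backup"

print(gfs_tape(date(2008, 6, 27)))  # last Friday of June 2008 -> monthly (full)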

Planning for problems and setting a backup and restore plan into motion is part of a network administrator's responsibilities. In the next section, you'll look into what it takes to lay a foundation for a backup plan.

Laying the foundation for backups

The foundation for an effective backup strategy begins with centralized storage. You must ensure that all critical business data is stored on a centralized server rather than on local PC hard drives. Administrators in some small office environments install individual tape devices on PCs and direct end users or power users to conduct daily backups. However, this places too much responsibility on the end user. Without complete assurance that all business data is being backed up, you might have holes in your backup strategy and thus your recovery policy. In the event of a disaster, some critical data could be lost forever, impacting the company's ability to do business and maintain profitability.

In a Windows Active Directory environment, you can rely on a centralized backup solution for client data by setting Active Directory group policies to redirect data from local storage on the PC to your centralized server. This is a transparent process that requires no intervention by the user. You can redirect your end users' Documents folder to a folder on the central server, ensuring that all user data is backed up when the server is backed up. Mobile users can use the Offline Files feature in Microsoft Windows XP and Windows Vista to cache a copy of the network file on their hard drives; the cached copy is automatically uploaded to the network share once the user connects to the network.

Software to manage your server

HP OpenView software makes it easy to manage performance as well as set up recovery and backup for your servers.

HP StorageWorks Data Protector Express software

Preserving network availability during backups

Generally, backups require a significant amount of bandwidth that can inhibit other forms of network traffic. This is one of the main reasons why backups are scheduled during off hours when few or no users are accessing the network. A common problem is that, as business needs increase in a multi-shift environment or in a company with international customers in different time zones, the window of availability for backups shrinks. One solution is to create an isolated network just for backups. You could multihome your servers so each has a second network adapter used to connect to the backup network. The drawback is the expense involved in implementing a separate network segment just for this purpose, but it leaves the rest of the LAN relatively unaffected.

As you read previously, conducting incremental backups takes less time than conducting differential backups, but the trade-off is that the restore process takes longer. Consider combining incremental backups with multistreaming, backing up to multiple tapes simultaneously, to reduce the backup window.

Using a single backup program strategy

Use a single program to perform backup functions across all hardware platforms (servers, desktops and notebook PCs) as well as application and database data. This program should also accommodate mobile backup functions, disaster recovery planning, data archiving and multiple backup methods. Select a program you can access over the intranet using a web interface for ease of use. You can then manage these operations onsite or remotely to control backup operations and handle emergencies wherever they arise.

Instituting a compliance plan

As you learned in Lesson 5, there are many laws and regulations that could have an impact on your data and how it's stored. The burden falls on you and your legal department to develop a data storage and backup policy that complies with all relevant and current laws. If your data storage methods are out of compliance, an audit could result in substantial fines brought against your company. Ensure your plan is in compliance, and when you make changes to the system, verify that the changes haven't adversely affected how data storage is properly and legally managed.

Delegating responsibility

Someone on your staff must be responsible for managing the backup and restore system at all times. If the responsible party is ill or on vacation, ensure that a substitute person is always in place to attend to changing and cycling tapes, and more importantly, emergency restore functions.

In a larger environment, you could rotate backup and restore responsibilities among all IT staff members, with one person in charge of checking the status of each person's work. Another method is to create a backup and recovery team that divides monitoring responsibilities among themselves and, in a crisis, works together to restore the network. IT departments are busy places, and a lack of organization and planning can result in details being overlooked. In addition, a disaster or incident can occur at any time, not just during business hours. At least one trained IT staff member should always be on call if a problem arises in the middle of the night or on the weekend. Such occurrences should be rare, but when they happen, you should always have someone available to respond to a crisis.

Even if, initially, your system performs at peak level, it's unlikely that it'll stay that way. System performance tends to degrade over time, and your backup system requires general maintenance and troubleshooting. In the next section, you'll take a look at some of the ways you can optimize performance.

Optimizing backup performance

Once you've established the foundation for your backup and restore plan, there are still a number of tasks you can perform to optimize backup performance.

Leveraging current infrastructure

Databases have a habit of growing with amazing speed and, if insufficiently monitored, can consume storage capacity at an alarming rate. Therefore, it's prudent to monitor end users' data storage allotment. In addition, you might need to limit storage to necessary business-related data only.

Use load balancing between your current backup servers to manage resources and measure backup server throughput to monitor performance. Although redundancy is good in terms of fault tolerance, a poorly designed system can result in undesirable and unintended redundancy. Analyze your current system and eliminate any redundancy that doesn't fulfill your desired purpose. Finally, adjust your system if backups are running too slowly across the network. Even under ideal circumstances, you have a backup window to stay within, and timing is crucial.

Meeting future challenges

No matter how well you optimize your system, your needs eventually grow. To avoid future problems, budget for and purchase additional capacity that matches your projected growth in data size. Rather than forcing you to react in crisis mode when you realize you can't back up all of your data, effective planning will keep your data protection systems in step with the systems they're supposed to be protecting.

When increasing your storage capacity by adding more hard drives or additional backup servers, your network infrastructure must also be able to manage the load. If you added 150 new users to the main office recently and your branch offices are also growing, you need to supply network backups with sufficient bandwidth so that all of the data can reach the backup servers and be stored in a timely fashion. Even when backing up in off-peak hours, you still have only so many hours to conduct the backup. Because of other maintenance tasks, you can't depend on 100-percent processing power and network availability being dedicated to backups. Upgrade your hardware and network infrastructure as needed to provide sufficient capacity to back up the growing amount of business data.
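A quick arithmetic check, with assumed figures, shows whether a backup window can absorb projected growth. Substitute your own data size, window and measured throughput.

data_gb = 600        # assumed total data to move each night
window_hours = 6     # assumed off-peak backup window
usable_mbps = 400    # assumed realistic throughput after protocol overhead

# Convert GB over the window into megabits per second.
required_mbps = (data_gb * 8 * 1024) / (window_hours * 3600.0)
print("Required throughput: %.0f Mb/s" % required_mbps)
print("Fits the window" if required_mbps <= usable_mbps
      else "Upgrade the network or shrink the job")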

Securing data backup and storage

Your data can be at risk of interception and theft during the backup process and while in storage. This is especially true if you back up across public lines, such as from a branch office server to a server at the main office. To protect data in transit, the best method is to use Internet Protocol Security (IPSec) over a virtual private network (VPN) tunnel. Even if the data is intercepted, it's encrypted and can't be read.

In addition, encrypting data in storage protects it if your storage medium is compromised, whether on the hard disk of a backup server or if your portable backup media falls into the wrong hands. Even if you take these precautions, if you suspect your data has been compromised, report the incident to law enforcement as well as the appropriate business managers and your legal department.

To maintain the security of backup media, keep your portable storage in a tape vault or some other secure location. Smaller businesses with limited budgets can use a safety deposit box or offsite safe. The location must be readily accessible to authorized staff should the media be needed for a recovery procedure.
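For encrypting backup files at rest, one option is symmetric encryption in a short script. This sketch assumes the third-party Python "cryptography" package is installed, and the archive name is a placeholder; in production, storing the key securely (never alongside the backup media) matters more than the few lines of code.

from cryptography.fernet import Fernet  # third-party 'cryptography' package

key = Fernet.generate_key()  # store this key safely, apart from the media
cipher = Fernet(key)

# Reads the whole archive into memory; fine for a sketch,
# but chunk the file for real-world archive sizes.
with open("backup.tar", "rb") as f:      # placeholder archive name
    ciphertext = cipher.encrypt(f.read())

with open("backup.tar.enc", "wb") as f:
    f.write(ciphertext)

# Recovery later requires the same key:
# plaintext = Fernet(key).decrypt(ciphertext)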

Storage media management

Managing your backup tapes is more than just switching them out and making sure they're properly stored. There are a number of Layer 1 issues that come with using storage media repeatedly over long periods of time.

Dirty or damaged tapes and tape drive heads

Dirty tapes or drive heads can introduce errors into your backed-up data, impairing your ability to perform a complete and accurate restore. Worse, garbage could be written to your storage media instead of data, resulting in the loss of everything. Not only do tapes get dirty, but they can also become creased or otherwise damaged, resulting in lost data.

Tape wear

Pay attention to the manufacturer's recommendations. If a tape is rated for 1,000 recordings, don't stretch it beyond that; the risk of losing your data is too great. Buy new tapes and retire older media securely, making sure it's stored in a locked safe or destroyed. Don't just throw old media into the nearest dumpster where anyone can access it.

Long-term storage

If you intend to keep backed up data over the long term, make two copies in case one becomes lost or damaged. If you have data stored long-term on outmoded or obsolete media, transfer the data to a more modern storage method.

Testing data restoration

Now that you've optimized your backup plan for your needs, you still need to know if it actually works. Waiting until a disaster strikes to test the plan is inviting more problems than you ever want to have. Read on.

Set a regular schedule to test your recovery system so you're prepared if a disaster strikes. Also, remember that you're not just testing whether the system works but how quickly it works. How long can your business afford to remain offline without access to critical data? The predetermined values of RTO and RPO will come into play.
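One concrete test to fold into that schedule is verifying restored files byte-for-byte against the originals. This minimal sketch compares SHA-256 digests; the file paths are placeholders for your own data and restore locations.

import hashlib
import os

def sha256_of(path):
    # Hash in 1 MB chunks so large files don't exhaust memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder pairs of (original, restored) paths to compare.
pairs = [(r"D:\data\orders.db", r"E:\restore\orders.db")]

for original, restored in pairs:
    ok = sha256_of(original) == sha256_of(restored)
    print(os.path.basename(original),
          "verified" if ok else "MISMATCH: investigate")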

Putting the plan in writing

The first part of testing data restoration is developing and documenting a plan. Start by going into problem-solving mode and try to anticipate everything that could possibly go wrong, including worst-case scenarios. Use that information to develop different sections of the plan.

Whereas backup and restore tends to focus on network servers, in a disaster, multiple parts of your network infrastructure can fail or at least be impaired. A router can malfunction, for example, or a network conduit can be broken. Your recovery plan should take into account all of the different aspects of the overall system and how to respond when faults occur. One method is to create an overall disaster management plan that addresses aspects of recovery after a disaster, with the following individual sub-plans:

- Systems: Covers handling of server faults and general restoration of data, applications and services
- Network: Focuses on bringing up internetworking devices, such as routers and switches
- Communications: Coordinates how different organizations are contacted, such as law enforcement, company management, hazardous materials personnel and the Federal Emergency Management Agency (FEMA), if necessary

The most common reason for performing a data restoration is the accidental deletion of files by users.

If your recovery depends on a restoration of a full backup and one or more incremental backups, the failure of any one of those backups will cause you to miss your RPO.

These different parts of the plan can easily map to different teams in a larger organization.

As your network changes and grows, so must your plan. Conduct periodic reviews of your backup and recovery plan to keep it current.

Testing the team

When delegating responsibilities, don't forget to assign the task of performing a recovery in the event of a disaster. Depending on the size of your firm, that task could fall to a single individual or to a team. Even in a small company, you might decide that most or all of the IT staff should be involved in the recovery process, depending on the scope of the emergency. When you test the team, you're testing how well it implements the recovery plan and how the members of the team mesh in their tasks. If one team member has successfully corrected the hardware fault, does she have to wait for the tapes to be made available to initiate the recovery? If the tapes and the servers are ready, is there a delay in restoring the correct configuration files to the local switch? Testing the team is like running a fire drill. You not only find out how well they work together, but also where the faults and gaps are in performance and, to some degree, the plan itself.

After you've gathered your data and found where you can make improvements, you should amend the plan and run your drill again.

Running test levels

Because you can face different types of disasters, you should run different types of tests. One of the most common tests is restoring data from tape in the event of a data loss. Any test you run must be conducted in off-peak hours when few or no end users are on the system. Planning for the occasional weekend testing "party" is a small price to pay for the relative security of knowing your recovery plan works. Beyond restoring data to a server, you can also introduce issues to different parts of your system and see how quickly those issues are addressed. Depending on how extensive you want to be, you can announce where the problem lies or allow your staff to attempt a diagnosis based on certain symptoms you announce.

Ensure everyone knows their role ahead of time so they can participate efficiently in testing the recovery plan. Also, although backups are conducted regularly, recovery operations are rarely performed, so make sure your staff is familiar with how to perform a server recovery and deal with all equipment and network connectivity.

Moving on

In this lesson, you learned how to create and test a data backup plan, and picked up tips for restoring data, systems and your network after a disaster. Throughout the class, you learned best practices and many advanced systems and network administration techniques. Before moving on, don't forget to complete the assignment and take the quiz for this lesson. We hope you found this class useful and would like to explore additional HP Learning Center classes, quick lessons and how-to demos.

Assignment #6

For this assignment, you'll work on a protection plan for your company's data, systems and network. Follow these steps:

1. If you already have a protection plan in place, review the most recently updated version. Note any information that's outdated, and suggest modifications to make it more comprehensive.
2. If you need to create a plan from scratch, first consider all of the threats to your data, systems and network. Then compose a rough draft that addresses backup and recovery tasks for the following key areas:
- Systems: Cover how to handle server faults and general restoration of data, applications and services.
- Network: Focus on bringing up internetworking devices, such as routers and switches.
- Communications: Coordinate how different organizations are contacted, such as law enforcement, company management, hazardous materials personnel and FEMA, if necessary.
3. (Optional) Visit well-known vendor websites such as Symantec.com, McAfee.com and HP.com. Review the backup and restore solutions they offer, and compare them to the solution(s) you're currently using. Are there new features or an entirely new solution you'd like to implement on your network? What are the cost and infrastructure requirements? Compose a memo to the decision-makers in your company supporting your request for the new acquisition.

Quiz #6

Question 1: What are some common elements of a backup plan? (Check all that apply.)
A) Increasing the speed of backups to optimize network availability
B) Selecting a backup media scheme
C) Using multiple types of backup software to manage multiple types of hardware platforms and data types
D) Redirecting user Documents folders to a directory on a central server

Question 2: Which of the following are appropriate methods to ensure staff is assigned to monitor backup and restore operations? (Check all that apply.)
A) Having the senior administrator take sole responsibility for backup and restore operations
B) Assigning a single staff person to be responsible for monitoring backup and restore operations, but having other staff positioned to take over if the primary person becomes ill or is on vacation
C) Rotating backup and restore responsibilities among IT staff
D) Creating a backup and recovery team that divides monitoring responsibilities among themselves and, in a crisis, works together to restore the network

Question 3: What's the most common reason for conducting a data recovery operation?
A) Accidental deletion of data
B) Server hard drive crash
C) Power failure
D) Natural disaster

Question 4: True or False: When backing up data across public lines, use IPSec over a VPN tunnel to ensure security.
A) True
B) False
