
1.0 Plan and Design
1.1 Network Design
1.1.1 Internet Access
1.1.1.1 Device research
1.1.1.2 Speed requirements
1.1.2 Branch site design
1.1.2.1 Device research
1.1.2.2 Voice infrastructure design
1.1.3 Site connectivity
1.1.3.1 Connectivity requirements
1.1.4 HQ Office and Manufacturing Design
1.1.4.1 Device research
1.1.4.2 Wireless design/requirements
1.1.4.3 Distribution/Access design
1.1.4.4 Voice infrastructure design
1.1.5 Datacenter Network Design
1.1.5.1 Device research
1.1.5.2 Network core design
1.1.5.3 Distribution/Access design
1.1.5.4 Device management design
1.1.5.5 Voice infrastructure design
1.1.6 Security Plan
1.1.6.1 DMZ Security Requirements
1.1.6.2 Device research
1.1.6.3 Isolated device eligibility
1.1.6.4 Internet edge security design
1.1.6.5 Datacenter/Branch security design
1.1.7 Service Provider Research
1.1.7.1 Available service providers in each region
1.1.7.2 Common speeds available for business class cable and DSL
1.1.7.3 Availability of fiber providers
1.2 Systems Design
1.2.1 Server Requirements
1.2.1.1 Performance requirements
1.2.1.2 Server research
1.2.2 SAN Requirements
1.2.2.1 Storage needs
1.2.2.2 Connectivity requirements
1.2.2.3 Expandability
1.2.2.4 Device research
1.2.3 Terminal Services & File Server Design
1.2.3.1 Terminal Services design
1.2.3.2 Terminal Services performance requirements
1.2.3.3 Domain Controller placement/design
1.2.3.4 File server location
1.2.3.5 File share design
1.2.3.6 DFS design
1.2.4 VMware Design
1.2.4.1 vCenter/vMotion design
1.2.4.2 VM backup requirements
1.2.4.3 VMware software research
1.2.5 Workstation Requirements
1.2.5.1 Remote worker workstation research
1.2.5.2 Administrative workstation research
1.2.5.3 Manufacturing workstation research
1.2.5.4 Printer requirements
1.2.5.5 Printer locations
1.3 Software Requirements
1.3.1 Productivity Software
1.3.1.1 Productivity Software availability requirements
1.3.1.2 Productivity research
1.3.2 Helpdesk Software
1.3.2.1 Feature requirements
1.3.2.2 Employee numbers
1.3.2.3 Helpdesk research
1.3.3 Imaging Software
1.3.3.1 Workstation models
1.3.3.2 Workstation numbers
1.3.3.3 Workstation locations
1.3.3.4 Imaging Software research
1.3.3.5 Imaging process/design
1.3.4 Backup Solutions
1.3.4.1 VMware backup requirements
1.3.4.2 Exchange/DC backup requirements
1.3.4.3 SQL Server backup requirements
1.3.4.4 VM Backup software research
1.3.4.5 Exchange/DC backup software research
1.3.4.6 SQL Server backup software research
1.4 RFP Response
1.4.1 RFP Response Draft
1.4.1.1 Network Design Draft
1.4.1.2 Systems Design Draft
1.4.1.3 Software Requirements Draft
1.4.2 RFP Response Final
1.4.2.1 RFP Response Final Copy
1.4.2.2 RFP Response Final Editing
1.4.2.3 RFP Response Submission
1.5 Project Plan
1.5.1 Project Plan Draft
1.5.1.1 Project Scope Draft
1.5.1.2 Risk Register Draft
1.5.1.3 Project Schedule Draft
1.5.1.4 Work Breakdown Structure/Gantt Chart Draft
1.5.1.5 Stakeholder Management Plan Draft
1.5.2 Project Plan Final
1.5.2.1 Project Scope Final
1.5.2.2 Risk Register Final
1.5.2.3 Project Schedule Final
1.5.2.4 Work Breakdown Structure/Gantt Chart Final
1.5.2.5 Stakeholder Management Plan Final
1.5.2.6 Project Plan Assembly
1.5.2.7 Project Plan Final Review/Edit
1.5.2.8 Project Plan Submission
2.0 IP Address Scheme
2.1 Register for public addresses with ARIN
2.1.1 Contact ARIN for address allocation
2.1.2 Get address costs
2.1.3 Arrange payment for address allocation
2.2 Choose Internal Address Range
2.2.1 Address requirements
2.2.2 Subnet sizing
2.2.3 Summarization points
2.2.4 Summarization addresses
2.3 Subnet/Supernet Ranges
2.3.1 Subnet range final allocation
3.0 VLAN/Switch Layout
3.1 Reference VLAN requirements from planning stage
3.2 Identify VLAN ranges
3.2.1 Management VLAN requirements
3.2.2 Internet edge VLAN assignment
3.2.3 Branch/Sales office VLAN assignment
3.2.4 HQ/Manufacturing VLAN assignment
4.0 Site to Site Communication Plan
4.1 Service provider negotiations
4.1.1 SP initial contact
4.1.2 SP contract negotiation
4.1.3 SP contract finalization
4.2 Site to Site Security Plan
4.2.1 Identify inter-site communications requirements
4.2.2 Security device RFQ Process
4.2.3 Security device purchases
4.2.4 Security device installation
4.3 Site to Site Voice Plan Finalization
4.3.1 Confirm voice plan
4.3.2 Voice gear RFQ Process
4.3.3 Voice gear purchases
4.3.4 Voice gear installation
4.4 Site to site Design Plan Finalization
4.4.1 Site to site RFQ Process
4.4.2 Site to site gear purchases
4.4.3 Site to site gear installation
4.5 Service Provider CPE Installation
5.0 Communication Backup Plan
5.1 Identify Communication Failure Points
5.1.1 Isolate communication failure points
5.2 Identify Communication Backup Paths
5.2.1 Communication backup plan implementation
5.3 Create Communications Failover Process
5.3.1 Comms failure process draft
5.3.2 Comms failure process editing
5.3.3 Comms failure process final submission
6.0 Network Security Design
6.1 Internet Border Security Configuration
6.1.1 Implement Security/Device Configuration on Internet Border devices
6.1.2 Confirm configuration/test
6.1.3 Configure Remote Access firewalls
6.2 Branch Site/Sales Office Security Configuration
6.2.1 Branch Site/Sales Office termination device configuration
6.2.2 Confirm configuration/test
6.3 Site to Site Security Configuration
6.3.1 Implement site to site VPN and communication configuration
6.3.2 Confirm configuration/test
6.4 Datacenter Security Configuration
6.4.1 Implement and configure datacenter hardware
6.4.2 Implement security design
6.4.3 Confirm configuration/test
6.5 HQ Office & Manufacturing Security Configuration
6.5.1 HQ Office device configuration
6.5.2 HQ Office security design implementation
6.5.3 Confirm HQ configuration/test
6.5.4 Manufacturing device configuration
6.5.5 Manufacturing security implementation
6.5.6 Confirm Manufacturing configuration/test
7.0 DHCP Design
7.1 Identify Dynamically Addressable Network Segments
7.1.1 Allocate address ranges
7.2 Identify DHCP Server Locations
7.3 Identify DHCP Options
7.3.1 Identify Default Gateways
7.3.2 Identify DNS Servers
7.3.3 Identify WLCs
7.3.4 Identify Windows KMS Activation Servers
7.4 Setup and Configure DHCP Servers
7.4.1 Configure Datacenter DHCP Servers
7.4.2 Configure HQ/Manufacturing DHCP Servers
7.4.3 Configure Branch/Sales office DHCP Servers
8.0 Active Directory
8.1 Setup HQ/Datacenter Domain Controllers
8.1.1 Virtual Machine Setup
8.1.2 Domain Setup
8.1.3 Domain Controller Configuration
8.1.4 Certificate Authority Setup
8.1.5 Domain testing
8.2 Setup HQ/Datacenter Servers
8.2.1 Virtual Machine setup
8.2.2 File Server Setup
8.2.3 DFS Setup/Configuration
8.2.4 DFS Testing
8.2.5 Share Testing
8.2.6 SQL/Database server setup
8.2.7 SQL/Database server testing
8.3 Setup Branch Office Servers (File, DC, DNS)
8.3.1 Setup Branch Office Servers
8.3.2 Configure AD Services
8.3.3 Configure File Services
8.3.4 Configure DNS Services
8.4 Test and Verify Domain Functionality
8.4.1 Confirm Domain replication
8.4.2 Confirm Email Functionality
8.4.3 Confirm DFS Replication
8.4.4 Confirm SQL Server functionality
9.0 DMZ Configuration
9.1 Configure Spam and Email Servers
9.2 Configure External DNS Servers
9.3 Configure Remote Access Appliances
9.4 Configure DMZ Network Security
9.5 Configure External Web Servers
10.0 HQ Servers
10.1 Configure HQ TS Servers
10.1.1 Setup Virtual Machines
10.1.2 Install necessary applications
10.1.3 Test applications
10.2 Configure HQ Database Servers
10.2.1 Setup Virtual Machines
10.2.2 Install SQL Server
10.2.3 Confirm database operation
10.3 Configure HQ Voice Servers and Switches
10.3.1 Setup Virtual Machine
10.3.2 Setup Voice switches and T1 lines
10.3.3 Configure Voice infrastructure
10.3.4 Confirm Voice network functionality
10.4 Test and Verify Server Functionality
10.4.1 Test HQ to Datacenter functionality
10.4.2 Perform final baseline testing
11.0 Faculty Demonstration and Final Report
11.1 Faculty Demo
11.1.1 Faculty Demo Presentation Prep
11.1.2 Faculty Demo Presentation Finalization
11.1.3 Faculty Demo
11.2 Final Report Draft
11.3 Final Report Submission

IP Addressing and VLAN Scheme

Falcon Industries requires a large number of devices on the network, with the ability to easily expand that number as they see fit. Because of this requirement we will be using the 172.16.0.0/12 RFC 1918 private address range for FI's internal network. Because Interop recently returned their /8 address range, we will be requesting a portion of that range for FI's public IP range. Specifically, we will be requesting 45.0.0.0/25, which gives FI 126 usable public addresses (two addresses are lost to the network and broadcast addresses). We are allocating such a large range to ensure that there are enough addresses for a NAT overload pool of two addresses plus the various static NAT entries FI will require. The exact IP allocation and VLAN scheme has yet to be confirmed by the team but will be a top priority within the coming days.
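
As a rough illustration of how the public /25 would be consumed, the border-router sketch below shows the two-address overload pool and one static NAT entry. The interface names, ACL number, and specific inside/DMZ addresses are placeholder assumptions; only the 45.0.0.0/25 and 172.16.0.0/12 ranges come from the plan above.

    ! Hedged sketch: placeholder interfaces and addresses within the planned ranges
    interface GigabitEthernet0/0
     description ISP-facing link
     ip nat outside
    !
    interface GigabitEthernet0/1
     description Inside link
     ip nat inside
    !
    ! Two-address PAT pool drawn from FI's 45.0.0.0/25 allocation
    ip nat pool FI-PAT 45.0.0.10 45.0.0.11 netmask 255.255.255.128
    access-list 100 permit ip 172.16.0.0 0.15.255.255 any
    ip nat inside source list 100 pool FI-PAT overload
    !
    ! Example static NAT entry for a hypothetical DMZ web server
    ip nat inside source static 172.16.64.10 45.0.0.80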

Network Design

Routing
A large amount of routing will be required in the datacenter core, at the internet edge, and between the branch and sales offices. In keeping with the scalability requirements of the network, OSPF will be used within the datacenter core and out to each branch site and sales office. Because of the largely remote nature of FI's employees, more control over routing is required at the network edge. This requirement is satisfied by using BGP at the network edge between the border routers and the ISP routers. Static routing will be used between the ISP and FI, and will further be used by the internet edge Cisco ASAs. The "inside" interfaces of the ASAs will take part in OSPF and will redistribute the default route into OSPF. OSPF will also be extended over the WAN to the branch offices and sales offices; however, these will only be configured as stub areas. All routing protocols will be configured using authentication with timed key chains, rotating keys on a daily basis (a minimal key chain sketch follows this paragraph). Please see the list after the sketch for a breakdown of the network areas and the necessary routing configuration:
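
Below is a minimal sketch of one such rotating key chain. The names, secrets, and dates are placeholders, and the overlapping accept lifetimes leave room for clock skew during each daily rotation. Note that applying a key chain directly to OSPF assumes an IOS release that supports ip ospf authentication key-chain; older releases would instead use per-interface message-digest keys.

    ! Hedged sketch of a daily rotating key chain; names, secrets, and dates are
    ! placeholders, and the overlapping accept lifetimes allow for clock skew
    key chain WAN-KEYS
     key 1
      key-string <day1-secret>
      send-lifetime 00:00:00 Jan 1 2025 00:00:00 Jan 2 2025
      accept-lifetime 00:00:00 Jan 1 2025 00:30:00 Jan 2 2025
     key 2
      key-string <day2-secret>
      send-lifetime 00:00:00 Jan 2 2025 00:00:00 Jan 3 2025
      accept-lifetime 23:30:00 Jan 1 2025 00:30:00 Jan 3 2025
    !
    interface GigabitEthernet0/1
     ! Assumes a release that supports key chains for OSPF; older IOS would
     ! use ip ospf message-digest-key on the interface instead
     ip ospf authentication key-chain WAN-KEYS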

Internet Edge
Internet Edge Border Routers

- BGP accepting a partial internet routing table while advertising FI's prefixes (see the sketch after this list)
- BGP sessions configured with MD5 neighbor authentication (note that classic BGP supports a static password rather than a rotating key chain)
- GLBP will be configured on the inside interfaces of the BRs, with interface tracking
- Will consist of 2 Cisco 7206VXR routers, each with a PA-G2 network engine and 4 PA-GE 1Gbps fiber modules
- An interface on each BR will feed into the remote access firewalls, which will be assigned public IPs on their external interfaces
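
The sketch below outlines the intended BR behaviour under assumed AS numbers, neighbor addresses, and interface names; only FI's /25 comes from the plan above.

    ! Hedged sketch: AS numbers, neighbor addresses, and interfaces are placeholders
    router bgp 64512
     network 45.0.0.0 mask 255.255.255.128
     neighbor 203.0.113.1 remote-as 65001
     ! Classic BGP neighbor authentication is a static MD5 password
     neighbor 203.0.113.1 password <md5-secret>
     ! Advertise only FI's /25; the provider sends a partial table inbound
     neighbor 203.0.113.1 prefix-list FI-OUT out
    !
    ip prefix-list FI-OUT seq 5 permit 45.0.0.0/25
    !
    ! Track the ISP-facing interface so GLBP sheds load if the uplink fails
    track 1 interface GigabitEthernet0/0 line-protocol
    !
    interface GigabitEthernet0/1
     description Inside interface toward the outside aggregation switch
     glbp 1 ip 172.16.0.1
     glbp 1 priority 110
     glbp 1 weighting track 1 decrement 20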

Remote Office Aggregation Routers

- Will consist of 2 Cisco 7206VXR routers, each with a PA-G2 network engine and 4 PA-GE 1Gbps fiber modules
- Will be used primarily to aggregate the ISP connections and pass traffic to the remote office firewalls
- Will run OSPF (area 0) over the WAN, using authentication with a rotating key chain

Internet Edge Outside Aggregation Switch


- Configured purely as a Layer 2 switch utilizing a few local VLANs to aggregate traffic from the BRs, branch office/sales office firewalls, and remote access firewalls
- The BRs and inside firewalls will be configured on the same VLAN, which is a requirement for proper firewall failover
- The remote site firewalls will be configured on the same VLAN as the remote site aggregation routers, again a requirement for proper firewall failover
- 2 Cisco Catalyst 4506-E switches, each with dual 6L-E supervisor modules and dual 6-port gigabit line cards

Internet Edge Firewalls

- Default routes will point at the BR routers' GLBP virtual IP, ensuring that traffic can always be forwarded even if a router goes down, while also load balancing across both routers (see the sketch after this list)
- The "inside" interface will aggregate several inside VLANs, such as the firewall failover VLAN and the DMZ, and will participate in OSPF Area 0
- 2 Cisco ASA 5550 firewalls will be used
- Will establish and aggregate the site to site VPN tunnels, delivered over the public internet, between the sales offices and HQ
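
A minimal sketch of the intended ASA routing follows; the GLBP virtual IP (matching the border-router sketch earlier in this section) and the inside summary network are placeholder assumptions.

    ! Hedged ASA sketch; addresses are placeholders
    route outside 0.0.0.0 0.0.0.0 172.16.0.1
    !
    router ospf 1
     network 172.16.0.0 255.240.0.0 area 0
     ! Redistribute the static default toward the inside OSPF speakers
     default-information originate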

Remote Site Firewalls

- Will establish and aggregate site to site VPN tunnels between the branch offices and sales offices
- Inside and outside interfaces will take part in OSPF – Area 0
- Access between sites will be controlled at this point using access control lists
- 2 Cisco ASA 5550 firewalls will be used

Remote Access Appliances

- 2 Juniper SA 4500 appliances will be used
- Will terminate remote worker VPNs and SSL VPNs
- Will be configured to use the domain controllers for authentication
- Terminal Services applications will be made available to remote workers through the appliances
- External interfaces will reside directly on the BRs, with access made available through the BRs' NAT processes
- Internal interfaces will terminate on the Internet Edge Inside Aggregation Switch in their own VLAN, with controlled access to the HQ network

Internet Edge Inside Aggregation Switch


- Will consist of dual Cisco Catalyst 6509-Es with dual Supervisor 720 engines, 4 10Gb line cards, 2 DFC cards, and dual 48-port Gig-E line cards
- DMZ, network core, site to site VPN, and remote access VPN accessibility will all be controlled at these switches
- EtherChannels will be configured between the two switches for redundancy
- Will pass traffic from the firewalls over several VLANs as outlined above
- EtherChannels will be used between the datacenter core and IAS switches
- DMZ devices will be plugged directly into this switch, all on the same VLAN, with access controlled by ACLs and private VLANs (see the sketch after this list)
- Will participate in OSPF Routing – Area 0
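
A rough sketch of the DMZ private-VLAN arrangement follows. The VLAN IDs, addresses, and ACL contents are placeholder assumptions; note that private VLANs also require VTP transparent mode.

    ! Hedged sketch of the DMZ private-VLAN layout; IDs and addresses are placeholders
    vtp mode transparent
    !
    vlan 200
     private-vlan primary
     private-vlan association 201
    vlan 201
     private-vlan isolated
    !
    interface GigabitEthernet1/1
     description DMZ server port, isolated from neighboring DMZ hosts
     switchport mode private-vlan host
     switchport private-vlan host-association 200 201
    !
    ip access-list extended DMZ-IN
     ! Allow replies to inside-initiated sessions, block DMZ-initiated traffic inward
     permit tcp any 172.16.0.0 0.15.255.255 established
     deny ip any 172.16.0.0 0.15.255.255
     permit ip any any
    !
    interface Vlan200
     description DMZ gateway SVI, mapped to the isolated VLAN
     ip address 45.0.0.65 255.255.255.192
     private-vlan mapping 201
     ip access-group DMZ-IN in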

Switching
Based on the designs proposed in the network design stage of the project, we will be going forward with a design consisting of a mixture of Cisco Catalyst 6500, 4500, 3750, 3560, and 2960 switches throughout the company. By utilizing the 3750 series and higher switches we are able to take advantage of many of the redundancy features Cisco builds into these switches, ensuring optimal uptime within the network. The idea behind the design is that every device and path has a duplicate on the network to provide a backup path or service.

The HQ Datacenter core and office/manufacturing distribution and access layers are going to be built
using a combination of Cisco 6509-E, 4506-E, 3750X and 2960 series switches and various line cards in
the chassis switches. The sections below will outline the main configuration and technology points of
each building and network layer.

HQ Office and Manufacturing Access & Distribution


Starting with the access layer of the HQ office and manufacturing buildings, we will be using Cisco 2960S-48FPS-L switches. This model was chosen for its ability to leverage Cisco's FlexStack technology, which allows us to combine multiple physical switches into one logical switch. Stacking reduces the complexity of the Spanning Tree configuration and allows greater flexibility when configuring EtherChannel links to the distribution layer, increasing redundancy and uptime. By combining the switches we are able to build cross-stack EtherChannels, meaning that half of the uplinks can terminate on each distribution switch as one or more EtherChannels. This allows large amounts of throughput while leaving one single failover Spanning Tree path rather than multiple paths, which would increase complexity and the risk of bridging loops.
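
A minimal uplink sketch under placeholder port numbers and VLANs is shown below: one uplink from each stack member lands on each distribution switch, so each cross-stack LACP bundle survives the loss of either a stack member or an individual uplink.

    ! Hedged sketch on a 2960S FlexStack pair; ports and VLANs are placeholders
    ! Uplinks to distribution switch 1 (one member from each stack unit)
    interface range GigabitEthernet1/0/49, GigabitEthernet2/0/49
     channel-group 1 mode active
    ! Uplinks to distribution switch 2
    interface range GigabitEthernet1/0/50, GigabitEthernet2/0/50
     channel-group 2 mode active
    !
    interface Port-channel1
     switchport mode trunk
    interface Port-channel2
     switchport mode trunk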

Layer 2 Quality of Service will be applied at this level to ensure timely delivery of voice and video traffic up to the distribution layer. The switches will also provide Power over Ethernet Plus, which supplies up to 30 watts of power to end user devices and to the wireless access points that will be put in place at both the HQ office and manufacturing buildings. The access points chosen for FI's wireless deployment are Cisco 1142N WAPs, which provide dual radios and 802.11n Multiple-Input, Multiple-Output (MIMO) technology for up to 300Mbps of throughput.

The distribution layer in the HQ office and manufacturing buildings will consist of Cisco 3750G-12S-E all-fiber switches. The all-fiber switches were chosen because they are used simply to aggregate the uplinks of the access layer switches and to control routing and security between subnets at the office/building level. The 3750s also make use of Cisco's StackWise technology, which performs the same function as the FlexStack technology on the 2960s. Because of this we will again be able to combine multiple switches into one, giving us dual stacks of redundant switches with uplinks into the datacenter distribution layer switches.

The entire network team will be assigned to configure these devices, and their configuration will consist of the following technologies/protocols:

HQ Manufacturing and Office Access Layer

- A combination of local and end-to-end VLANs will be used
- Layer 2 Class of Service will be enabled on the uplink ports to the distribution layer
- Cisco Catalyst 2960S switches will be stacked together for redundancy
- Voice VLANs will extend throughout the network
- Uplinks will be configured as EtherChannels into the distribution switches

HQ Manufacturing and Office Distribution Layer

- Will participate in OSPF routing – Areas 1 and 2, configured as stub areas (see the sketch after this list)
- EtherChannel bundles will be fed up to the datacenter distribution switches
- EtherChannels will be cross-stack channels, increasing redundancy
- Access control and Layer 3 QoS (leveraging DSCP) will be configured at this point
- Fiber-based switches
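
A minimal stub-area sketch for one of these distribution stacks follows; the router ID, subnets, and uplink port-channel number are placeholder assumptions.

    ! Hedged stub-area sketch for an HQ office distribution stack (area 1)
    ip routing
    router ospf 1
     router-id 172.16.255.11
     area 1 stub
     network 172.16.16.0 0.0.15.255 area 1
     ! Keep OSPF quiet on access-facing ports; speak only on the uplink bundle
     passive-interface default
     no passive-interface Port-channel10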

HQ Datacenter Core, Distribution, and Access


The HQ datacenter distribution switches will consist of dual Cisco Catalyst 4506-Es with dual 6L-E supervisor engines. They will also contain four 12-port 10Gig-E line cards for uplink aggregation of the server farm and the HQ manufacturing/office distribution switches. The core switches will be dual Cisco Catalyst 6509-E switches with dual Supervisor 720 engines, 2 distributed forwarding line cards, and quad 8-port 10Gig-E line cards along with dual 48-port copper 1Gig-E line cards. The HQ datacenter server farm access switches will be 6 Cisco Catalyst 3750X-48P-S switches, each with a 4-port 10Gig-E uplink module and hot-swappable power supplies. Below are the configuration highlights of each switch layer:

HQ Datacenter Core Switches

- Will participate in OSPF routing – Area 0
- EtherChannels will be used at every possible opportunity
- EtherChannels between sup modules will be configured
- No policing will be configured on these switches
- Layer 3 QoS will be configured on both switches
- Sup modules will be configured redundantly, and traffic will fail over to the secondary path before the standby sup comes online

HQ Datacenter Distribution Switches

- Will participate in OSPF routing – Area 0
- EtherChannels will be used at every possible opportunity
- EtherChannels between sup modules will be configured
- Policing/access control will be configured here to control access between the HQ manufacturing and office buildings, as well as to the server farm
- Will aggregate the server farm uplinks and the HQ distribution switch uplinks in EtherChannels
- Sup modules will be configured redundantly, and traffic will fail over to the secondary path before the standby sup comes online

HQ Datacenter Access Switches

- Will participate in OSPF routing – Area 3 – stub
- Will utilize StackWise technology
- Cross-stack EtherChannels configured to each redundant distribution switch
- Private VLANs and ACLs will control access between servers
- PoE+ will be available when necessary

Service Providers
High speed connections are required between HQ and all regional sites, both for increased productivity and for offsite backups. It has been decided that Rogers and Hydro One will be used for all Eastern and Maritime connectivity, while TELUS and Bell will be used on the west coast.

All service providers will be required to offer FI a single Layer 2 100Mbps ethernet circuit with a guaranteed uptime SLA of 99.9999%. The service providers will also provide Layer 2 Class of Service over the WAN between sites, which we will then overlay with Layer 3 QoS.

Returning to FI's largely remote staff and web presence, Rogers and Hydro One will be engaged to deliver a 400Mbps internet circuit over the same CPE device that delivers the L2 circuit. The circuit will be required to support bandwidth changes from 10Mbps up to 100Mbps as needed, in 10Mbps increments. The TELUS and Hydro One lines will also be used to propagate data to the backup datacenter in Vancouver, BC. This will be done every 24 hours, at midnight, to avoid bandwidth stress on the lines and to help keep speeds optimal. It will also allow Falcon Industries to stay up to date should anything happen to their Mississauga based facility.

The backup datacenter in Vancouver will provide an exact replica of the Mississauga datacenter, but will not require Falcon Industries to make any upfront capital purchases; it will be maintained entirely by the datacenter owners.

Branch & Sales Offices
FI's branch offices will each employ approximately 35 people (with the Toronto office serving as one of the local offsite backup storage locations). In keeping with the demand for redundancy, all branch offices will be connected together with the Layer 2 100Mbps services provided by the providers listed above. However, because of the low staff numbers and remoteness of the sales offices, they will connect to HQ over the internet, with each sales office having one DSL and one cable connection. The branch offices will make use of dual Cisco 3945 ISR routers to terminate the L2 circuits and site to site VPN tunnels, as well as 2 Cisco Catalyst 3560X-48P-S switches for LAN connectivity and routing. The branch offices will also be equipped with 3 Cisco 1142N WAPs controlled by the central controllers at HQ.

The sales offices will be equipped with two Cisco 2921 ISRs, each with 2 ADSL cards and a single 48 port PoE switch module. The list below outlines the high points of the branch and sales office gear.

Branch Offices

- The branch office routers will run OSPF as stub areas – areas 5 – 9
- A default route learned from OSPF will direct all traffic over the WAN
- The LAN switch will be configured with floating static routes pointing traffic to each router (see the sketch after this list)
- The LAN switch will be configured with sticky MAC address filtering to prevent unauthorized access to the network
- No ACLs will be applied to the network at each branch site, other than to restrict device management access
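
The sketch below combines the floating static routes and sticky port security under placeholder addresses and port numbers. The second default route carries a higher administrative distance, so it is installed only when the primary next hop is lost; in practice designs often pair this with IP SLA tracking, since a failed router's LAN interface may otherwise still appear reachable.

    ! Hedged branch LAN switch sketch; addresses and ports are placeholders
    ip route 0.0.0.0 0.0.0.0 172.16.33.1
    ! Floating backup route, AD 10, used only if the primary next hop is lost
    ip route 0.0.0.0 0.0.0.0 172.16.33.2 10
    !
    interface GigabitEthernet0/5
     description User port with sticky MAC filtering
     switchport mode access
     switchport access vlan 10
     switchport port-security
     switchport port-security maximum 2
     switchport port-security mac-address sticky
     switchport port-security violation restrict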

Sales Offices

- The routers will be configured to run OSPF as stub areas
- Sales offices will be placed into the area that reflects their local branch office (so they can be in any area between 5 and 9)
- Routers will have one single 48 port PoE+ enabled switch module and 2 ADSL cards to terminate the DSL lines for MLPPP (see the sketch after this list)
- DSL lines will be bonded with MLPPP in order to match the bandwidth provided by the cable connections
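
A minimal MLPPP sketch under assumed slot numbers and a generic PPPoA setup is shown below; the PVC values, CHAP credentials, and dialer details are placeholders that would come from the DSL provider.

    ! Hedged sketch of the dual-ADSL MLPPP bundle; provider specifics omitted
    interface ATM0/0/0
     no ip address
     pvc 0/35
      encapsulation aal5mux ppp dialer
      dialer pool-member 1
    !
    interface ATM0/1/0
     no ip address
     pvc 0/35
      encapsulation aal5mux ppp dialer
      dialer pool-member 1
    !
    interface Dialer1
     ip address negotiated
     encapsulation ppp
     dialer pool 1
     ! Bond both DSL lines into one logical link
     ppp multilink
     ppp chap hostname fi-sales01
     ppp chap password 0 <placeholder>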

Wireless
Wireless is something the customer has requested, and to meet that request we will be leveraging the 1142 APs at each site along with two Cisco 4402 WLCs. The wireless networks will be available for all corporate devices, and one single public VLAN will be available at every Wi-Fi-enabled location. WPA2 in conjunction with EAP-TLS will be the security method for corporate wireless access, with WPA2 using a static password and a captive portal for the public wireless. The list below highlights the configuration of the WLCs and WAPs.
Wireless Configuration

- WPA2 + EAP-TLS enabled wireless networks – corporate access
- WPA2 + static key + captive portal for public access
- Rogue device detection enabled – no unauthorized wireless access points will be allowed in the buildings
- WLCs will be configured in a redundant fashion, with APs discovering them via DHCP (see the sketch after this list)
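
The sketch below shows one way the APs could learn the controller addresses through DHCP option 43 (type 0xF1 followed by the WLC IPv4 addresses in hex). It is shown as an IOS pool for brevity, though the production scopes may instead live on the Windows DHCP servers; all addresses are placeholder assumptions.

    ! Hedged sketch of an AP management scope; option 43 carries two WLC
    ! addresses (type 0xF1, length 0x08 = two IPv4 addresses)
    ip dhcp pool AP-MGMT
     network 172.16.40.0 255.255.255.0
     default-router 172.16.40.1
     dns-server 172.16.10.53
     ! 0xAC10.0A05 = 172.16.10.5, 0xAC10.0A06 = 172.16.10.6 (placeholder WLCs)
     option 43 hex f108.ac10.0a05.ac10.0a06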

Voice & Mobility


Falcon Industries indicated in their original RFP that they are interested in a unified voice/data network. As such, we will be utilizing our experience with ShoreTel's VoIP equipment to meet that requirement. The datacenter will require 4 ShoreTel T1K voice switches to terminate the 4 T1 lines provided by Rogers and Hydro One. Voice calls will be processed by 4 ShoreTel Voice Switch 120s. The datacenter itself will contain 4 ShoreTel IP230G phones, while the HQ manufacturing building, office building, and branch offices will also be outfitted with IP230G phones.

The voice network will be configured in a manner that does not require the voice switches to be 100% operational at all times. Once a call has been established, it is handed off to the handsets themselves to manage, so if a voice switch goes down there are four others at HQ and one at each branch site to take over.

ShoreTel also has a mobile application that we will install on all of the corporate BlackBerrys in order to extend the corporate voice network out to the mobile sales people and mobile administrative staff. Along with the BlackBerrys, we will install BlackBerry Enterprise Server to provide secure corporate email and document access to Falcon Industries' mobile staff.

The mobile staff laptops will be outfitted with 3G wireless cards to allow them to remotely access the Juniper SA 4500 VPN appliances from wherever they are, as that will be their primary means of communication. The Junipers will deploy VPN clients that launch at the start-up of each machine, making the software behave as if the machine were on site and ensuring that the appropriate security settings are still pushed out to the users no matter where they are.

Security & Network Management


We will provide FI with a centrally managed infrastructure consisting of several KVM switches with built-in monitors and keyboards, as well as several Opengear console servers, to manage all of the datacenter network infrastructure. To meet the company's security and management requirements for network devices we will also install several Syslog, monitoring, FTP, and TFTP servers, along with a dedicated network segment to host these devices, consisting of two Cisco Catalyst 3560X-48P-S switches that use static routes rather than OSPF for network reachability.

Network security will be enforced across the network, restricting access between departments and servers as well as to network management interfaces and DMZ servers. Further security requirements will be enforced using Active Directory Group Policies, security groups, and domain trusts.
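
As one example of restricting management access, the sketch below limits device logins to an assumed management subnet over SSH only; the subnet and line range are placeholders.

    ! Hedged sketch; the management subnet and line range are placeholders
    access-list 10 permit 172.16.99.0 0.0.0.255
    !
    line vty 0 4
     access-class 10 in
     transport input ssh
     login local
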
Virtualization
As huge supporters of virtualization, we will be leveraging our knowledge of it to install a top of the line virtualization environment, using VMware vSphere along with vCenter to provide a highly available server infrastructure based on 8 Dell PowerEdge R810 servers. In keeping with the redundant environment, all virtual storage will be provided by four IBM DS3400 SANs packed with 6TB worth of space, connected together with four Brocade Fibre Channel switches.

The storage/virtual infrastructure will be built with HA in mind. We will configure vCenter to make use of vMotion, allowing virtual machines to migrate between physical hosts seamlessly and automatically, resulting in almost zero downtime for servers. The highlights of the virtual infrastructure are listed below.

Virtualization Configuration

- Dell servers will be configured with dual socket motherboards and 24GB of RAM or more
- Minimal local storage is allocated to the Dell servers, just enough for the vSphere installation
- The SANs will each be partitioned into two RAID 10 logical drives plus a single RAID 5 logical drive, providing a high performing RAID drive for regular application access and one for high speed database access
- The FC switches will be configured to place all servers in the same zone to ensure access to the virtual machines from every server
- Cisco Nexus 1000V virtual switches will be used on every physical host to provide enhanced network security between the local VMs

Workstations & Software


Due to Falcon Industries' large population of manufacturing workers, we felt it would be a good opportunity to make use of thin clients in that environment. We will be using Wyse C30LE series thin clients to provide Terminal Services access for manufacturing employees, where they can reach their web-based apps, productivity apps, and manufacturing applications. All of the remaining staff (administrative and mobile) will be outfitted with either Dell Optiplex workstations of varying performance (depending on their job) or Dell E-series laptops. All staff will be given 19" flat screen monitors along with UPS units providing 30 minutes of runtime.

Local software will consist of nothing more than anti-virus software and productivity software on each user PC, as 90% of the remaining software will be delivered via the web or Terminal Services. Maintenance staff and IT staff will be provided with helpdesk software on central servers to manage user help requests.
