
Data center functional requirements: a safe and secure place for devices, power, environment, and connectivity. Fundamental design philosophy: simplicity, flexibility, scalability, modularity, sanity. Top 10 DC design guidelines: plan ahead; keep it simple; be flexible; think modular; worry about weight; label everything; use RLUs, not square feet; use aluminium tiles in the raised-floor system; keep things covered or bundled, and out of sight; hope for the best, plan for the worst. Scope: based on what the company needs, determined by how much money is available. Budget: use the project scope as a starting point; be creative with the budget. Location criteria: based on budget, connectivity, and low natural risk. Essential criteria: physical capacity, power, cooling, connectivity. Secondary criteria: lighting, plumbing, walls, doors, security hardware, etc. Key questions: What equipment will the data center contain? What will the RLU definitions be? What are the required utility feeds? How many RLUs will be needed? What are the limiting factors?
Characteristics of an adaptive infrastructure: efficient, effective, flexible, minimal complexity, maximal utility.
Maturity model: maturity stages used as a baseline for evaluating and planning IT infrastructure.
Goals: economy, responsiveness, quality of service, a reference for staff development.
RLU: a concept describing how much equipment can be housed and what it requires → helps the designer determine the essential criteria.
DC elements: site, command center, cable management, network infrastructure, terminal servers, environmental control, power.
Criteria — Budget: availability, coverage, actual funds needed, how funds are distributed and redistributed. Physical constraints: space, power requirements, cooling, bandwidth. System availability profiles: category, device redundancy, power redundancy, cooling redundancy, network redundancy. Structural aspects: raised floor, nearby facilities, pathways, floor load. Power: adequate supply, surge suppression, grounding, cabling. Redundancy: UPS or generator set. Networking: matching cable type to the medium, proper cabling, overflow capacity. Security: physical access, access levels, monitoring. Expandability: RLUs, PDUs (power), HVAC, physical space. Disaster preparedness: natural and human disasters.
REASONS FOR DATA CENTER OPTIMIZATION: (1) a DC becomes self-service when there is excess capacity; (2) a DC stops being self-service once it approaches its capacity limit; (3) developing a DC requires large funds; (4) the DC must be protected from overload and imbalance. The ongoing trend is "doubling storage capacity every 2 years".
DC INFRASTRUCTURE: racks, switches and switch ports, VLANs, patch panels and cables, power utilization and monitoring, generators, high-voltage power components, HVAC.
Preventing overload: perform an accurate analysis of system usage and equipment placement in the DC.
Availability profile: ensure availability → redundancy of power, cooling, and connectivity. Key design step: determine risk.
Project viability risks: inadequate budget, power, or bandwidth; retrofit problems (room height, cable routing, grounding); a better co-location facility or ISP nearby; inadequate qualified personnel; too remote a location; environmental problems.
From Paper 1 (cost model, continued):
• Power cost: accounts for the power consumed by the equipment.
• Assumptions: (1) for switches and NICs, a linear power model with constant idle-time power and active power is used; and (2) for CPU cores, the power used is computed from the increase in CPU and network-interface power consumption associated with the packets sent.
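The linear power model assumed above can be sketched as follows. The wattage figures in the example are illustrative placeholders, not values from the paper.

```python
# Linear power model: constant idle power plus utilization-scaled active power.
# Wattage figures below are illustrative placeholders, not the paper's data.

def device_power_w(idle_w, active_w, utilization):
    """Power draw (watts) of a switch or NIC at a given utilization in [0, 1]."""
    assert 0.0 <= utilization <= 1.0
    return idle_w + (active_w - idle_w) * utilization

# Example: a NIC idling at 4 W and drawing 10 W when fully active.
print(device_power_w(4.0, 10.0, 0.5))  # 7.0 at 50% utilization
```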
• Results:
• Equipment cost: In the experiments, BCube (hybrid) had the lowest cost and de Bruijn (server-based) the highest, 45% more expensive than BCube. Under the shared-core cost scheme, de Bruijn (hybrid) gave the lowest cost. From these results, the tentative conclusion on equipment cost is that the hybrid architectures are the cheapest, although the margin is not very significant.
• Cabling cost: In the cabling-cost calculation, BCube (hybrid) again gave the lowest cost, about one third of the cost of de Bruijn (server-based). Nevertheless, cabling complexity can also be a factor in choosing the architecture used to build a data center.
• Power-consumption cost: For the cost of supplying power to the equipment, the server-based design had the highest power consumption compared with the switch-based architectures.
• Cost conclusions: (1) In a data center architecture, power and cabling contribute a relatively small share (10%-16%) of the total cost of a data center; (2) at high utilization and with an "elastic" workload, the required cost decreases; and (3) in the cost calculations, the server-based design was the most expensive (about 40% higher than the lowest cost obtained by the other architectures).
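The three cost components compared above (equipment, cabling, and power consumption) can be sketched as a simple aggregation. All prices and counts are invented placeholders; the only structural assumption taken from the summary is that switch cost scales with port count.

```python
# Sketch of the total-cost comparison: equipment + cabling + energy.
# All prices, counts, and wattages are invented placeholders, not the paper's data.

def total_cost(n_ports, price_per_port, n_nics, nic_price,
               n_cables, cable_price, avg_power_w, hours, price_per_kwh):
    equipment = n_ports * price_per_port + n_nics * nic_price  # switch cost ~ port count
    cabling = n_cables * cable_price
    energy = avg_power_w / 1000.0 * hours * price_per_kwh      # W -> kWh -> currency
    return equipment + cabling + energy

# Toy comparison over a 3-year horizon: a design with fewer switch ports but
# higher power draw vs. one with more ports and lower power draw.
hours = 3 * 365 * 24
server_based = total_cost(64, 400, 128, 100, 200, 30, 9000, hours, 0.10)
switch_based = total_cost(256, 400, 128, 100, 300, 30, 6000, hours, 0.10)
```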
• Conclusion: Comparing the three data center architectures — switch-only, server-only, and hybrid — shows that the hybrid architecture has the lowest cost, although this is not matched by a clear performance advantage, because the results obtained depend on the load assumptions made for each architecture. In the future, as switches become cheaper, a switch-based design may well become cheaper than the server-based or hybrid designs. Conversely, the server-based design could come out ahead if its architecture is developed further so that CPU cores can serve a dual role, handling both computation and network tasks. The comparison also shows that several technical and non-technical factors need to be taken into account, since they can favor one architecture or another.
Basic DC layout: raised floor for airflow and cabling; aisles and open space (40-50% of the data center is open space); command center; power panel room; HVAC room; network room; storage and backup rooms; staging room.

From Paper 1 (cost comparison):
• Goal: compare three kinds of DC architecture, each taking a different approach, to find the best scalability and cost efficiency.
• Contribution: proposes a cost estimate and a method for comparing the cost of several architectures, with an approach based on performance level and latency, followed by a comparison of representative architectures.
• There are three kinds: switch-only (fat-tree), hybrid (BCube & de Bruijn), and server-only (de Bruijn).
• Scenarios: (1) Equalize latency → the data centers are built with equal path lengths; (2) Equalize capacity → the data centers are given equal capacity to accommodate a 10 Gbps load; (3) Count cost → the architectures are compared by the cost of building a DC, estimated from market prices.
• Cost model — equipment cost: 10 Gbps switches, 10 Gbps NICs, server cores, and cables.
• Assumptions: (1) switch cost is proportional to the number of ports; (2) the cores counted are used entirely for processing traffic; (3) in the shared model, cores are assumed to be usable for other tasks as well; (4) cabling cost is labor-centric.

From Paper 2 (data center network virtualization): Server virtualization alone cannot address the limitations of current data centers; most DC networks run on TCP with these limitations: no performance isolation, increased security risk, poor application deployability, limited management flexibility, and no support for network innovation. From these limitations, a technological improvement called network virtualization (NV) has arisen. NV aims to create multiple virtual networks (VNs) on top of an existing shared physical network. These VNs are logically separated from each other, so they can be implemented and managed independently, removing the limitations above. Traditional DCs, consisting of physical servers, routers, and switches, are adopting Clos/fat-tree topologies. In this paper, we present a survey of recent research on virtualizing data center networks. Our contributions are three-fold: first, we provide a summary of the recent work on data center network virtualization. Second, we compare these architectures and highlight their design trade-offs. Finally, we point out the key future research directions for data center network virtualization. To the best of our knowledge, this work is the first to survey the literature on virtualizing data center networks.

A Virtualized Data Center is a data center where some or all of the hardware (e.g., servers, routers, switches, and links) is virtualized. A Virtual Data Center (VDC) is a collection of virtual resources (VMs, virtual switches, and virtual routers) connected via virtual links. While a Virtualized Data Center is a physical data center with resource virtualization techniques deployed, a Virtual Data Center is a logical instance of a Virtualized Data Center consisting of a subset of the physical data center's resources. A Virtual Network (VN) is a set of virtual networking resources: virtual nodes (end-hosts, switches, routers) and virtual links; thus, a VN is a part of a VDC. A network virtualization level is the layer of the network stack (application to physical) at which virtualization is introduced. In Figure 4, we show how several VDCs can be deployed over a virtualized data center.

Both network virtualization and data center virtualization rely on virtualization techniques to partition available resources and share them among different users; however, they differ in various aspects. While virtualized ISP networks (VNs) mostly consist of packet-forwarding elements (e.g., routers), virtualized data center networks involve different types of nodes, including servers, routers, switches, and storage nodes. Hence, unlike a VN, a VDC is composed of different types of virtual nodes (e.g., VMs, virtual switches, and virtual routers) with diverse resources (e.g., CPU, memory, and disk). In summary, data center network virtualization differs from ISP network virtualization because one has to consider different constraints and resources, specific topologies, and degrees of scalability. Specifically, one of the differences between the traditional networking model and the network virtualization model is the participating players. Whereas the former assumes two players, ISPs and end-users, the latter proposes to separate the role of the traditional ISP into two: an Infrastructure Provider (InP) and a Service Provider (SP). Decoupling SPs from InPs adds opportunities for network innovation, since it separates the role of deploying networking mechanisms, i.e., protocols and services (the SP), from the role of owning and maintaining the physical infrastructure (the InP).

In the context of data center virtualization, an InP is a company that owns and manages the physical infrastructure of a data center. An InP leases virtualized resources to multiple service providers/tenants. Each tenant creates a VDC over the physical infrastructure owned by the InP for further deployment of services and applications offered to end-users. Thus, several SPs can deploy their coexisting heterogeneous network architectures, required for delivering services and applications, over the same physical data center infrastructure.

A forwarding scheme specifies rules for sending packets by switching elements from an incoming port to an outgoing port; a FIB (Forwarding Information Base) maps a MAC address to a switch port when making a packet-forwarding decision. To support relative bandwidth sharing, congestion-controlled tunnels [6] may be used, typically implemented within a shim layer that intercepts all packets entering and leaving the server. Each tunnel is associated with an allowed sending rate on that tunnel, implemented as a rate-limiter; the use of rate-limiters is one technique for achieving bandwidth guarantees. The main multipathing mechanisms used in data center networks are ECMP (Equal-Cost Multipathing) [29] and VLB (Valiant Load Balancing). To achieve load balancing, ECMP spreads traffic among multiple paths that have the same cost as calculated by the routing protocol. VLB selects a random intermediate switch that is responsible for forwarding an incoming flow to its corresponding destination. ECMP and VLB are implemented in L3 switches.

A. Traditional DC: Virtualization in current data center architectures is commonly achieved by server virtualization. The main limitation of current data center architectures is scalability, since commodity switches were not designed to handle a large number of VMs and the resulting amount of traffic. In particular, switches have to maintain an entry in their FIBs for every VM, which can dramatically increase the size of forwarding tables. Commodity switches support mainly L2 forwarding and VLAN technology, whereas commodity hypervisors create only isolated VMs.

B. Diverter: Diverter is implemented in a software module (called VNET) installed on every physical machine. When a VM sends an Ethernet frame, VNET replaces the source and destination MAC addresses with those of the physical machines hosting the source and destination VMs, respectively.

C. NetLord: NetLord [22] is a network architecture that strives for scalability of the tenant population in data centers. The architecture virtualizes the L2 and L3 tenant address spaces, which allows tenants to design and deploy their own address spaces according to their needs and deployed applications.

D. VICTOR: The main idea of VICTOR, shown in Figure 6, is to create a cluster of Forwarding Elements (FEs) (L3 devices) that serve as virtual line cards, with multiple virtual ports, of a single virtualized router. Thus, the aggregation of FEs performs data forwarding for traffic in a network.

E. VL2: VL2 is based on a non-oversubscribed Clos topology (see Figure 2) that provides ease of routing and resilience. Packets are forwarded using two types of IP addresses: location-specific addresses (LAs) and application-specific addresses (AAs), used by switches and servers, respectively.

F. PortLand: The main idea of PortLand is to use hierarchical Pseudo MAC (PMAC) addressing of VMs for L2 routing. In particular, a PMAC has the format pod.position.port.vmid, where pod is the pod number of an edge switch, position is its position in the pod, port is the port number of the switch the end-host is connected to, and vmid is the ID of a VM deployed on the end-host.

G. SEC2: Network virtualization is supported through Forwarding Elements (FEs) and a Central Controller (CC). FEs are essentially Ethernet switches with the ability to be controlled by a remote CC that stores the address-mapping and policy databases.

H. SPAIN: Smart Path Assignment In Networks (SPAIN) [20] uses the VLAN support in existing commodity Ethernet switches to provide multipathing over arbitrary topologies.

I. Oktopus: Oktopus [16] is the implementation of two virtual network abstractions (virtual cluster and virtual oversubscribed cluster) for controlling the trade-off between the performance guarantees offered to tenants, their costs, and the provider revenue. Oktopus not only increases application performance, but also offers better flexibility to infrastructure providers and allows tenants to find a balance between higher application performance and lower cost.

J. SecondNet: SecondNet [15] focuses on providing bandwidth guarantees among multiple VMs in a multi-tenant virtualized data center. In addition to computation and storage, the architecture also accounts for bandwidth requirements when deploying a VDC.

K. Gatekeeper: Gatekeeper focuses on providing guaranteed bandwidth among VMs in a multi-tenant data center while achieving high bandwidth utilization. In general, achieving a strict bandwidth guarantee often implies ineffective utilization of a link's bandwidth when free capacity becomes available. Gatekeeper addresses this issue by defining both a minimum guaranteed rate and a maximum allowed rate for each VM pair.

L. CloudNaaS: CloudNaaS [7] is a virtual network architecture that offers efficient support for deploying and managing enterprise applications in clouds. In particular, the architecture provides a set of primitives that suit the requirements of typical enterprise applications, including application-specific address spaces, middlebox traversal, network broadcasting, VM grouping, and bandwidth reservation.

M. Seawall: Seawall enforces bandwidth isolation among different tenants and prevents malicious tenants from consuming all network resources. Besides, Seawall requires that a physical machine maintain state information only for its own entities, which improves scalability.

N. NetShare: NetShare [27] tackles the problem of bandwidth allocation in virtualized data center networks, proposing a statistical multiplexing mechanism that does not require any changes in switches or routers. NetShare allocates bandwidth to tenants in a proportional way and achieves high link utilization for infrastructure providers.

Comparison — Scalability: SecondNet, Seawall, and Gatekeeper achieve high scalability by keeping state at end-hosts (e.g., hypervisors) rather than in switches. NetLord and VL2 achieve high scalability through packet encapsulation, maintaining forwarding state only for the switches in the network. Diverter is also scalable, because its switch forwarding tables contain only the MAC addresses of the physical nodes (not those of VMs). On the other hand, SPAIN, VICTOR, and CloudNaaS are less scalable, because they require maintaining per-VM state in each switching/forwarding element. Fault tolerance: most of the architectures are robust against failures in data-plane components. For instance, SecondNet uses a spanning-tree signalling channel to detect failures and its allocation algorithm to handle them. A SPAIN agent can switch between VLANs when failures occur; NetLord relies on SPAIN for fault tolerance; and VL2 and NetShare rely on the routing protocols (OSPF). Diverter, VICTOR, and SEC2 employ the underlying forwarding infrastructure for failure recovery. Schemes such as Oktopus and CloudNaaS handle failures by re-computing the bandwidth allocation for the affected network; schemes including Seawall and Gatekeeper can adapt to failures by re-computing the allocated rates for each flow. Deployability: the detailed comparison is summarized in Table VIII, which describes the features required in hypervisors (on physical machines), edge switches, and core switches. The table also shows which schemes require a centralized management server; depending on the scheme, this server can have different functionalities, such as address management (PortLand, VL2), tenant management (NetLord and VL2), routing computation (VICTOR and SEC2), and resource allocation (SecondNet). QoS support: QoS in virtual networks is achieved by allocating guaranteed bandwidth to each virtual link. Oktopus, SecondNet, Gatekeeper, and CloudNaaS provide guaranteed bandwidth allocation for each virtual network. On the other hand, Seawall and NetShare provide weighted fair sharing of bandwidth among tenants; however, they do not provide guaranteed bandwidth allocation, meaning there is no predictable performance. The remaining architectures do not discuss QoS issues. Load balancing: load balancing is a desirable feature for reducing network congestion while improving network resource availability and application performance. Among the architectures surveyed in the paper, SPAIN and NetLord (which relies on SPAIN) achieve load balancing by distributing traffic among multiple spanning trees. To achieve load balancing and realize multipathing, PortLand and VL2 rely on ECMP and VLB. Lastly, Diverter, VICTOR, and SEC2 are essentially addressing schemes that do not explicitly address load balancing.

The comparison of the proposed architectures reveals several observations. First, there is no ideal solution for all the issues that should be addressed in the context of data center network virtualization, mainly because each architecture tries to focus on a particular aspect of data center virtualization. On the other hand, it is possible to combine the key features of some of the architectures to take advantage of their respective benefits; for example, VICTOR and Oktopus could be combined to deploy a virtualized data center with bandwidth guarantees while providing efficient support for VM migration. Second, finding the best architecture (or combination) requires a careful understanding of the performance requirements of the applications residing in the data centers. Thus, these issues require further research in the context of different cloud environments.

Future research: virtualized edge DCs, virtual DC embedding, programmability, network performance guarantees, data center management, security, pricing.

Conclusion: Data centers have become a cost-effective infrastructure for data storage and for hosting large-scale network applications. However, traditional data center network architectures are ill-suited for future multi-tenant data center environments. Virtualization is a promising technology for designing scalable and easily deployable data centers that flexibly meet the needs of tenant applications while reducing infrastructure cost, improving management flexibility, and decreasing energy consumption. The paper surveys the state of the art in data center network virtualization research, discusses the proposed schemes from different perspectives, highlights the trends researchers have followed when designing these architectures, and identifies key research directions along with potential approaches for pursuing them. Although current proposals improve scalability, provide mechanisms for load balancing, and ensure bandwidth guarantees, challenging and important issues are yet to be explored: designing smart-edge networks, providing strict performance guarantees, devising effective business and pricing models, ensuring security and programmability, supporting multi-tiered and multi-sited data center infrastructures, implementing flexible provisioning and management interfaces between tenants and providers, and developing efficient tools for managing virtualized data centers.
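As a concrete aside, the hierarchical PMAC addressing used by PortLand (pod.position.port.vmid, Section F above) can be sketched as bit-field packing. The 16/8/8/16-bit field widths below fill a 48-bit MAC address but are an illustrative assumption; the summary above does not specify them.

```python
# Sketch of PortLand-style PMAC packing (pod.position.port.vmid).
# The 16/8/8/16-bit field widths (filling a 48-bit MAC) are an
# illustrative assumption, not taken from the summary above.

def encode_pmac(pod, position, port, vmid):
    """Pack the four hierarchy fields into one 48-bit integer."""
    assert pod < 2**16 and position < 2**8 and port < 2**8 and vmid < 2**16
    return (pod << 32) | (position << 24) | (port << 16) | vmid

def decode_pmac(pmac):
    """Unpack a 48-bit PMAC back into (pod, position, port, vmid)."""
    return ((pmac >> 32) & 0xFFFF, (pmac >> 24) & 0xFF,
            (pmac >> 16) & 0xFF, pmac & 0xFFFF)

def fmt_mac(pmac):
    """Render the PMAC as a conventional colon-separated MAC string."""
    return ":".join(f"{(pmac >> s) & 0xFF:02x}" for s in range(40, -8, -8))
```

Because the prefix encodes the edge switch's location, a core switch can forward on the pod/position prefix alone instead of keeping one FIB entry per VM, which is the scalability point made in Sections A and F.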