
In-memory key value stores for NFV applications

Project Dissertation - I

Submitted in partial fulfillment of the requirements


for the degree of

Master of Technology
by
Dave Jashkumar
(Roll no. 153050004)

Supervisor:
Prof. Mythili Vutukuru

Department of Computer Science And Engineering


Indian Institute of Technology Bombay
Mumbai 400076 (India)

16 October 2016
Declaration

I declare that this written submission represents my ideas in my own words and where
others’ ideas or words have been included, I have adequately cited and referenced the
original sources. I declare that I have properly and accurately acknowledged all sources
used in the production of this thesis.
I also declare that I have adhered to all principles of academic honesty and integrity
and have not misrepresented or fabricated or falsified any idea/data/fact/source in my
submission. I understand that any violation of the above will be a cause for disciplinary
action by the Institute and can also evoke penal action from the sources which have thus
not been properly cited or from whom proper permission has not been taken when needed.

Dave Jashkumar
(153050004)

Date: 16 October 2016

Abstract

With rapid advances in manufacturing and digital technology, electronic devices are
becoming more and more affordable for everyone. These technological advances have
opened up many new business opportunities that everyone wants to explore. All this has
led to a huge increase in network traffic, placing heavy demand on network functions.
But the pace at which network function hardware evolves cannot meet this demand. One
promising way to meet the required demand is to scale these network functions horizontally
in a virtualized environment, with a centrally shared state store. In this report we look at
the requirements of such a state store, which key value stores satisfy these requirements,
and the performance of a few in-memory key value stores with an example application.
We have selected Redis [21], RAMCloud [19] and LevelDB [12] as key value stores for
this purpose and have evaluated their performance in a virtual environment. We have used
the NFV based LTE-EPC [23] as an example application and have evaluated the applicability
and performance of these key value stores as the central state store for a scaled version of
LTE-EPC [24].

Table of Contents

Abstract ii

List of Figures v

List of Tables vi

1 Introduction 1
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Approaches to scaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.1 Vertical Scaling . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.2 Horizontal Scaling . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

2 Background 4
2.1 Data Stores . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.1.1 SQL based Data Stores . . . . . . . . . . . . . . . . . . . . . . . 4
2.1.2 NoSQL based Data Stores . . . . . . . . . . . . . . . . . . . . . 5
2.2 LTE-EPC Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

3 Literature Survey 8
3.1 Key Value Store Feature Comparison . . . . . . . . . . . . . . . . . . . . 8

4 Application of In-Memory Key Value Store for scaling the NFV based LTE-
EPC 10
4.1 Distributed LTE-EPC architecture . . . . . . . . . . . . . . . . . . . . . 10
4.2 Requirement analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
4.3 Shortlisted key-value stores . . . . . . . . . . . . . . . . . . . . . . . . . 12
4.4 Interface Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
4.4.1 Interface description . . . . . . . . . . . . . . . . . . . . . . . . 13


5 Evaluation 16
5.1 Test Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
5.2 Key Value Store Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . 16
5.2.1 Performance Test . . . . . . . . . . . . . . . . . . . . . . . . . . 17
5.2.2 Scalability Test . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
5.2.3 Distributed scaling of key value stores . . . . . . . . . . . . . . . 24
5.3 Evaluations when plugged with LTE-EPC . . . . . . . . . . . . . . . . . . 26

6 Conclusion & Future Work 29

References 30

Acknowledgements 33
List of Figures

2.1 LTE-EPC Architecture. Source : Pratik Satapathy [24]. . . . . . . . . . . 6

4.1 Distributed LTE-EPC Architecture. Source : Pratik Satapathy [24]. . . . . 10

5.1 Read throughput with increasing number of client threads. . . . . . . . . 18


5.2 Read latency with increasing number of client threads. . . . . . . . . . . 18
5.3 Write throughput with increasing number of client threads. . . . . . . . . 19
5.4 Write latency with increasing number of client threads. . . . . . . . . . . 19
5.5 RAMCloud read CPU utilization when client and server VMs are on sep-
arate machines. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
5.6 RAMCloud read throughput when client and server VMs are on separate
machines. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.7 RAMCloud read latency when client and server VMs are on separate ma-
chines. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.8 Scaling of reads with increasing number of CPUs on single virtual machine. 23
5.9 Scaling of writes with increasing number of CPUs on single virtual machine. 23
5.10 Read throughput of RAMCloud and Redis with varying VM instances. . . 25
5.11 Write throughput of RAMCloud and Redis with varying VM instances. . 25
5.12 Throughput of Distributed LTE-EPC with different data stores, Source :
Pratik Satapathy [24]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
5.13 Latency of Distributed LTE-EPC with different data stores, Source :
Pratik Satapathy [24]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

List of Tables

3.1 Key Value Store Feature Comparison. . . . . . . . . . . . . . . . . . . . 9

5.1 Server Configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16


5.2 Virtual Machine Configuration. . . . . . . . . . . . . . . . . . . . . . . . 17
5.3 Data store configuration for performance test . . . . . . . . . . . . . . . 17
5.4 Data store configuration for use in LTE-EPC . . . . . . . . . . . . . . . . 26

Chapter 1

Introduction

In this report we look at the performance of different in-memory key value stores, the
features they provide, and their applicability as a central state store for scaling NFV
applications.

1.1 Motivation
As technology grows, we are evolving into a digital world; everything around us
is being digitized. We are moving from physical money to digital money, from analog
watches to smart watches, from local shopping to online shopping, and now to the Internet
of Things (IoT). All this has led to an exponential increase in network traffic. Adding to
this, there is also an increasing demand for cloud computing and cloud storage. Per-gigabyte
disk cost has reduced significantly, from $11.00/GB in the year 2000 to $0.019/GB in the
year 2016 [5]. Networks are becoming faster and faster, and latency is going down with
technologies like RDMA (Remote Direct Memory Access). But there has been no
significant improvement in the computing power of CPUs over the last few years. This has
posed many difficulties in increasing the capacity of network functions (NFs), and thus
there is a need for scaling NFs. NFs are currently implemented on proprietary hardware
and thus are not easy to scale; frequent upgrades would be costly and old hardware
is not reusable. One promising way to overcome this problem is to use network
function virtualization (NFV)1, where network functions are implemented as software,
and are deployed over virtual machines (VMs) running on general-purpose hardware. This
makes the network functions portable and cost effective. But VNFs still run on hardware, and
the hardware is limited by its capacity. Since VNFs are implemented as a piece of
software, there are many open source implementations available. This makes the task of
scaling and customizing the functionality of VNFs easier. Let us look at the approaches to
scaling VNFs.

1 NFVs are also referred to as virtual network functions (VNFs), and these terms are used interchangeably
in this text.

1.2 Approaches to scaling


1.2.1 Vertical Scaling

Vertical scaling is done by adding more powerful components, such as faster CPUs or
more RAM. Increasing the capacity of a single server is limited by the availability of
high-end hardware. The rate at which hardware capacity increases is very slow and cannot
keep up with the rate at which demand increases. Thus we cannot rely on vertical
scaling alone.

1.2.2 Horizontal Scaling

Horizontal scaling, in contrast to vertical scaling, uses multiple instances of similar-capability
components (such as multiple CPUs) to increase the serving capacity. This
collection of components must present a single-component view to its end users. To do so,
each component must know about the state of the other components and share its own state
with them. State can be shared between components either via direct communication or via
a shared state space.

Direct Communication

In the direct communication approach, each component shares its state with every other
component directly, in a mesh form. But the number of communication links grows very
fast with the number of components, and so does the complexity.

Shared State Space

In this approach all components store their state centrally in one place, and each component
can access the state of any other component from this shared state space. Since any
component can access the state of any other component, components should avoid storing
state locally, to avoid inconsistency. This compulsion to store state centrally may
lead to high processing delays, and thus the central state store must be fast enough to serve
each request. One way to realize this shared state space is to use a centralized data
store, and one promising candidate for such a centralized data store is an in-memory key
value store.

Discussing and comparing the advantages and disadvantages of different scaling
approaches is beyond the scope of this report; for more details on scaling approaches please
refer to [25]. In this report we stick to the shared-state-space based horizontal scaling
approach. Further, we look at the requirements imposed by a shared state space and
how in-memory key value stores satisfy them, and then at the performance of a few
in-memory key value stores, viz. Redis [21], RAMCloud [19] and LevelDB [12], as the
central state store for the Distributed LTE-EPC [24].

1.3 Contribution
The following contributions are made in stage 1 of this project.

1. Feature-based comparison of a few well-known and recent key value stores.

2. Performance comparison of Redis [21], RAMCloud [19] and LevelDB [12] in a
virtualized environment.

3. Evaluation of the distributed NFV based LTE-EPC [24] when scaled with the help of
Redis, RAMCloud and LevelDB2.

2 Evaluations were performed along with Pratik Satapathy [24].
Chapter 2

Background

As seen in the previous chapter, one way to realize a shared state space is to use data
stores. Let us look at what data stores are and the types available.

2.1 Data Stores


A data store is a repository of data items with an interface to manage these items. A
data store can be as simple as an in-memory map or as complex as a feature-rich database.
Data stores may or may not store data persistently, and may provide a simple get/put
interface or a powerful query language. Based on the features they provide, data stores
are broadly categorized into SQL based data stores and NoSQL based data stores.

2.1.1 SQL based Data Stores

These are generally complex data stores with features like atomicity, consistency,
durability, persistence, high availability and integrity. They generally use the relational data
model and provide a way to create relations among data items. They may also have
integrity checkers, invoked after each operation, to check the consistency of these relations.
Most of these data stores also provide transaction processing. All of them provide a query
language, widely referred to as Structured Query Language (SQL), which serves as the
interface to access data. SQL provides users with very rich access features, such as selecting
data that satisfies a given condition, or accessing data from multiple tables related by some
key. All these features make the design of these data stores complex and affect access time.
A few well-known examples of data stores in this category are MySQL, Oracle 11g and
Microsoft Access.


2.1.2 NoSQL based Data Stores

These are fairly simple data stores, and provides very basic access features like get,
put, delete, etc. Most of these data stores are modeled as key value pair stores and thus are
also commonly refereed to as key value stores. Other type of NoSQL based data stores are
document stores and column stores. NoSQL based data stores generally doesn’t provides
a way to set relations among data items, and provides limited number of data types, and
no domain constrains. Thus integrity checks are to be performed on client side, to keep
the data consistent. Though these data stores doesn’t provides us with complex integrity
checks, they do provide us with limited consistency and integrity checks. Many of these
data stores also provides us with atomic operations like increment, decrement, string ap-
pend and compare & swap. Few of these data stores also provides the functionality of
transaction processing. Due to their design simplicity and limited features, they gener-
ally have much lower access latency as compared to their SQL based counterparts. Few
of the well known examples are Redis, Memcached, Apache Cassandra, Oracle NoSQL
Database, etc. One of the specialization of these NoSQL based data stores is in-memory
key value stores, which we will be looking at in detail in this report.

In-Memory Key Value Stores

These are specialized NoSQL based data stores that keep all their data in memory. They
may also store data on disk to provide persistence, but all data items are always kept in
memory alongside. These data stores are generally designed to provide fast access and
low access latency.
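To make the get/put style of interface concrete, the following is a minimal, self-contained sketch of an in-memory key-value table offering the atomic increment and compare-and-swap operations mentioned above. The class and method names are our own invention for illustration; they are not taken from Redis, Memcached or any other store discussed here.

```cpp
#include <mutex>
#include <string>
#include <unordered_map>

// Illustrative in-memory key-value table. All data lives in a hash map in
// memory; a single lock makes each operation atomic with respect to others.
class InMemoryKV {
public:
    void put(const std::string& key, long val) {
        std::lock_guard<std::mutex> lk(mu_);
        table_[key] = val;
    }
    long get(const std::string& key) {
        std::lock_guard<std::mutex> lk(mu_);
        return table_[key];  // missing keys read as 0 in this sketch
    }
    // Atomic increment: the read-modify-write happens under one lock.
    long incr(const std::string& key, long by = 1) {
        std::lock_guard<std::mutex> lk(mu_);
        return table_[key] += by;
    }
    // Compare-and-swap: write `desired` only if the current value is `expected`.
    bool cas(const std::string& key, long expected, long desired) {
        std::lock_guard<std::mutex> lk(mu_);
        auto it = table_.find(key);
        if (it == table_.end() || it->second != expected) return false;
        it->second = desired;
        return true;
    }
private:
    std::mutex mu_;
    std::unordered_map<std::string, long> table_;
};
```

A client can use the compare-and-swap primitive, for example, to update shared state only if no other client has modified it in the meantime, pushing this small piece of integrity checking into the store itself.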

2.2 LTE-EPC Architecture


Long-Term Evolution (LTE) is a standard for high-speed wireless communication for
mobile phones and data terminals [17], widely used by 4G service providers. Based on
functionality, the LTE architecture is broadly divided into two major parts, viz. the Evolved
Universal Terrestrial Radio Access Network (E-UTRAN)1 and the Evolved Packet Core
(EPC). Figure 2.1 shows the typical LTE-EPC architecture.

Figure 2.1: LTE-EPC Architecture. Source : Pratik Satapathy [24].

1. Evolved Universal Terrestrial Radio Access Network:

E-UTRAN consists of mobile towers (Evolved NodeBs) and is responsible for managing
the radio resources of the network, and for transferring data between user equipment
(UE)2 and the EPC.

1 E-UTRAN is also referred to as RAN.

2. Evolved Packet Core:

The EPC consists of the following major components, viz. the Home Subscriber Server
(HSS), Mobility Management Entity (MME), Serving Gateway (SGW), Packet Data
Network Gateway (PGW) and Policy Control Rules Function (PCRF). It is mainly
responsible for authenticating users, transferring data traffic from the RAN to the internet
and vice versa, and storing accounting information.

Typical procedures handled by the EPC are attach, detach, tracking area updates, paging
and handover in the control plane, and data transfer in the data plane. Invoking any of these
procedures results in the use or change of state information of one or more of the EPC
components. Let us look at some of the LTE-EPC procedures of interest to us, which
update and/or access the components' state information3.

1. Attach/Reattach:
Whenever a UE turns on or tries to send a packet from idle mode, it first gets
authenticated by the MME. During authentication the MME sends the UE's identity
information (IMSI) to the HSS; the HSS accesses its state to get the authentication
information for that UE and sends this information to the MME. Based on the
authentication information received from the HSS, the MME sends an authentication
request to the UE, and compares the UE's response with the expected response. On
successful authentication, the MME negotiates with the UE on the security protocols to
be used for this session. Next, the MME generates a unique Tunnel Endpoint Identifier
(TEID) for this session and sends it with a create session request to the SGW; the SGW
in turn sends a create session request to the PGW. During all these steps the MME, SGW
and PGW generate various pieces of session information and update their state. On
successful creation of the session tunnels, acknowledgments and other required
information are sent in the reverse order and the UE moves to active mode. Next, the
bearer is set up between the RAN and the PGW via the SGW. For more details please
refer to section 2.2.1 of N.S. Sadagopan's work on LTE-EPC [23].

2 User Equipment (UE) is also commonly referred to as mobile device, user device, or just device, and
these terms are used interchangeably in this text.

3 Note that the EPC procedures described here follow the open source implementation of LTE-EPC
by N.S. Sadagopan [23], but should be similar for other implementations.

2. Data Transfer:
Once the device is attached, it can send data to the internet. A data packet follows the
path from the device to the RAN, to the SGW, to the PGW, to the internet, and vice versa.
All the session information that was created during the attach request is now used for
forwarding the data packets.

3. Detach:
On a detach request from the device, the reverse of the attach procedure is followed: the
MME sends a delete session request to the SGW, which in turn sends a delete session
request to the PGW, and acknowledgments are forwarded on the reverse path. All the
information created for this session is then deleted by its respective owners.
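The state manipulated by these three procedures can be sketched as entries in a central key-value table, which is the role the shared state store plays in the scaled design of chapter 4. The key formats and values below are purely illustrative; they do not come from the LTE-EPC implementation [23] or the 3GPP specifications.

```cpp
#include <map>
#include <string>

// Hypothetical sketch: MME, SGW and PGW session state kept in one shared
// key-value table instead of in the components' local memory.
struct EpcState {
    std::map<std::string, std::string> table;  // stands in for the shared store
};

// Attach: each component creates its session state, keyed by IMSI or TEID.
void attach(EpcState& s, const std::string& imsi, const std::string& teid) {
    s.table["mme:" + imsi] = "teid=" + teid;  // MME session state
    s.table["sgw:" + teid] = "tunnel-up";     // SGW tunnel state
    s.table["pgw:" + teid] = "tunnel-up";     // PGW tunnel state
}

// Data transfer: forwarding reuses the tunnel state created at attach.
bool can_forward(const EpcState& s, const std::string& teid) {
    return s.table.count("sgw:" + teid) > 0 && s.table.count("pgw:" + teid) > 0;
}

// Detach: the session state is deleted by its respective owners.
void detach(EpcState& s, const std::string& imsi, const std::string& teid) {
    s.table.erase("mme:" + imsi);
    s.table.erase("sgw:" + teid);
    s.table.erase("pgw:" + teid);
}
```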
Chapter 3

Literature Survey

There are a few survey papers in this area: Workload Analysis of a Large-Scale Key-Value Store
[2] analyzes the performance of Memcached [10] under different scenarios, Scalable
SQL and NoSQL Data Stores [4] compares the scalability of SQL vs. NoSQL data
stores, and Quantitative Analysis of Consistency in NoSQL Key-Value Stores [15] compares
the consistency of NoSQL based key value stores. But none of these compares the
performance of key value stores from the perspective of their use as a central state store for
scaling NFV applications. One paper that comes close to this work is Experimental
Evaluation of NoSQL Databases [26], but it does not test the performance of these
stores in a distributed or virtualized environment. To my knowledge, no paper compares
in-memory key value stores in a distributed and virtualized environment, and
most of these papers were published in or before the year 2014. Since then there have been
many improvements in the field of key value stores, such as the addition of cluster support
to Redis [21], the development of MICA [14] and the introduction of new techniques like
FaRM [8]. Though each key value store implementation publishes its own performance
benchmarks, only a few compare their performance with others. There are also a few blogs,
like Antirez [1] and Dormando [7], which compare the performance of key value stores, but
these blogs also do not consider testing the key value stores in a distributed and virtualized
environment. This project is focused on comparing the performance of in-memory key
value stores from the perspective of their use as a central state store for NFV applications.

3.1 Key Value Store Feature Comparison


Though I have not analyzed the requirements of the variety of NFV systems that can be scaled
using the central-state-store approach, from my work with the NFV based
LTE-EPC, the general requirements on a data store to scale any NFV system would be

Table 3.1: Key Value Store Feature Comparison.

Store            | Data In-memory | Scalability/Concurrency | Distributed      | Durability     | Consistency
Redis [21]       | Yes            | Single threaded         | Yes (since v3.0) | Customizable   | Tunable
RAMCloud [19]    | Yes            | Yes                     | Yes              | Yes            | Strong
Cassandra [11]   | Cached         | Yes                     | Yes              | Tunable        | Tunable
MICA [14]        | Yes            | Yes                     | No               | No persistence | N.A.
LevelDB [12]     | Cached         | Yes                     | No               | Yes            | N.A.
LevelMem DB [12] | Yes            | Yes                     | No               | No persistence | N.A.
Memcached [10]   | Yes1           | Yes                     | Client based     | No persistence | No replication
LMDB [16]        | Yes            | Yes                     | No               | Yes            | N.A.
BoltDB [3]       | Yes            | Yes                     | No               | Tunable        | N.A.
Berkeley DB [18] | Cached         | Yes                     | Yes              | Yes            | Strong
SILT [13]        | Cached         | Yes                     | No               | No             | N.A.
HyperDex [9]     | Cached         | Yes                     | Yes              | Yes            | Strong
Dynamo [6]       | Tunable        | Yes                     | Yes              | Tunable        | Tunable
RocksDB [22]     | Tunable        | Yes                     | No               | Yes            | N.A.
Mega-KV [27]     | Yes            | Yes                     | No               | No persistence | N.A.
FaRM [8]2        | Yes            | Yes                     | Yes              | Yes            | Yes

low latency, consistency, scalability, and durability, and transactions in some cases. Since
the NFV systems in the scaled design would be stateless, they would require frequent access
to the data store, and thus low latency and scalability are the key requirements of such a data
store. Since NFV systems can have a variety of state, ranging from transient to persistent,
durability may or may not be needed, while for some NFV systems transactions
may be a need. These requirements differ from system to system, and thus having
a single data store that satisfies the requirements of all of them would be difficult. Let us
quickly look at a feature-based comparison of a few well-known key value stores. Due to
space constraints only a few key features are shown in table 3.1.
1 Memcached evicts table entries in LRU order if the table is full.
2 FaRM is a technique for remote memory access, but demonstrates an example key value store.
Chapter 4

Application of In-Memory Key Value Store for scaling the NFV based LTE-EPC

4.1 Distributed LTE-EPC architecture


The Distributed LTE-EPC architecture used here is an extension of the open source
implementation of the NFV based LTE-EPC architecture [23], developed by my colleague
Pratik Satapathy in his thesis work [24]. Figure 4.1 shows the Distributed LTE-EPC
architecture.

Figure 4.1: Distributed LTE-EPC Architecture. Source : Pratik Satapathy [24].


Each component of the Distributed LTE-EPC is horizontally scaled using the shared-state-space
approach. The shared state space is realized using an in-memory key value store.
All replicas of the same component are always connected to the same in-memory key value
store, while different components may or may not be connected to the same key value store.
The replicas of each EPC component are preceded by a load balancer, which presents a
single-component view to the component's users. All the LTE-EPC procedures described
in chapter 2 remain the same, except that instead of storing any state locally, each replica
now stores its state in its respective key value store.

4.2 Requirement analysis


Let us look at the requirements of each LTE-EPC component.

1. HSS:
The HSS is responsible for storing the authentication information of all network users,
and thus this data must be persistent. Most of the HSS's time is spent responding to the
MME's fetch queries during attach requests, so the data store serving the HSS must have
fast reads and low read latency. Writes to the HSS happen only when a new user is added
or an old user is deleted; writes are thus very infrequent and can be marginally slow, but
must be durable. So the data store serving the HSS must have the following properties:
persistence, durability and low read latency.

2. MME:
The MME is mostly involved in serving UE attach and detach requests; the state
stored by the MME is the UE's session state. At attach, the MME creates and saves the
UE's session state in the data store, and during detach, it accesses the data store, sends the
detach request to the SGW and deletes the respective UE's session state. Since the MME
only stores the UE's transient session state, we do not require durability, and persistence
can also be compromised. On failure of a data store with no durability, only the most
recent few UEs would need to reattach; with no persistence, all the UEs served by the
failed data store replica would need to reattach. This may be a huge overhead and can
create a peak demand at the MME, so persistence is a desired feature. With a persistent
store, however, the time required to recover UE state from disk must be less than the time
to reattach all the UEs, otherwise there is no point in recovery. Since the numbers of reads
and writes done by the MME are almost equal, both reads and writes must be served fast
enough by the data store. So the desired features for the MME's data store are fast reads
and writes, and persistence with fast recovery.

3. SGW:
The SGW is mostly responsible for managing the tunnels between the RAN and the
SGW, and between the SGW and the PGW. It saves the tunnel information for ongoing UE
sessions. The SGW's requirements are similar to those of the MME, since it also saves
transient tunnel information, which persists only as long as the UE's session does. In
addition, the SGW also serves the data plane and thus accesses the tunnel information
frequently, but these accesses can be limited by caching the tunnel information at the
serving SGW replica, as is done in the Distributed LTE-EPC [24] design. Thus the desired
features for the SGW's data store are fast reads and writes, and persistence with fast
recovery.

4. PGW:
The services provided by the PGW are very similar to the SGW's; it also stores the
tunnel information during the UE's session and is responsible for communicating with the
outside world. Thus the desired features for the PGW's data store are the same as those
for the SGW's data store.

Since all replicas of a component are identical, any request can be served by any
replica. Say the attach request for the UE with IMSI 1 was served by MME1; the detach
request for the same UE may be served by any other MME replica, say MME2. Thus the
state view of all replicas must be the same, and consistency is therefore a requirement. This
is true for all components, and thus all the data stores used must provide consistency. Also,
as the EPC scales, the data store must scale with it, so any data store used for
scaling the LTE-EPC must be distributed and scalable. Other properties required
of the data store are availability and fault tolerance.

4.3 Shortlisted key-value stores


Now that we have all the requirements listed, we can choose the required data
stores based on the feature study done in chapter 3. From a comparison of the requirement
list and the feature list, the RAMCloud storage system [19] is the best fit for all the EPC
components: RAMCloud is scalable and distributed, has low read and write latency, stores
data persistently and has a consistent view across all replicas. Another data store we will
look at is Redis [21]; it has tunable persistence and tunable consistency, is distributed, and
claims low read and write latency. Finally, we will also look at the performance of LevelDB
[12] as the shared data store; though it is not distributable, it provides low read latency and
can be used for the HSS.

4.4 Interface Design


To test the performance of the Distributed LTE-EPC with different data stores, we
would otherwise have had to change the EPC components' code every time we changed
the data store. So, to make integrating different data stores with the EPC components easy,
an interface was designed. All EPC components now make calls to their data store only via
this interface. This makes the task of changing the underlying data store simple and requires
no change in the LTE-EPC code: every time we need to change the data store, we just
switch the implementation library.
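The idea behind such an interface layer can be sketched with an abstract base class: EPC code depends only on the abstract interface, and switching data stores means supplying a different concrete implementation. The names below are invented for illustration and deliberately simplified; they do not match the actual interface, which is described next.

```cpp
#include <optional>
#include <string>
#include <unordered_map>

// Abstract store interface that EPC components program against.
class DataStoreIface {
public:
    virtual ~DataStoreIface() = default;
    virtual void put(const std::string& key, const std::string& val) = 0;
    virtual std::optional<std::string> get(const std::string& key) = 0;
};

// One possible backend; a Redis-, RAMCloud- or LevelDB-backed class would
// expose the same interface, so no EPC code changes when stores are swapped.
class LocalMapStore : public DataStoreIface {
public:
    void put(const std::string& key, const std::string& val) override {
        map_[key] = val;
    }
    std::optional<std::string> get(const std::string& key) override {
        auto it = map_.find(key);
        if (it == map_.end()) return std::nullopt;
        return it->second;
    }
private:
    std::unordered_map<std::string, std::string> map_;
};

// EPC component code sees only DataStoreIface.
std::string lookup_or(DataStoreIface& store, const std::string& key,
                      const std::string& fallback) {
    auto v = store.get(key);
    return v ? *v : fallback;
}
```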

4.4.1 Interface description

To keep the design simple, the interface exposes only the required features of any data
store. It consists of the following classes:

• KVStore
This class exposes functions to put, get and delete data from the data store. It
has the following functions:

– bool bind(string connection, string tablename):

It takes as its first parameter a string describing the socket address of the key
value store, and as its second parameter a table name. It connects to the data
store and creates a table with the given name, or binds to that table if it already
exists. It returns true if the connection was successful and it was able to bind
to the given table, otherwise false. All other operations called on this object
will then operate on the attached table.

– KVData put(KeyType key, ValType val):

This function takes a key and a value as its parameters and pushes them to the
attached table in the key value store. The key and value can be of any type. It
returns an object of the KVData class.

– KVData get(KeyType key):

This function takes a key as input and fetches the corresponding value from
the attached table. The key can be of any type, and it returns an object of the
KVData class.

– KVData del(KeyType key):

This function deletes the value corresponding to the given key from the attached
table and returns an object of the KVData class.

– bool clear():
This function deletes all the data from the attached table, and thus must be used
with caution. It returns true if it was successful in removing all entries,
otherwise false.

• KVData
This class is designed to return the response of any operation. It has the following
data members:

– ierr:
Represents the integer error code. Its value is negative if an error
occurred, otherwise it is zero.

– serr:
Gives a description of the error in string form, for human readability.

– value:
The return value of an operation, if no error occurred.

• KVRequest
This class is designed to send multiple requests in one go. Thus, if multiple
operations can be merged into one request, we should use this class to save on
multiple round-trip times. It exposes the following functions:

– bool bind(string connection):

It takes a string parameter describing the socket address of the key value store.
It returns true if the connection was successful, otherwise false. Note that this
class does not bind to any table; the table name is to be provided as a parameter
to each operation.

– void put(KeyType key, ValType val, string tablename):

It registers a put request, to be executed on the given table when the execute
function is called on the KVRequest object holding it.

– void get(KeyType key, string tablename):

It registers a get request with the given key, to be executed later on the given
table.

– KVData del(KeyType key):

It registers a delete request with the given key, to be executed later on the given
table.

– KVResultSet execute():
This function executes all the requests registered with the KVRequest object and
returns a KVResultSet object, which holds the results of all the requests.

– reset():
It resets the request queue and clears the KVRequest object.

• KVResultSet
This class holds the results obtained by executing the requests queued in a
KVRequest object. It has the following two functions:

– int size():
Returns the number of results present in the KVResultSet object; this corresponds
to the number of operations executed by the KVRequest object.

– KVData get(int index):

It returns the output of an operation executed by the KVRequest. Outputs are
returned in the same order as the requests were queued in the KVRequest object;
the ith object represents the result of the ith request in the queue, with the index
starting from zero.
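The queue-then-execute pattern of KVRequest/KVResultSet can be sketched as follows. This is a local mock with invented names and simplified signatures (a single table, string keys and values); it is intended only to show the batching flow and the negative-ierr error convention, not the actual implementation, which ships the batch to the server in one round trip.

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Result object mirroring the KVData convention described above:
// ierr is negative on error and zero on success.
struct Result {
    int ierr = 0;
    std::string serr;
    std::string value;
};

// Local mock of the queue-then-execute pattern: operations are registered
// first and applied together in one execute() call.
class BatchedRequest {
public:
    void put(const std::string& key, const std::string& val) {
        ops_.push_back({Kind::Put, key, val});
    }
    void get(const std::string& key) { ops_.push_back({Kind::Get, key, ""}); }

    // Results come back in the same order the requests were queued.
    std::vector<Result> execute() {
        std::vector<Result> results;
        for (const auto& op : ops_) {
            Result r;
            if (op.kind == Kind::Put) {
                table_[op.key] = op.val;
            } else {
                auto it = table_.find(op.key);
                if (it == table_.end()) {
                    r.ierr = -1;
                    r.serr = "key not found";
                } else {
                    r.value = it->second;
                }
            }
            results.push_back(r);
        }
        ops_.clear();  // reset the request queue, as reset() does
        return results;
    }

private:
    enum class Kind { Put, Get };
    struct Op { Kind kind; std::string key; std::string val; };
    std::vector<Op> ops_;
    std::unordered_map<std::string, std::string> table_;
};
```

Batching matters here because every operation saved from a separate round trip directly reduces the per-procedure latency that the EPC components see.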
Chapter 5

Evaluation

This chapter describes the various experiments performed to evaluate the performance of
the key value stores in a virtual environment, along with their results. It also discusses the
experiments performed on the Distributed LTE-EPC in collaboration with Pratik Satapathy
[24].

5.1 Test Setup


All the experiments described here were performed on virtual machines (VMs). Configuration parameters common to all virtual machines are listed in table 5.2. RAM and CPU count for the VMs change between experiments and are described in the respective experiment setups. All virtual machines were hosted on a single physical server, unless otherwise stated. The configuration of the server hosting the VMs is described in table 5.1.

Table 5.1: Server Configuration.

CPU   Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz (count 2, 12 cores each, 24 with hyperthreading)
OS    CentOS Linux release 7.2.1511 (Core)
RAM   64GB DDR4 2133MHz (32GB + 32GB NUMA)
HDD   2TB
NIC   Dual port Intel I350 Gigabit Network Card (2 x 1Gbps)

5.2 Key Value Store Evaluation


To evaluate the performance of selected key value stores in virtual environment, we have
performed the following tests.


Table 5.2: Virtual Machine Configuration.


Hypervisor QEMU 1.5.3 ( libvirt 1.2.17 )
OS Ubuntu server 16.04 (linux 4.4.0-38-generic)
HDD 60 GB
NIC Virtio (up to 20 Gbps)

Table 5.3: Data store configuration for performance test


           VMs   Server Instances            CPU     RAM
LevelDB    1     1                           6       4 GB
RAMCloud   2     1 server + 1 co-ordinator   6 + 1   4 + 4 GB
Redis      1     6                           6       4 GB

5.2.1 Performance Test

This experiment tests the number of reads and writes supported by each data store under varying load. Multiple configurations were tried for all data stores, and the best performing setup for each data store is used for comparison. Client and server virtual machines were pinned to different NUMA nodes. The client VM was configured with 12 CPU cores and 8GB of RAM. The server VM configuration for each data store was different. For LevelDB, the server consisted of a single virtual machine with 6 CPU cores and 4GB of RAM, running a single instance of the LevelDB server. For RAMCloud there were two virtual servers: one with 1 CPU core and 4GB of RAM running the co-ordinator, and another with 6 CPU cores and 4GB of RAM hosting the RAMCloud server. Redis ran 6 instances of redis-server on a single virtual machine with 4GB of RAM and 6 CPU cores. The server configuration used for this test is described in table 5.3. All key value stores were configured with an equal amount of CPU cores and RAM; RAMCloud needs a co-ordinator instance and thus requires one extra VM, but the co-ordinator does not contribute to performance, so the allocations can still be considered equal. Each experiment was conducted for a period of 12 minutes per data store. Readings were recorded by varying the number of client threads and recording the average reads and writes. Each client thread runs in a tight loop and does either a read or a write. Figure 5.1 shows the read performance of all the data stores over varying load, and figure 5.2 shows the corresponding read latency. Figure 5.3 shows the write performance of all the data stores, and figure 5.4 shows the
corresponding latency for write operations.

Figure 5.1: Read throughput with increasing number of client threads.

Figure 5.2: Read latency with increasing number of client threads.

Observations:

Figure 5.3: Write throughput with increasing number of client threads.

Figure 5.4: Write latency with increasing number of client threads.

1. LevelDB read throughput increases with increasing load and saturates at around 80000 read operations per second (ROPS), with peaks going up to 98000 ROPS. Read latency increases linearly with increasing load, and the best ROPS-to-read-latency ratio is achieved at a client count of 60, with an average read latency of 610 microseconds and 98225 ROPS. But the write throughput does not go up with increasing load and saturates at around 16000 write operations per second (WOPS). The average CPU utilization was at 35% of 6 CPUs, average disk utilization was at 58% with frequent peaks going up to 66%, and the average number of write requests to disk was 9000, with peaks going to 10000. Though nothing seems to be the bottleneck, a potential reason could be locking. But an experiment with all client threads operating on different tables (thus no lock contention) also gave similar results. Another guess is that the disk may be the bottleneck since it is accessed from a virtual machine, but this claim is yet to be verified. For reads the CPU operates at 60%, and network bandwidth was at 1.8 Gbps, which is much lower than the achievable 20 Gbps. I also observed the memory bandwidth, disk reads, number of page faults, number of pages allocated, number of pages being freed, etc., but nothing seems to be the bottleneck.

2. RAMCloud read and write throughput increases up to 6 client threads, but falls abruptly as we increase the number of clients beyond 6. Both read and write latency remain stable at less than 100 microseconds up to 6 threads, and increase linearly after that. CPU utilization on the server increases up to 6 threads and stabilizes at 55% for read operations and 72% for write operations. Network traffic peaks at 960 Mbps at 6 threads and stabilizes at 510 Mbps, for both read and write operations. But here also neither memory nor disk seems to be the bottleneck. The latency results are comparable to those published on RAMCloud's wiki [20]. I also wrote a mail about this issue to the RAMCloud developers mailing list, and they ran the same code on their machines but with a different setup (physical machines instead of VMs). Their results show increasing throughput with increasing client load, but the throughput they achieved at 32 threads is the same as the throughput achieved here with 6 threads, and they mailed results only up to 32 threads. I also tried to conduct a similar experiment, but instead of using physical machines directly, I hosted the client and server VMs on different physical machines. The results show an increase in throughput with increasing load, with the output stable thereafter. But the maximum throughput is now achieved at around 50 client threads, and even that is lower than the result achieved at 6 threads with client and server VMs on the same physical machine. Also, the network is the bottleneck in the latter setup, so stability cannot be guaranteed. Note that the RAMCloud client CPU reaches full utilization as the number of client threads becomes equal to the number of CPU cores. But the RAMCloud client bypasses the kernel's network stack and uses a polling mechanism for packet transfer, so we cannot say that the CPU is the bottleneck at the client, since it busy-polls whenever there is no work. Results for the read operations of this experiment are shown in figures 5.5, 5.6 and 5.7. I also experimented with increasing the client's CPU core count to 24, but the results are very similar to those with 12 CPU cores.

3. Redis's read and write throughput increases with increasing load up to 60 client threads, and starts saturating thereafter. CPU utilization for both reads and writes was at 45% of 6 CPUs, and network utilization was at 880 Mbps. Here also I observed memory and disk: average page faults per second were at 30000 for reads and 45000 for writes, and pages freed per second were at 180000. Disk usage was below 1% with rare peaks, since AOF was turned off for this experiment. Here also nothing seems to be the bottleneck.
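One sanity check on these closed-loop measurements is Little's law: with N clients each issuing one blocking request at a time, throughput is roughly N divided by the average latency. Taking the LevelDB reading above (60 clients, 610 microseconds average read latency):

```latex
\text{throughput} \approx \frac{N}{\text{latency}}
  = \frac{60}{610 \times 10^{-6}\,\text{s}} \approx 98{,}360\ \text{ops/s}
```

which is within 0.2% of the measured 98225 ROPS, suggesting the client threads were saturated by request latency rather than by any unaccounted client-side limit.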

Figure 5.5: RAMCloud read CPU utilization when client and server VMs are on separate machines.

5.2.2 Scalability Test

To test the scalability of each key value store on a single virtual machine, I varied the number of virtual CPU cores allocated to the server hosting the data store; RAM was fixed at 4GB. The client VM was configured with 12 CPU cores and 8GB of RAM. Client and server virtual machines were on different NUMA nodes, with no hyper-threading. I recorded the following observations. The peak number of read operations per second
Figure 5.6: RAMCloud read throughput when client and server VMs are on separate machines.

Figure 5.7: RAMCloud read latency when client and server VMs are on separate machines.

supported by each data store is shown in figure 5.8, and the peak number of write operations per second is shown in figure 5.9. Each data point in these graphs was obtained by varying the number of client threads and recording the best result. Each client thread runs in a tight loop and does either a read or a write to the data store. The client thread count was varied from 2 to 192 at 12 different points for each data store, and each such reading was conducted for a period of 60 seconds. The total run time for this experiment was 216 minutes.

Figure 5.8: Scaling of reads with increasing number of CPUs on single virtual machine.

Figure 5.9: Scaling of writes with increasing number of CPUs on single virtual machine.

Observations:

1. LevelDB reads scale well with increasing number of CPU cores, and average latency was around 430 microseconds for all readings. But the writes saturate at 16000 writes per second from the 2nd CPU onward, with an average latency of 750 microseconds. The average CPU utilization for writes was at 90% for 2 CPUs, 48% for 3 CPUs, and lower for more CPU cores. Average disk utilization across all readings with varying CPUs was at 62% with frequent peaks going up to 68%, and the average number of write requests to disk was 9000, with peaks going to 10000. Again, no bottleneck was identified.

2. RAMCloud requires at least 2 CPU cores to operate, and thus at 1 CPU its throughput is very low. It has the best latency response for both reads and writes compared to the other two: average read latency was at 100 microseconds and average write latency at 140 microseconds. But it operates well only at a specific amount of load, here 6 clients, as can be seen from the performance test. Its reads scaled up to 4 CPUs, since the CPU was the bottleneck until then, but reads saturate after this at around 60000 ROPS. I observed the memory bandwidth, network bandwidth, number of page faults, number of pages allocated, number of pages being freed, etc., but nothing seems to be the bottleneck. RAMCloud also performs well on writes compared to both of its counterparts, and writes seem to scale linearly with increasing number of CPUs.

3. Redis is a single-threaded server and is not designed to scale with the number of CPUs. Tests were conducted by running multiple redis-server instances (equal to or more than the number of CPU cores) on the same virtual machine. But Redis does not seem to scale well with increasing number of CPU cores. Average latency for both reads and writes was at 1650 microseconds. I also experimented with varying the persistence options for Redis, but the results remained the same. The results shown here were obtained with AOF (append only file) turned off.

5.2.3 Distributed scaling of key value stores

Here we will only look at distributed scaling of RAMCloud and Redis, since LevelDB does not work in a distributed environment. Each server VM used here for scaling has 2 CPU cores and 4GB of RAM. The client VM has 24 CPU cores with hyperthreading and 8GB of RAM. For RAMCloud, each server VM hosts a single RAMCloud server, and one extra VM runs the co-ordinator. For Redis, each server VM hosts two redis-servers, since redis-server is single threaded. Figures 5.10 and 5.11 show the scaling of reads and writes respectively for RAMCloud and Redis over a varying number of servers.

Figure 5.10: Read throughput of RAMCloud and Redis with varying VM instances.

Figure 5.11: Write throughput of RAMCloud and Redis with varying VM instances.

Observations:

1. CPU utilization for all RAMCloud instances was at 100% for all readings, and the
throughput increase is linear.

2. CPU utilization for all Redis instances was near 90% for the initial two readings, and kept falling with increasing instances. The increase in Redis throughput is small, and it does not scale linearly. Here also nothing was identified as the bottleneck.

5.3 Evaluations when plugged into LTE-EPC


All the experiments described here were performed by Pratik Satapathy for his thesis work, Designing a distributed NFV based LTE-EPC [24]. My contribution to these experiments was to configure and monitor the key value stores. The setup for these experiments was as follows. We used 3 instances of LevelDB, 2 of which were configured with 2 CPU cores and 2GB of RAM, while 1 instance was configured with 4 CPU cores and 4GB of RAM. SGW was assigned the 4-core instance, MME was assigned one 2-core instance, and PGW and HSS were assigned the other 2-core instance. RAMCloud was configured with two VMs: one 1-core VM with 4GB of RAM was assigned to the co-ordinator, and one 8-core instance was assigned to the RAMCloud server. Redis was configured to use 4 instances, each having 2 CPU cores and 2GB of RAM. A hash map was used as a local data store to form a baseline for this experiment. Table 5.4 describes the test setup for the data stores. The graph in figure 5.12 shows the throughput of the LTE-EPC control plane as a function of increasing number of concurrent UEs for different key value stores. Figure 5.13 shows the latency of attach/detach requests as a function of increasing number of concurrent UEs for different key value stores.

Table 5.4: Data store configuration for use in LTE-EPC


           VMs   Server Instances            CPU         RAM (GB)
LevelDB    3     3                           4 + 2 + 2   4 + 2 + 2
RAMCloud   2     1 server + 1 co-ordinator   8 + 1       4 + 4
Redis      4     2 per VM                    2 per VM    2 per VM

Observations:

Figure 5.12: Throughput of Distributed LTE-EPC with different data stores. Source: Pratik Satapathy [24].

Figure 5.13: Latency of Distributed LTE-EPC with different data stores. Source: Pratik Satapathy [24].

1. Hashmap was used as the baseline for this experiment; LevelDB was able to perform at 0.46 times the hashmap baseline, while RAMCloud's and Redis's performance was near 0.29 times that of the hashmap.

2. Bottleneck for this experiment was SGW’s CPU and not the data stores.

3. Peak CPU usage for the LevelDB instances was 43% for the SGW instance, 25% for MME and 30% for PGW and HSS. CPU usage for all data store instances kept increasing with the number of UEs, from 5% up to 38% for the SGW instance, 18% for the MME instance and 26% for the PGW instance. LevelDB's average CPU utilization with 4 CPU cores can go up to 70%, and with 2 CPUs it can go up to 90%. Thus LevelDB was not the bottleneck here.

4. Peak usage for RAMCloud was at 46% and the average usage after throughput
stabilized was at 36%. RAMCloud’s average CPU utilization with 6 CPU cores can
go up to 65% and thus RAMCloud was not the bottleneck here.

5. Peak usage for each of the Redis instances was at 40% and the average was at 34%. The distribution of load over the instances was uniform. Redis's average CPU utilization with 2 CPU cores can go up to 90%, and thus Redis was also not the bottleneck.

6. From the observations of the previous experiments, RAMCloud should have performed better, but here LevelDB performs best among all the key value stores, since it has better average read performance. RAMCloud performs better only at a specific load and is thus in second position. Though Redis has stable throughput, its average latency for both reads and writes is higher than the other two, and thus its performance is comparatively low.
Chapter 6

Conclusion & Future Work

Due to the rise in internet traffic, we need to scale network functions to meet the increasing demand. One way to scale NFs is horizontal scaling based on a shared state store. Throughout this report we have seen the performance of LevelDB, RAMCloud and Redis when used in a virtual environment. We have also evaluated the performance of LevelDB, RAMCloud and Redis when applied to the Distributed LTE-EPC [24]. Though the features described by RAMCloud were promising, it did not perform well in our evaluation. LevelDB gives the best results in our evaluation, but we have not evaluated LevelDB at higher scales. Though Redis promises distributed scalability, our evaluations have not been able to achieve that scale.

Future Work

• To identify the bottleneck for each key value store.

• To achieve the claimed performance of each key value store in virtual environment.

References

[1] Antirez Blog, Accessed: 2016-10-12, http://oldblog.antirez.com/post/redis-memcached-benchmark.html

[2] Atikoglu, B., Xu, Y., Frachtenberg, E., Jiang, S., and Paleczny, M., 2012 Jun.,
“Workload analysis of a large-scale key-value store,” SIGMETRICS Perform. Eval.
Rev. 40, 53–64.

[3] BoltDB, Accessed: 2016-10-12, https://github.com/boltdb/bolt

[4] Cattell, R., 2011 May, “Scalable sql and nosql data stores,” SIGMOD Rec. 39, 12–
27.

[5] Data on Disk Storage Cost, Accessed: 2016-10-12, http://www.statisticbrain.com/average-cost-of-hard-drive-storage/

[6] DeCandia, G., Hastorun, D., Jampani, M., Kakulapati, G., Lakshman, A., Pilchin,
A., Sivasubramanian, S., Vosshall, P., and Vogels, W., 2007 Oct., “Dynamo: Ama-
zon’s highly available key-value store,” SIGOPS Oper. Syst. Rev. 41, 205–220.

[7] Dormando Blog, Accessed: 2016-10-12, http://dormando.livejournal.com/525147.html

[8] Dragojević, A., Narayanan, D., Hodson, O., and Castro, M., 2014, “Farm: Fast re-
mote memory,” in Proceedings of the 11th USENIX Conference on Networked Sys-
tems Design and Implementation, NSDI’14 (USENIX Association, Berkeley, CA,
USA). pp. 401–414.

[9] Escriva, R., Wong, B., and Sirer, E. G., 2012, “Hyperdex: A distributed, searchable
key-value store,” in Proceedings of the ACM SIGCOMM 2012 Conference on Appli-
cations, Technologies, Architectures, and Protocols for Computer Communication,
SIGCOMM ’12 (ACM, New York, NY, USA). pp. 25–36.


[10] Fitzpatrick, B., 2004 Aug., “Distributed caching with memcached,” Linux J. 2004,
5–.

[11] Lakshman, A., and Malik, P., 2010 Apr., “Cassandra: A decentralized structured
storage system,” SIGOPS Oper. Syst. Rev. 44, 35–40.

[12] LevelDB, Accessed: 2016-10-12, http://leveldb.org/

[13] Lim, H., Fan, B., Andersen, D. G., and Kaminsky, M., 2011, “Silt: A memory-
efficient, high-performance key-value store,” in Proceedings of the Twenty-Third
ACM Symposium on Operating Systems Principles, SOSP ’11 (ACM, New York,
NY, USA). pp. 1–13.

[14] Lim, H., Han, D., Andersen, D. G., and Kaminsky, M., 2014, “Mica: A holistic
approach to fast in-memory key-value storage,” in Proceedings of the 11th USENIX
Conference on Networked Systems Design and Implementation, NSDI’14 (USENIX
Association, Berkeley, CA, USA). pp. 429–444.

[15] Liu, S., Nguyen, S., Ganhotra, J., Rahman, M. R., Gupta, I., and Meseguer, J., 2015,
“Quantitative analysis of consistency in nosql key-value stores,” in Proceedings of
the 12th International Conference on Quantitative Evaluation of Systems - Volume
9259, QEST 2015 (Springer-Verlag New York, Inc., New York, NY, USA). pp. 228–
243.

[16] LMDB, Accessed: 2016-10-12, https://symas.com/products/lightning-memory-mapped-database/

[17] LTE Wikipedia, Accessed: 2016-10-12, https://en.wikipedia.org/wiki/LTE_(telecommunication)

[18] Olson, M. A., Bostic, K., and Seltzer, M., 1999, “Berkeley db,” in Proceedings of the
Annual Conference on USENIX Annual Technical Conference, ATEC ’99 (USENIX
Association, Berkeley, CA, USA). pp. 43–43.

[19] Ousterhout, J., Gopalan, A., Gupta, A., Kejriwal, A., Lee, C., Montazeri, B., Ongaro,
D., Park, S. J., Qin, H., Rosenblum, M., Rumble, S., Stutsman, R., and Yang, S.,
2015 Aug., “The ramcloud storage system,” ACM Trans. Comput. Syst. 33, 7:1–
7:55.

[20] RAMCloud Wiki, Benchmark, Accessed: 2016-10-12, https://ramcloud.atlassian.net/wiki/display/RAM/clusterperf+September+29%2C+2014

[21] Redis, Accessed: 2016-10-12, http://redis.io/

[22] Rocks DB, Accessed: 2016-10-12, http://rocksdb.org/

[23] S., S. N., 2015, Implementation of NFV-based LTE EPC, Master’s thesis (Computer
Science And Engineering, Indian Institute of Technology Bombay, Mumbai, India).

[24] Satapathy, P., 2016, Designing a distributed NFV based LTE-EPC, Master’s the-
sis (Computer Science And Engineering, Indian Institute of Technology Bombay,
Mumbai, India).

[25] Scalability, Accessed: 2016-10-12, https://en.wikipedia.org/wiki/Scalability

[26] Abramova, V., Bernardino, J., and Furtado, P., "Experimental evaluation of nosql databases," Accessed: 2016-10-12, http://airccse.org/journal/ijdms/papers/6314ijdms01.pdf

[27] Zhang, K., Wang, K., Yuan, Y., Guo, L., Lee, R., and Zhang, X., 2015 Jul., “Mega-
kv: A case for gpus to maximize the throughput of in-memory key-value stores,”
Proc. VLDB Endow. 8, 1226–1237.
Acknowledgements

I am grateful to Prof. Mythili Vutukuru for her constant guidance and motivation through-
out the course of this project. I would also like to thank my colleague Pratik Satapathy
for the support and knowledge he shared with me. I am also thankful to all my friends
and family for their constant support and encouragement.

Dave Jashkumar
IIT Bombay
16 October 2016
