Introduction
The term network architecture is commonly used to describe a set of abstract principles for the technical design of protocols and mechanisms for computer communication. A network architecture represents a set of deliberate choices among many design alternatives, where these choices are informed by an understanding of the requirements.
The traditional enterprise architecture is often criticized as being expensive, complex, and proprietary. One reason for that characterization is that in the traditional enterprise, devices such as servers, storage, LAN switches, firewalls, and load balancers are dedicated to a single service or application. This approach results in stranded capacity and hence increases the overall cost of enterprise IT. It also increases the time it takes to deploy a new service or application, since new infrastructure must be designed, procured, installed, and tested before a new service or application can go into production.
Proposed by Mr. Hugues M. KAMDJOU & Mr. SOP DEFFO Lionel Landry | .
School year 2018-2019/ Second semester/ FET / Computer Engineering
Another reason why the traditional architecture has such negative characteristics is that the key components of the infrastructure (e.g., compute, storage, and networking) are hardware-centric. Using networking as an example, network components such as switches and routers have traditionally been based on dedicated appliances, and each appliance is itself based on dedicated hardware such as an ASIC (Application-Specific Integrated Circuit). The appliances are proprietary, and the evolution of the ASIC's functionality is under the control of the appliance's provider. Making matters worse, each appliance is configured individually, and as a result tasks such as provisioning and change management are very time-consuming and error-prone.
simple definition, we have examples of the constituents of the architecture and how it is
applied. Network architecture must typically specify:
- Where and how state is maintained and how it is removed.
- What entities are named.
- How naming, addressing, and routing functions inter-relate and how they are performed.
- How communication functions are modularized, e.g., into “layers” to form a “protocol stack”.
- How network resources are divided between flows and how end-systems react to this division, i.e., fairness and congestion control.
- Where security boundaries are drawn and how they are enforced.
- How management boundaries are drawn and selectively pierced.
- How differing QoS is requested and achieved.
A new architecture should lead to greater functionality, lower costs, and increased adaptability for all types of communication. The development of an architecture must be guided in part by an understanding of the requirements to be met; it is therefore vital to articulate a set of goals and requirements. The technical requirements for the Internet have changed considerably since 1975, and they will continue to change.
mobile terminals, but they may also be connected to a pre-existing network infrastructure
(such as fixed access networks or cellular systems) and use it to access hosts which are not
part of the ad hoc network.
Next, some inconsistencies between these extensions and the principles of the Internet architecture will be discussed. As the Internet grew with the introduction of the Web, two important techniques were developed to reduce the rate of address consumption: NAT and DHCP. Both approaches violate two Internet principles: the use of a unique, globally valid address for each host, and transparency (the end-to-end argument). NAT uses private addresses that are not globally visible, and its address translation breaks end-to-end connectivity. DHCP dynamically assigns a different address to a given host each time it connects to the network; the addresses are usually stable for a period of time but can be reassigned by the ISP. NAT and DHCP therefore prevent hosts from being used in environments where other hosts (peers) need to contact them directly, such as peer-to-peer networks, and they built a barrier to moving the Internet toward a decentralized peer-to-peer model. Several mechanisms have been proposed for NAT traversal (NAT-T), used especially in peer-to-peer applications, VoIP, and online games; STUN and TURN are two such NAT-T protocols. STUN is not a self-contained NAT traversal solution: it is not applicable in all NAT deployment scenarios and does not work correctly with all of them. TURN allows a host to control the operation of a relay and to exchange packets with its peers through that relay; it has been standardized in RFC 5766. SIP likewise does not work through NAT unless the NAT is extended with NAT-T. Nevertheless, NAT and DHCP are very useful for facilitating renumbering, and NAT provides a measure of security. IPsec is fully consistent with the Internet's simplicity principle, encrypting information end to end at the transport layer (the end-to-end argument), but it does not function properly through NAT, nor with caching mechanisms.
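As a concrete illustration of NAT traversal signaling, the sketch below builds the fixed 20-byte header of a STUN Binding Request as defined in RFC 5389 (the magic cookie 0x2112A442 and the XOR used by XOR-MAPPED-ADDRESS). The helper names are our own; a real client would also send this datagram to a STUN server over UDP and parse the response.

```python
import os
import struct

STUN_MAGIC_COOKIE = 0x2112A442  # fixed value defined in RFC 5389
BINDING_REQUEST = 0x0001        # STUN message type: Binding Request

def build_binding_request():
    """Build the fixed 20-byte STUN header: type, length, magic cookie, transaction ID."""
    transaction_id = os.urandom(12)  # 96-bit random transaction ID
    header = struct.pack("!HHI", BINDING_REQUEST, 0, STUN_MAGIC_COOKIE)
    return header + transaction_id

def decode_xor_port(xor_port):
    """Recover the real port from a XOR-MAPPED-ADDRESS port field (XOR with the cookie's top 16 bits)."""
    return xor_port ^ (STUN_MAGIC_COOKIE >> 16)

msg = build_binding_request()
print(len(msg))  # 20: header only, no attributes
```

The XOR encoding exists precisely because some NATs rewrite literal IP addresses they find inside packet payloads; obfuscating the reflexive address keeps it intact end to end.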
Firewalls also violate the end-to-end argument, because hosts behind a NAT/firewall cannot receive incoming traffic initiated by a foreign host. On the other hand, DNS provides host identification via domain names and was a first step toward the identification/localization decoupling necessary for IP mobility.
IPv6 aims to restore the Internet's architectural principles of simplicity and transparency by increasing the address space and eliminating NAT. Moreover, it adds auto-configuration functionality to IP, making it “plug-and-play”, and together with IPsec it provides security. Nevertheless, adopting IPv6 is painful and expensive, and it requires mechanisms for IPv4/IPv6 interoperability as well as v6 versions of other protocols. IntServ and DiffServ violate the Internet principle of simplicity, making the IP routers' job harder and increasing IP-layer processing and complexity.
Web-based technologies will play a vital role in educating the masses who do not attend
formal schools or colleges. The concept of data warehousing and data mining is fast changing
and soon there will be virtual data warehouses.
There is an ever-increasing worldwide demand for broadband communications. A data communication network basically consists of two main components, viz. i) the Wide Area Network (WAN) and ii) the Local Area Network (LAN). These two networks are distinguished from each other by the physical area they cover. A WAN is used for inter-city, inter-country, or inter-continental communication. A LAN, on the other hand, is used for connecting computers in a building or a university campus, i.e., over a comparatively smaller area. Inter-continental traffic is carried either through satellite links or through submarine cables.
Communications are broadly divided into two categories, namely wired and wireless communications (see Fig. 3). Wireless systems mainly include satellite systems, cellular systems, paging systems, Bluetooth, and wireless LANs. Recent research mainly focuses on performance issues of wireless LANs, i.e., wireless fidelity (Wi-Fi) and worldwide interoperability for microwave access (WiMAX). Wi-Fi is based on the IEEE 802.11 standard while WiMAX is based on IEEE 802.16. Both standards are designed for Internet Protocol applications. Both Wi-Fi and WiMAX wireless networks support real-time applications such as Voice over Internet Protocol (VoIP) and are frequently used for wireless Internet access. The WiMAX standard provides fixed services (IEEE 802.16d-2004) as well as mobility services (IEEE 802.16e-2005). However, WiMAX has some open issues, namely frame allocation, scheduling, traffic management, security, and performance.
Wireless communications and networking have been continually evolving to improve and become part of our lifestyle. This uninterrupted development is due to the many research projects and practices being carried out to improve the quality of services and applications supported by networking technologies. Initially, most computer networking research aimed to let users share thoughts and facts as textual data between addressable devices. Since then, we have witnessed users come to play the roles of both producer and consumer at the same time. These newly emerging requirements gave birth to Cloud Computing (CC), Data Centric Networking (DCN), and other advancements in IEEE standards.
Wi-Fi and LTE are the most prominent wireless access technologies today. The migration from PCs to mobile devices has led to an exponential increase in data usage over wireless technologies. The 84.5 MHz of unlicensed spectrum in the 2.4 GHz band allocated for Wi-Fi has become saturated and is hard to use for new wireless applications. Therefore, Wi-Fi stakeholders have been showing interest in using the 5 GHz bands. There is up to 750 MHz of unlicensed spectrum in the 5 GHz band that falls under the Unlicensed National Information Infrastructure (U-NII) rules of the FCC.
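A back-of-the-envelope calculation, using the band figures quoted above (actual U-NII sub-band rules and channelization differ), shows why the 5 GHz band is attractive:

```python
def channel_count(band_mhz, channel_mhz):
    """How many non-overlapping channels of a given width fit in a band (ideal packing)."""
    return int(band_mhz // channel_mhz)

print(channel_count(84.5, 20))  # 4: the 2.4 GHz band fits only a handful of 20 MHz channels
print(channel_count(750, 20))   # 37 channels of 20 MHz across the 5 GHz U-NII bands
print(channel_count(750, 80))   # 9 wide 80 MHz channels for high-throughput use
```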
During the past few years, video streaming (e.g., Internet Protocol Television (IPTV),
Video on Demand (VoD), online video conference, and mobile video gaming) has
become a killer Internet multimedia application. According to a recent Cisco study and
forecast, video content will account for a massive 80% of global Internet consumption
by 2019. With the rapid growth of video applications, driven by ever-growing demands from Internet users, content-rich video streaming will be the most attractive service in future networks. However, the current best-effort Internet raises many challenging issues as a platform for video streaming services: today's Internet provides no quality-of-service guarantees for video content delivery, and the ever-growing user interest in the variety of live and online video underscores the need for advanced video communication technologies, infrastructures, and applications
for such real-time video streaming services. In the meantime, information-centric
networking (ICN), software-defined networking (SDN), mobile edge computing
(MEC), and network function virtualization (NFV) technologies are emerging as
building blocks for the future networks.
5G technology will power a wide range of future industries, from retail to education, transportation to entertainment, and smart homes to healthcare. It will make mobile even more essential than it is today. Researchers predict a global social and economic impact of 5G,
which will benefit entire economies and societies. It is expected to produce trillions of dollars in revenue in the coming years. The applications of 5G technology include:
- High-speed mobile networks
- Entertainment and multimedia
- Internet of Things, connecting everything (smart homes, security and surveillance, drone operation, autonomous driving, healthcare and mission-critical applications, fleet management, smart farming, industrial IoT, smart cities, logistics and shipping).
only on the design of the physical (PHY) and Medium Access Control (MAC) layers but also
on the upper-layer protocols.
2.1. Requirements for Next-Generation Vehicular Communication
Networks
Next-generation automotive systems are expected to provide multiple services with diverse
goals and requirements. However, providing an exhaustive list of all applications that can
possibly be offered through vehicular communication systems is rather difficult, considering
their large number and wide variety. In the following, therefore, we focus on four “macro-applications” that, for their generality, complementarity, and significance, we believe are good representatives of the main types of next-generation automotive services. Although the
requirements of such services are not yet fully specified, some qualifying characteristics can
be outlined as follows:
- Infotainment generically refers to a set of services that deliver a combination of information and entertainment. Infotainment requires low latency and stable throughput (especially for streaming high-quality video content) and the dynamic maintenance of multicast communication (e.g., for gaming), which can be an issue. Reliability requirements are typically loose for these services.
- Basic Safety services are typically characterized by very strict requirements. While the exchanged safety messages are typically small (up to a few hundred bytes), latency must be very low to ensure prompt reactions to unpredictable events. V2X connections must also be very reliable and stable, given the sensitive nature of the exchanged information and the potential consequences of a communication failure.
- Cooperative Perception services enhance the sensing capabilities of a vehicle by sharing information with neighboring vehicles and infrastructure, with the final goal of extending the driver's perception range beyond the line-of-sight or field-of-view of a single vehicle.
- Platooning refers to services that allow a group of vehicles following the same trajectory to travel in close proximity to one another, nose-to-tail, at highway speeds. A significant amount of information needs to be shared over V2X communications. In addition to the strict latency requirement, connection reliability and stability are also very critical.
Fig. 5 provides a visual and qualitative comparison of the different requirements of the above-listed target applications. As we can see, strict reliability constraints and small latency values
are common to all such applications, while stable communications should be guaranteed
especially for safety-related services.
Fig. 5. Visual representation of the features and requirements of the vehicular applications and services presented in this section.
From the above discussion, it is apparent that advanced vehicular services are expected to
challenge the capabilities of current communication technologies, calling for innovative
solutions. In the following, in particular, we discuss some specific requirements for the V2X
communication system, called Communication Key Performance Indices (CKPIs) that go
beyond the classical Quality-of-Service (QoS) metrics such as minimum required bitrate,
maximum end-to-end latency and jitter, and maximum packet loss probability.
Range. In a vehicular scenario, the classical QoS requirements have to be associated with a spatial range. In fact, most services will set tighter communication requirements toward nearby vehicles, so the resulting aggregate CKPI requirements are likely to be stricter in close proximity to the transmitter, and progressively more relaxed with distance.
Speed. The CKPIs should also account for the speed of the vehicles. In fact, stricter requirements (e.g., in terms of latency and connection stability) are usually associated with faster nodes, whose communication quality levels should therefore be monitored more closely than for slower vehicles.
Directionality. CKPIs may also depend on the communication direction. For example, safety services are likely to require higher transmission capacities toward the vehicles located within the range of the transmitting beam, to avoid accidents. Therefore, the space-dependent characterization of the CKPIs described above can actually be anisotropic in space, with some directions more demanding than others in terms of communication requirements.
Node Density. The CKPIs can also be affected by the density of nodes. On the one hand, a higher density may require a higher bitrate to maintain coordination among the cars. On
the other hand, the bitrate and reliability requirements for broadcasting information may be
relaxed in the presence of multiple vehicles that share the same data content.
Broadcast and Multicast. A significant portion of vehicular applications requires
broadcast-type V2V communications (typically enabled by omnidirectional radios).
While the supplemental use of directional radios (e.g., as in mmWave) could help
improve the communication performance, they typically do not provide native support
for broadcast. Some other services, such as platooning, may require multicast links
instead. The support of both broadcast and multicast connectivity, hence, represents
another CKPI for vehicular systems.
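To make the idea concrete, a hypothetical CKPI could be expressed as a function of range and speed. The sketch below is purely illustrative: the class name, fields, and the linear relaxation rule are our own assumptions, not part of any standard.

```python
from dataclasses import dataclass

@dataclass
class CKPI:
    """Hypothetical CKPI model: the latency bound relaxes linearly with
    distance and tightens for fast vehicles (illustrative assumption)."""
    base_latency_ms: float     # latency bound right at the transmitter
    relax_ms_per_100m: float   # relaxation per 100 m of range

    def latency_bound_ms(self, distance_m, speed_kmh):
        bound = self.base_latency_ms + self.relax_ms_per_100m * distance_m / 100
        # per the "Speed" criterion: faster nodes get a tighter (here, halved) bound
        return bound / 2 if speed_kmh > 100 else bound

safety = CKPI(base_latency_ms=10, relax_ms_per_100m=5)
print(safety.latency_bound_ms(200, 50))   # 20.0 ms at 200 m for a slow vehicle
print(safety.latency_bound_ms(200, 130))  # 10.0 ms for a fast vehicle at the same range
```

The point is only that a CKPI is richer than a single QoS number: the same service maps to different bounds depending on where and how fast the peer is.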
2.2. Enabling Radio Technologies for Next-Generation Vehicular
Networks
In this section, we present the features and limitations of target radio technologies (RTs) that are expected to play a key role in next-generation automotive applications.
a) Dedicated Short-Range Communications (DSRC)
The IEEE 802.11p standard supports the PHY and MAC layers of the Dedicated Short-Range
Communications (DSRC) transmission service. It can operate without a network
infrastructure, removing the need for prior exchange of control information and thus bringing
a significant advantage in terms of latency. However, the throughput and delay performance
can degrade as the network load increases (e.g., due to high user density), mainly because of
the limited bandwidth and the “hidden node” problem. Furthermore, some V2X applications
may require reliable transmissions beyond the communication range of IEEE 802.11p, which
is typically limited to hundreds of meters. Moreover, the maximum data rate supported by
DSRC, between 6 and 27 Mbps for each channel, may not be sufficient to sustain the
transmission rates required by some next-generation automotive applications. For instance,
high resolution sensors may require more than 50 Mbps, while rates produced by cameras
range from around 10 Mbps for low-resolution compressed images up to around 500 Mbps for
high-resolution images.
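A quick budget check makes the mismatch explicit (the 27 Mbps channel rate and the sensor figures are taken from the text; the helper itself is illustrative and ignores MAC overhead, so it is optimistic):

```python
def channel_can_carry(channel_mbps, flow_rates_mbps):
    """True if the aggregate offered load fits within the channel's raw rate."""
    return sum(flow_rates_mbps) <= channel_mbps

print(channel_can_carry(27, [50]))      # False: one 50 Mbps sensor already exceeds DSRC
print(channel_can_carry(27, [10, 10]))  # True: two low-resolution camera streams fit
```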
b) Long-Term Evolution (LTE) Cellular
LTE offers ubiquitous coverage and collision-free packet transmission, but the support of
vehicular communication services may still be limited. For example, access and transmission
latency increase with the number of users in the cell, thus raising scalability issues. Despite the almost ubiquitous coverage of LTE, the connection may still not always be available, or
good enough to satisfy the stringent reliability requirements under weak coverage (e.g., in
tunnels, underground parking lots, rural areas, mountains). Finally, the maximum data rate of
4G-LTE systems is limited to around 100 Mbps for high mobility (though much lower rates
are typical), which may not be sufficient to handle the potential gigabit rates that can be
generated by next-generation vehicles.
c) Wi-Fi
Wireless networking based on the IEEE 802.11 standard, i.e., Wi-Fi technology, is popular
and broadly available at low cost for home networks. Raw data rates from 10 to 300 Mbps have been shown to scale to several hundred concurrently active users when the network is properly designed. However, Wi-Fi is mainly used by stationary or slowly moving indoor and outdoor users, while in a vehicular context high mobility and link instability must be considered. Moreover, despite the high data rate, the transmit speed may still be insufficient to fully satisfy the requirements of some next-generation automotive applications. Finally, despite the huge
popularity of Wi-Fi, the availability of access points that can be potentially used for V2X
communications (e.g., at street corners, co-located with traffic lights, in parking lots, gas
stations, cars, and so on) is still scarce, and the resulting intermittent connectivity will affect
both the data rate and the latency of many vehicular services.
Table 1 provides a schematic and qualitative comparison of the above-listed target RTs. As we can see, most radio interfaces ensure a wide range but with relatively high latency and small-to-medium throughput, while only a few technologies (i.e., mmWave and VLC systems) can provide high throughput.
driven the redesign of the communication stack. In this section, we propose some guidelines
for the design of next-generation MAC, network and transport layers specifically tailored to
high-frequency vehicular communication systems.
a) Medium Access Control (MAC) Protocol Design
Medium Access Control (MAC) layer design has been extensively studied in the context of
DSRC and 4G-LTE, while only a limited amount of literature has investigated solutions for
other types of radios that are expected to be available in next-generation automotive systems.
Conventional MAC solutions (Fig. 6) are suitable for situations in which the velocity/position
of the vehicles can be accurately predicted. However, this may not be the case for V2X
communication systems operating at high frequencies, mainly due to the intrinsic variability
of the channel. Moreover, most recent solutions lack consideration of some important KPIs
like reliability and delay. In particular, mmWave radio links require new schemes to enable
vehicles and infrastructures to quickly determine the best directions to establish directional
links. This functionality can hardly be supported by traditional communication protocols,
which are often significantly affected by the high speed of the nodes and by the presence of
frequent blockages on the propagation path.
Fig. 6. Proposed solutions for MAC protocol design for vehicular networks
b) Network and Routing Protocol Design
While the literature on network protocols for legacy vehicular scenarios is quite rich, little
work exists regarding the communication performance of the network layer (especially
routing) in a next-generation V2X context. More specifically, traditional routing solutions can
be classified into two categories, as reported in Fig. 7: (i) topology-based routing protocols,
and (ii) position-based routing protocols.
Fig. 8. Proposed solutions for transport protocol design for vehicular networks
Like any other computing system, vehicular communication networks can be plagued by vulnerabilities: connected nodes must thus be designed with security in mind, in order to limit the adversaries' ability to endanger vehicle operations, as well as driver and passenger safety.
millions of applications. The 5G networks, however, raise a number of issues that need to be tackled. For instance, how can we guarantee that devices interact with each other with a latency of less than one millisecond?
Although such a concern is not a critical requirement in general telecommunication systems with only voice or data services, it is vital in specific areas with particular services, such as healthcare, military, and disaster communication systems.
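Physics alone makes the sub-millisecond target demanding: even at the speed of light, a 1 ms one-way budget bounds the reachable distance, which is one reason computation must move toward the network edge. A rough bound (vacuum speed of light; fiber propagation and processing shrink the practical radius considerably):

```python
C_M_PER_S = 299_792_458  # speed of light in vacuum, m/s

def max_one_way_km(latency_budget_s):
    """Upper bound on the one-way distance reachable within a latency budget."""
    return C_M_PER_S * latency_budget_s / 1000.0

print(round(max_one_way_km(1e-3)))  # about 300 km for a 1 ms one-way budget
```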
The applications of 5G cellular networks will spread across smart city infrastructure with high
capacity storage, intelligent transportation, and smart communication systems. The 5G
evolution is being fueled by a number of factors such as the explosion of mobile data traffic,
the increasing demand for high data rates, and the growth in connected and searchable devices
for low-cost, energy-saving, and environmentally friendly wireless communications.
This chapter is devoted to outlining various research directions for 5G networks that have been or are being studied, together with their new design challenges and the relevant work on each.
3.2. Evolution of Mobile Technologies from 1G to 5G
The first-generation (1G) mobile communications systems were launched by Nippon Telegraph and Telephone (NTT) in 1979 and deployed worldwide in the 1980s. The 1G systems employed analog wireless access with narrowband frequency division multiple access (FDMA) and a channel spacing of around 25–30 kilohertz (kHz). Then, the second-generation (2G) systems, namely the North American Interim Standards 54 and 136 (IS-54/136), the European Global System for Mobile (GSM) standard, and the Japanese Personal Digital Cellular (PDC) standard, were deployed in the 1990s, all of which adopted time division multiple access (TDMA) with channel spacings ranging from 25 to 200 kHz.
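These channel spacings translate directly into how many carriers an operator can fit into a given allocation. A simple illustration (the 25 MHz allocation is a hypothetical figure; note also that each GSM 200 kHz carrier is further time-shared among 8 TDMA slots):

```python
def carriers_in_band(band_khz, spacing_khz):
    """Number of FDMA carriers that fit in a band at a given channel spacing."""
    return int(band_khz // spacing_khz)

print(carriers_in_band(25_000, 25))   # 1000 carriers at 1G's 25 kHz spacing
print(carriers_in_band(25_000, 200))  # 125 GSM carriers at 200 kHz spacing
```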
Along with FDMA and TDMA, a wireless access technique based on code division multiple
access (CDMA) was developed and standardized first in the IS-95 by Qualcomm in 1995 with
its initial version called cdmaOne. With CDMA, the channel spacing is 1250 kHz, which is much wider than in the traditional 2G systems. The data transfer rate in the 2G systems, however, is only around 9.6 kilobits per second (kbps), which does not meet the requirements of high-resolution multimedia communications. This accordingly motivated
the development of the third-generation (3G) systems. The 3G systems were expected to
provide advanced services with much higher data rates of megabits per second (Mbps). The
first 3G networks were introduced by NTT DoCoMo in Japan in 1998 and were commercially
launched in October 2001 also by NTT DoCoMo. The 3G systems were then deployed
worldwide; for instance, in South Korea in January 2002 by SK Telecom, in the United States
in 2002 by Monet Mobile Networks, and in the United Kingdom in 2003 by Hutchison
Telecom.
With higher data rate requirements, the cellular networks kept evolving from the 3G to the
fourth-generation (4G) systems. As a candidate standard for the 4G systems, the first-release
Long-Term Evolution (LTE) standard was commercially deployed in Norway and Sweden in
2009. The 4G systems enable a very high data transmission rate of up to 1–1.5 Gigabits per
second (Gbps) for low-mobility communication, such as pedestrians, stationary users,
nomadic and local wireless access, and up to 100 Mbps for high-mobility communication,
such as mobile access from trains and cars. The 4G technologies were regarded as the future
standard of wireless devices, allowing users to download and transfer high-quality
multimedia. There are two core 4G standards, Worldwide Interoperability for Microwave Access (WiMAX) and LTE, which use different frequency bands. LTE has been evolving into LTE-Advanced (LTE-A) since the fall of 2009, and various LTE services were launched in South Korea, the United States, and the United Kingdom in 2012.
Beyond the current 4G systems with LTE-A standards, the fifth-generation (5G) wireless
systems have been proposed to be the next telecommunications standards aiming at providing
a higher capacity, a higher reliability, and a higher density of mobile broadband users. In
order to address the high traffic growth and increasing demand for high-bandwidth
connectivity, the development of 5G networks becomes crucial to support a massive number
of connected devices with real-time services and high-reliability communications in critical
applications. By providing wireless connectivity for a diversity of applications from wearable
devices, smartphones, tablets, and laptops to utilities within smart homes, transportation, and
industry, the 5G networks are promising to provide ubiquitous connectivity for any kind of
devices, enabling and accelerating the development of Internet of Things (IoT).
Further than improving solely the maximum throughput, the 5G systems are expected to
provide lower power consumption dealing with battery issues, concurrent data transfer paths,
lower outage, and better coverage for cell-edge and high-mobility users. Moreover, the 5G systems are required to be more secure and to offer better cognitive functionality, via software-defined radio (SDR), as well as artificial intelligence (AI) capabilities.
With the employment of SDR, a lower infrastructure cost could be achieved, which might help reduce traffic fees while users experience high-quality
continuous exchange of data puts a strain on the network and the battery life of the devices.
The new wireless network is expected to bring a 90% reduction in network energy usage, with up to 10 years' worth of battery life for low-power IoT devices.
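The 10-year figure is plausible for a heavily duty-cycled device; an idealized estimate (ignoring self-discharge and temperature effects, with hypothetical coin-cell capacity and average current figures) looks like this:

```python
def battery_life_years(capacity_mah, avg_current_ua):
    """Idealized battery life: capacity divided by average current draw."""
    hours = capacity_mah * 1000.0 / avg_current_ua  # mAh -> uAh, divided by uA
    return hours / (24 * 365)

print(round(battery_life_years(230, 2.6), 1))  # ~10.1 years: a 230 mAh coin cell at a 2.6 uA average draw
```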
It is a long way until 5G becomes mainstream, but businesses need to start now on developing and reimagining services and products to leverage 5G's superior capabilities. Here is a look at a few industries where 5G and IoT together can bring about disruption:
Self-driving cars: Sensors on self-driving cars generate a large amount of data, measuring temperature, traffic conditions, weather, GPS location, etc. Producing and assimilating such quantities of data consumes a lot of energy. Such cars are also heavily reliant on real-time transmission of information to provide optimum service. With high-speed connectivity and low latency, however, it will become possible for these intelligent cars to constantly collect all sorts of data, including time-critical data, on which algorithms can work to autonomously track the working condition of the car and improve future designs.
Healthcare: The medical field will also see improvements in its services as all sorts of medical devices become IoT-enabled. Rural areas and other remote locations without proper healthcare facilities will benefit hugely from IoT connectivity. With such low latency, world-class healthcare services, such as surgeries performed remotely, become a possibility.
Logistics: 5G connectivity will make possible sophisticated IoT tracking sensors that could transform logistics operations from end to end. Not only will the high speeds and low latency make it possible to gather data in real time, but the energy efficiency will also make it possible to accumulate data of a more varied nature, at all points within a supply chain and over a very long period of time. A consumer would have access to detailed information such as where the fish she just bought was caught, what temperature it was stored at during transportation, and when it was delivered to the retailer.
Smart cities: 5G will enable wider applications in smart city initiatives, from water and
waste management and traffic monitoring to enhanced healthcare facilities. Smart cities will
enjoy the benefits of the new generation network as more and more sensors make their
way into city infrastructure. Not only will 5G be capable of handling the massive data
load, but it will also make the integration of various intelligent systems, constantly
communicating with each other, a reality, bringing the vision of a truly connected city
closer.
Retail: IoT for retail will see a positive impact from the coming of 5G as retailers attempt to
shape customer engagement and experiences through mobile phones. Better connectivity
and a larger number of devices connected to the network will allow them to interact with
shoppers faster through improved digital signage. New and innovative ways of customer
engagement that incorporate Augmented Reality and Virtual Reality will become more
popular. Retailers will be able to enhance the shopping experience by driving
omnichannel retail practices more effectively.
5G and IoT together will also help in bringing every item on the shelf to the internet by
creating digital twins for them. While the number of connected hardware devices is expected to
be in the billions, the potential for ordinary consumer products with digital twins to become part
of the new Internet of Things is considerably greater. Consumer products need not be
constantly connected to the internet like hardware devices, but will be able to send and
receive information about themselves as connected smart products through event-based
interactions with consumers and other entities, via scanning, RFID readers, NFC taps and
more. The current wireless infrastructure is not up to the task of managing so many products
on the network, but 5G will make it possible. Smart packaging and digital labels will
transform the way retailers manage inventory and track logistics, as well as provide a hotbed
of innovation for creatively interacting with consumers.
Chapter 4: Data Forwarding in Future Networks
The recent growth of Internet traffic and the forecast of new applications in the context of
fifth-generation network technology (5G) lead to changes in the communication paradigm of the
future Internet. Information Centric Networking (ICN) is a new architecture that can cope
with the emerging pattern of communication based on named data instead of data locations.
In ICN architecture, Forwarding Strategies (FS) are a relevant feature to improve the network
performance. Therefore, this chapter examines forwarding strategies for an ICN scenario with
new 5G applications such as smart grids networking.
4.1. Information Centric Networking (ICN)
There is a need to support a growing demand for faster and more massive content delivery for 5G
applications, such as vehicle-to-vehicle communication, virtual and augmented reality services,
industrial automation, smart homes and smart grids. Each of these applications has different
requirements for latency, peak data throughput and connection density that often cannot be
met by current networks.
Information Centric Networking (ICN) is a promising candidate for a future Internet
architecture that can improve communication for 5G scenarios, with native features such
as content aggregation, dynamic forwarding and distributed network caching. Currently, there
are many research efforts in the direction of Forwarding Strategies (FS) for ICN. Therefore, this
chapter investigates three different forwarding strategies widely used in a scenario with
a 5G application and evaluates their performance in terms of delay and throughput.
In ICN the data delivery process occurs in a pull-based fashion through the exchange of two
kinds of packets: interests and data. Both types of packets carry a name that identifies the
chunk of data being requested. Interests are sent by a consumer and driven by
forwarding strategies toward a copy of the requested item. Once an interest reaches a node that
has the requested data in its cache, the node returns a data packet carrying both the name
and the content along the reverse path.
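The interest/data exchange described above can be sketched as follows; the node class, method names and content names here are illustrative, not part of any real ICN implementation:

```python
# Minimal sketch of ICN's pull-based exchange: a consumer sends an interest
# carrying a content name, and any node holding a cached copy answers with
# a data packet carrying the same name.

class IcnNode:
    def __init__(self):
        self.content_store = {}  # name -> content (the node's cache)

    def publish(self, name, content):
        self.content_store[name] = content

    def on_interest(self, name):
        # Return a (name, content) data packet if the chunk is cached;
        # otherwise the interest would be forwarded upstream (not modeled).
        if name in self.content_store:
            return (name, self.content_store[name])
        return None

node = IcnNode()
node.publish("/videos/demo/chunk0", b"first chunk")
print(node.on_interest("/videos/demo/chunk0"))  # data packet comes back by name
print(node.on_interest("/videos/demo/chunk9"))  # cache miss: forward upstream (None here)
```

Note that the consumer never names a host: the same interest is satisfied by whichever node happens to hold a copy.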
The FS are responsible for providing the intelligence to decide, for each interest packet,
on which outgoing interface it will be forwarded. Because ICN is name-based, this decision can
be made in a more content-aware fashion than with current protocols over end-to-end networks.
Three different FS are employed here, explained below:
Multicast: The Multicast strategy relies on flooding, exploiting link
redundancy to improve delivery reliability. It therefore sends
every interest out of all interfaces (except the downstream one).
Best-Routing: This strategy forwards a new interest to the lowest-cost link (except
downstream). If a retransmission of an interest is received, it is forwarded to the
lowest-cost interface that was not previously used.
NCC (Content-Centric Networking): This strategy uses the lowest-delay interface to
forward packets and, in case of timeout, sends interests over other routes to discover the
upstream with the lowest delay. The timeout is initialized between 8 and 12 milliseconds.
If the forwarding strategy gets a response within the timeout, the timeout is decreased by
1/128 of its value; otherwise, it is increased by 1/8.
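The NCC timeout adaptation described above can be sketched as follows. This is a simplification: real CCN/ndnSIM bookkeeping is per-prefix and more involved, and the function names here are made up.

```python
import random

def initial_timeout():
    # "initialized between 8 and 12 milliseconds"
    return random.uniform(8.0, 12.0)

def update_timeout(timeout_ms, answered_in_time):
    if answered_in_time:
        # response arrived within the timeout: tighten slowly, by 1/128
        return timeout_ms - timeout_ms / 128
    # timeout expired: back off quickly, by 1/8
    return timeout_ms + timeout_ms / 8

t = 10.0
t = update_timeout(t, True)   # 10 - 10/128 = 9.921875 ms
t = update_timeout(t, False)  # 9.921875 * 1.125 ~ 11.16 ms
```

The asymmetry (small decrease, large increase) keeps the estimate close to the observed delay while reacting quickly when a path degrades.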
The belief is that this conceptually simple shift will have far-reaching implications for how
people design, develop, deploy, and use networks and applications.
The core problem of IP networking is that it is location-based: everything has an IP address
that defines “where” the location is. However, we live in an information-centric world.
Hence, instead of using IP addresses, which describe the “where,” why not use something that
describes the content, the “what”?
With Named Data Networking, the naming schema at the application data layer becomes the
names at the networking layer. Therefore, there is no need for IP addresses anymore, nor for
upgrades to IPv6. We still have the Open Systems Interconnection (OSI) model and the
protocol stack; we are just taking out the IP part.
Let's understand it with an example. Today, when we dial phone numbers, they are
numbers, but physical addresses are actually names. If I want to send a package to
someone, I send it to the physical address that has a name attached to it. Those names can be
descriptive and hierarchical; just take a look at the ZIP code format. Say I want to ship a
box to 100 people in a neighborhood: I can do it without having everyone's complete
address, because it can be done with names based on the hierarchical structure of the ZIP code.
This hierarchical naming concept is similar to that of Named Data Networking.
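This hierarchical-naming idea maps naturally onto longest-prefix matching, which is how an NDN forwarder looks up interests against its forwarding table (FIB). A minimal sketch, with made-up names and faces:

```python
# Like a ZIP code, an NDN name is a list of components; a forwarder matches
# an interest against the longest known prefix rather than an exact address.

def components(name):
    return [c for c in name.split("/") if c]

def longest_prefix_match(fib, name):
    parts = components(name)
    # Try the full name first, then progressively shorter prefixes.
    for i in range(len(parts), 0, -1):
        prefix = "/" + "/".join(parts[:i])
        if prefix in fib:
            return fib[prefix]
    return None  # no route known for this name

fib = {"/edu/ucla": "face1", "/edu/ucla/videos": "face2"}
print(longest_prefix_match(fib, "/edu/ucla/videos/demo.mpg/chunk1"))  # face2
print(longest_prefix_match(fib, "/edu/ucla/slides/lec1.pdf"))         # face1
```

One FIB entry for a short prefix covers every name beneath it, just as one ZIP code covers a whole neighborhood.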
Named Data Networking started in 2010 as an NSF research project aimed at creating an
architecture for the future Internet. Today, it completely changes the paradigm used by
traditional networks.
Named Data Networking is a network service that has been evolving the Internet's host-based
packet delivery model. NDN directly retrieves objects by name in a secure, reliable and
efficient way. The prime objective is to secure information from the users all the way to the
data, and not just the host-to-host or client-server communication, which is what transport
layer security (TLS) normally does.
Unlike TLS, which secures the path from the user to the host or container, NDN takes us to the
next level and secures the actual data itself. TLS only encrypts the channel; it
does not encrypt from the user through the application to the data.
When you are encrypting at the data level, you no longer need middleboxes. Everything is
done in a single software stack that can be run everywhere.
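This data-centric security model can be illustrated with a small sketch. Real NDN signs each data packet with the producer's public key; the HMAC below is only a self-contained stand-in for that signature, and the key and names are hypothetical:

```python
# Securing the data itself rather than the channel: the signature travels
# with the packet, so authenticity holds no matter which cache served it.

import hashlib
import hmac

PRODUCER_KEY = b"demo-producer-key"  # hypothetical key, for the sketch only

def make_data_packet(name, content):
    # Bind name and content together under the producer's "signature".
    sig = hmac.new(PRODUCER_KEY, name.encode() + content, hashlib.sha256).hexdigest()
    return {"name": name, "content": content, "signature": sig}

def verify(packet):
    expected = hmac.new(PRODUCER_KEY, packet["name"].encode() + packet["content"],
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, packet["signature"])

pkt = make_data_packet("/sensor/temp/reading1", b"21.5C")
print(verify(pkt))  # True: valid regardless of which node delivered it
pkt["content"] = b"99.9C"
print(verify(pkt))  # False: tampering anywhere along the path is detected
```

Because verification needs nothing from the transport channel, any cache or replica can serve the packet and the consumer can still check it.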