
KAUNAS UNIVERSITY OF TECHNOLOGY

Faculty of Electrical and Electronics Engineering

H570B104 English Language (Level C1)

HOMEREADING
RTE-3 and RT-3 groups

Kaunas
2015

Skirmantas Astrauskas RTE-3

The Impact of Charging Plug-In Hybrid Electric


Vehicles on a Residential Distribution Grid
Kristien Clement-Nyns, Edwin Haesen, Student Member, IEEE, and Johan Driesen, Member, IEEE

Abstract: Alternative vehicles, such as plug-in hybrid electric


vehicles, are becoming more popular. The batteries of these plug-in
hybrid electric vehicles are to be charged at home from a standard
outlet or on a corporate car park. These extra electrical loads have
an impact on the distribution grid which is analyzed in terms of
power losses and voltage deviations. Without coordination of the
charging, the vehicles are charged instantaneously when they are
plugged in or after a fixed start delay. This uncoordinated power
consumption on a local scale can lead to grid problems. Therefore,
coordinated charging is proposed to minimize the power losses and
to maximize the main grid load factor. The optimal charging profile
of the plug-in hybrid electric vehicles is computed by minimizing
the power losses. As the exact forecasting of household loads is not
possible, stochastic programming is introduced. Two main techniques are analyzed: quadratic and dynamic programming.
Index Terms: Coordinated charging, distribution grid, dynamic programming, plug-in hybrid electric vehicles, quadratic
programming.

I. INTRODUCTION
HYBRID electric vehicles (HEVs), battery electric vehicles (BEVs) and plug-in hybrid electric vehicles
(PHEVs) are becoming more popular. PHEVs are charged by
either plugging into electric outlets or by means of on-board
electricity generation. These vehicles can drive at full power in
electric-only mode over a limited range. As such, PHEVs offer
valuable fuel flexibility [1]. PHEVs may have a larger battery
and a more powerful motor compared to a HEV, but their range
is still very limited [2].
The charging of PHEVs has an impact on the distribution grid
because these vehicles consume a large amount of electrical energy, and this demand can lead to large, undesirable extra peaks in the electrical consumption. There are
two main places where the batteries of PHEVs can be recharged:
either on a car park, corporate or public, or at home. The focus
in this article lies on the latter. The electrical consumption for
charging PHEVs may take up to 5% of the total electrical consumption in Belgium by 2030 [3]. For a PHEV with a range of
60 miles (100 km), this amount can increase to 8% taking into
account a utility factor which describes the fraction of driving
that is electrical [4].
From the distribution system operator point of view, the
power losses during charging are an economic concern and

Manuscript received May 27, 2009; revised July 27, 2009. First published
December 18, 2009; current version published January 20, 2010. Paper no.
TPWRS-00097-2009.
The authors are with the Department of Electrical Engineering, Katholieke
Universiteit Leuven, 3001 Heverlee, Belgium (e-mail: Kristien.Clement@esat.
kuleuven.be).
Digital Object Identifier 10.1109/TPWRS.2009.2036481

have to be minimized and transformer and feeder overloads


have to be avoided. Not only power losses, but also power
quality (e.g., voltage profile, unbalance, harmonics, etc.) are
essential to the distribution grid operator as well as to grid
customers. Voltage deviations are a power quality concern.
Too large voltage deviations cause reliability problems which
must be avoided to assure good operation of electric appliances. Overnight recharging can also increase the loading of
base-load power plants and smooth their daily cycle or avoid
additional generator start-ups which would otherwise decrease
the overall efficiency [5]. From the PHEV owner point of view,
the batteries of the PHEV have to be charged overnight so the
driver can drive off in the morning with a fully-charged battery.
This gives opportunities for intelligent or smart charging. The
coordination of the charging could be done remotely in order
to shift the demand to periods of lower load consumption and
thus avoid higher peaks in electricity consumption.
This research fits in a more global context where also other
new technologies, such as small wind turbines and photovoltaic
cells, are implemented in the distribution grid. In this optimization problem, only power losses and voltage deviations are minimized. Other aspects, e.g., power factor control, can be included
as well. The proposed methodology can help in evaluating planned
grid reinforcements versus PHEV ancillary services to achieve
the most efficient grid operation.
This article emphasizes the improvements in power quality that are possible by using coordinated charging or smart metering. It also indicates that uncoordinated charging
of PHEVs decreases the efficiency of the distribution grid.
II. ASSUMPTIONS AND MODELING
A. Load Scenarios
From an available set of residential load measurements [6],
two large groups of daily winter and summer load profiles are
selected. The load profiles cover 24 hours and the instantaneous
power consumption is given on a 15-min time base as shown in
Fig. 1 for an arbitrary day during winter.
B. Specifications of PHEVs
Each of the PHEVs has a battery with a maximum storage
capacity of 11 kWh [5]. Only 80% of the capacity of the battery
can be used to optimize life expectancy. This gives an available capacity of 8.8 kWh; 10 kWh is required from the utility
grid, assuming an 88% energy conversion efficiency from AC
energy absorbed from the utility grid to DC energy stored in
the battery of the vehicle [4]. The batteries can only be charged


Fig. 3. IEEE 34-node test feeder [9].

Fig. 1. Household load during winter [6].

and not discharged, meaning that the energy flow is unidirectional and the vehicle-to-grid concept is not considered yet. The charger has a maximum output power of 4 kW. The 4 kW charger is chosen because the maximum power output of a standard single-phase 230 V outlet is 4.6 kW. Therefore, this is the largest charger that can be used for a standard outlet at home without reinforcing the wiring. Fast charging is not considered because it requires a higher short-circuit power which is not available at standard electric outlets in households. For fast charging, connections at a higher voltage level are indispensable. A higher voltage connection could be installed, but this is an extra investment for the PHEV owner. The maximum penetration degree of PHEVs is 30% by 2030 for Belgium as predicted by the Tremove model [7].

C. Charging Periods

It is not realistic to assume that PHEVs could be charged any place a standard outlet is present. Therefore, in this article, the batteries of the vehicles are assumed to be charged at home. Fig. 2 shows the percentage of all trips by vehicle each hour. At those moments, the vehicles are not available for charging.

Fig. 2. Percentage of trips by vehicle each hour [8].

Based on this figure, three important charging periods are proposed. The first charging period is during the evening and night. Most of the vehicles are at home from 21h00 until 06h00 in the morning. Some PHEVs are immediately plugged in on return from work in order to be ready to use throughout the evening. Thus the second charging period takes place between 18h00 and 21h00. This charging period coincides with the peak load during the evening. The number of vehicles that will be charged during this period will probably be smaller. One other charging period is considered, that is, charging during the day between 10h00 and 16h00. This charging will occur in small offices in urban areas. It is assumed that only one vehicle per household or office can be charged. The charging of multiple vehicles at a household or office is not considered because it is not feasible to reflect all possible scenarios. However, the proposed methods are also valid for other periods and scenarios. In this article, the focus lies on charging at home, in weaker non-dedicated distribution grids.

D. Grid Topology

The radial network used for this analysis is the IEEE 34-node
test feeder [9] shown in Fig. 3. The network is downscaled from
24.9 kV to 230 V so this grid topology represents a residential radial network. The line impedances are adapted to achieve
tolerable voltage deviations and power losses. Each node is a
connection with a residential load and some, randomly chosen,
nodes will have PHEVs charging.
E. Assumptions
The exact advantage of coordinated charging depends on the
assumptions made in this section. The household load profiles
are typical for Belgium. Other regions may have other load profiles because of different weather conditions, such as an air conditioning peak in the afternoon for warm regions. Some regions
will also have other grid voltages, such as 120 V. The IEEE grid
is an example of a distribution grid, so the obtained results are
only valid for this grid. The maximum power of the charger is
determined by the maximum power of a standard electric outlet.
Other parameters which affect the obtained results are the utility
load cycle of the base-load power plants, incentives and the use
of smart meters.
III. UNCOORDINATED CHARGING
At the moment, there is no smart metering system available
for PHEVs, so the vehicles will be charged without coordina-


tion. Uncoordinated charging indicates that the batteries of the


vehicles either start charging immediately when plugged in,
or after a user-adjustable fixed start delay. The vehicle owners
currently do not have the incentive nor the essential information
to schedule the charging of the batteries to optimize the grid utilization. The fixed start delay is introduced to give the vehicle
owner the possibility to start charging using off-peak electricity
tariffs.


TABLE I
RATIO OF POWER LOSSES TO TOTAL POWER [%] FOR THE 4 kW
CHARGER IN CASE OF UNCOORDINATED CHARGING

A. Load Flow Analysis


A load flow analysis is performed to assess the voltage deviations and the power losses in the selected distribution grid.
This analysis is based on the backward-forward sweep method
to calculate the node currents, line currents and node voltages
[10]. At the initialization step, a flat profile is taken for the node
voltages. A constant power load model is used at all connections at each time step. In the backward step, the currents are
computed based on the voltages of the previous iteration. In the
forward step, the voltages are computed based on the voltage
at the root node and the voltage drops of the lines between the
nodes. The currents and voltages are updated iteratively until the
stopping criterion based on node voltages is reached.
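As an illustration of the backward-forward sweep described above, the following Python sketch iterates node currents and voltages for a small radial feeder; the four-node feeder, its impedances and loads are hypothetical placeholders rather than the IEEE 34-node data used in the paper.

```python
import numpy as np

def backward_forward_sweep(parent, z_line, s_load, v_root=230.0, tol=1e-6, max_iter=100):
    """Backward-forward sweep for a radial feeder.

    Nodes are assumed to be numbered so that parent[k] < k (parent[0] = -1 for
    the root). z_line[k] is the impedance of the line feeding node k (ohm) and
    s_load[k] the complex constant-power load at node k (VA).
    """
    n = len(parent)
    v = np.full(n, v_root, dtype=complex)            # flat start profile
    for _ in range(max_iter):
        i_node = np.conj(s_load / v)                 # node currents from last voltages
        i_branch = i_node.copy()
        for k in range(n - 1, 0, -1):                # backward: accumulate towards the root
            i_branch[parent[k]] += i_branch[k]
        v_new = v.copy()
        v_new[0] = v_root
        for k in range(1, n):                        # forward: subtract the line drops
            v_new[k] = v_new[parent[k]] - z_line[k] * i_branch[k]
        if np.max(np.abs(v_new - v)) < tol:          # node-voltage stopping criterion
            return v_new
        v = v_new
    return v

# Hypothetical 4-node feeder: 0 -> 1, 1 -> 2, 1 -> 3
parent = [-1, 0, 1, 1]
z_line = np.array([0, 0.05 + 0.02j, 0.08 + 0.03j, 0.06 + 0.02j])
s_load = np.array([0, 1500 + 300j, 2000 + 400j, 1000 + 200j])
print(np.abs(backward_forward_sweep(parent, z_line, s_load)))
```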
B. Methodology
At the start of a 24-h cycle, a daily profile is randomly selected from the available set belonging to a specific scenario
(winter, summer) and assigned at each node. For each scenario,
four cases depending on the penetration degree are selected. The
first case, with no PHEVs, is taken as a reference case. The next
cases have a PHEV penetration of, respectively, 10%, 20%, and
30% representing the proportion of nodes with a PHEV present.
The PHEVs are randomly placed. Separate runs are performed for each charging period, and the number of vehicles varies between 0% and 30%.
The profile for charging the PHEVs is kept straightforward.
Each individual vehicle starts charging at a random time step
within a specific period of time such that the vehicles are still
fully charged at the end of the charging period. It is assumed
that the batteries of the vehicles are fully discharged at the first
time step. For every quarter of an hour, the backward-forward
sweep method is repeated to compute the voltage at each node
until convergence is obtained.
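A minimal sketch of how such an uncoordinated charging profile could be generated, assuming the 15-min time base, the 4 kW charger and the 10 kWh grid-side energy requirement mentioned earlier; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def uncoordinated_profile(n_steps=36, p_max=4.0, e_required=10.0, dt=0.25, rng=None):
    """Constant-power charging starting at a random step inside the period.

    n_steps quarter-hour steps (36 covers 21h00-06h00), p_max in kW,
    e_required in kWh drawn from the grid, dt = 0.25 h per step.
    """
    rng = rng or np.random.default_rng()
    steps_needed = int(np.ceil(e_required / (p_max * dt)))   # 10 steps at 4 kW
    start = rng.integers(0, n_steps - steps_needed + 1)      # still full at the end
    profile = np.zeros(n_steps)
    profile[start:start + steps_needed] = p_max
    surplus = profile.sum() * dt - e_required                # trim the final step so the
    profile[start + steps_needed - 1] -= surplus / dt        # delivered energy matches exactly
    return profile

profile = uncoordinated_profile()
print(profile.sum() * 0.25)   # delivered energy in kWh -> 10.0
```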
C. Results
The impact of uncoordinated charging on the distribution grid
is illustrated by computing the power losses and the maximum
voltage deviation for the different charging periods. The number
of samples is 1000, which is large enough to achieve an accurate
average per scenario. Taking more samples does not change the
results significantly.
The results for the 4 kW charger are shown in Tables I and II.
Table I depicts the ratio of the power losses to the total load. The
total load includes the daily household loads and the charging of
the PHEVs, if present. In all cases, the power losses are higher
in the winter season than in the summer season due to the higher
household loads. The increase of the number of PHEVs leads to
a significant increase in power losses. These power losses are
important for the operator of the distribution grid. The distribu-

TABLE II
MAXIMUM VOLTAGE DEVIATIONS [%] FOR THE 4 kW
CHARGER IN CASE OF UNCOORDINATED CHARGING

tion system operator (DSO) will compensate higher losses by


increasing its grid tariffs.
Not only the power losses, but also the voltage deviations of
the grid voltage (230 V) which are represented in Table II, are
important for the DSO. An increase in the number of PHEVs
leads to a significant increase in voltage deviations. According
to the mandatory EN50160 standard [11], voltage deviations up
to 10% in low voltage grids, for 95% of the time, are acceptable.
Table II shows that for a penetration of 30%, some of the voltage
deviations are close to 10%, especially during evening peak.
The power losses and the voltage deviations are the highest
while charging during the evening peak, between 18h00 and
21h00. The reasons are twofold. In the first place, this charging
period, wherein the batteries must be fully charged, is rather short, only three hours. Therefore, the charger output power must be
higher. Secondly, the household load during the evening is the
highest of the whole day and the output power of the charger is
added to the household loads. Charging during the day is a little
more demanding for the grid compared to charging overnight.
These results are directly related to Fig. 1.
Fig. 4 depicts the voltage profile in a node of the distribution
grid for a penetration degree of 0% and 30% during winter night.
This figure shows two charging examples and is not the average
of several samples. Clearly, there is a decrease of the voltage
in the presence of PHEVs during the charging period between
21h00 and 06h00. Between 23h00 and 04h00, most of the vehicles are charging and the voltage drop during these hours is the
largest and deviates the most from the 0% PHEV voltage profile.
The power needed for charging these vehicles is significantly
higher compared to the household loads during the night. The
small difference in voltage deviations during the rest of the day
is caused by the different load profiles selected for both cases.
IV. COORDINATED CHARGING
In the previous section, the charging of the batteries of the
PHEVs starts randomly, either immediately when they are


Fig. 4. Voltage profile in a node with 30% PHEVs compared to the voltage
profile with 0% PHEV.

plugged in, or after a fixed start delay. The idea of this section
is to achieve optimal charging and grid utilization to minimize
the power losses. A direct coordination of the charging will be
done by smart metering and by sending signals to the individual
vehicles.
This optimization problem can be tackled with the quadratic
programming technique (QP). This technique optimizes a
quadratic function of several variables, in this case the power
of the PHEV chargers at all time steps, which is subjected to
linear constraints. In this section, the QP technique is applied
to handle deterministic and stochastic household load profiles.
The objective is to minimize the power losses.
A. Optimization Problem
By minimizing the power losses, the owners of PHEVs will
no longer be able to control the charging profile. The only degree of freedom left for the owners is to indicate the point in
time when the batteries must be fully charged. For the sake of
convenience, the end of the indicated charging period is taken
as the point in time when the vehicles must be fully charged.
The charging power varies between zero and maximum and is
no longer constant. The coordinated charging is analyzed for the
same charging periods as described in the previous section. The
range of PHEV penetration levels remains the same, and the vehicles are also placed randomly. The same IEEE 34-node test
grid is used. For each charging period and season, the power
losses and voltage deviations are calculated and compared with
the values of uncoordinated charging.
B. Methodology
The objective is to minimize the power losses which are
treated as a reformulation of the nonlinear power flow equations. This nonlinear minimization problem can be tackled as
a sequential quadratic optimization [12]. The charging power $P_{n,t}$ obtained by the quadratic programming cannot be larger than the maximum power of the charger, $P_{\max}$. The batteries must be fully charged at the end of the cycle, so the energy which flows into the batteries must equal the capacity of the batteries, $C_{\max}$. The coefficient $\alpha_n$ is zero if there is no PHEV placed and one if there is a PHEV at node $n$. The goal is to minimize the power losses while taking these constraints into account.

Fig. 5. Algorithm of coordinated charging.

The quadratic programming uses (1) and (2):

$$\min_{P_{n,t}}\;\sum_{t=1}^{T}\sum_{l=1}^{L} R_l\, I_{l,t}^{2} \tag{1}$$

$$0 \le P_{n,t} \le \alpha_n P_{\max}, \qquad \sum_{t=1}^{T} P_{n,t}\,\Delta t = \alpha_n C_{\max} \tag{2}$$
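A sketch of a quadratic program in the spirit of (1) and (2), written with the cvxpy modeling library; the losses are approximated here by a quadratic cost on the aggregate feeder load instead of the full load-flow-based loss expression, and the household profile and node data are made up for illustration.

```python
import cvxpy as cp
import numpy as np

T, N = 36, 5                       # quarter-hour steps, nodes considered
dt = 0.25                          # hours per step
p_max, c_max = 4.0, 10.0           # charger limit (kW), energy per PHEV (kWh)
alpha = np.array([1, 0, 1, 1, 0])  # 1 if a PHEV is placed at the node
household = 1.0 + 0.5 * np.sin(np.linspace(0, 2 * np.pi, T))  # stand-in kW profile

P = cp.Variable((N, T), nonneg=True)            # charger power per node and step
total_load = N * household + cp.sum(P, axis=0)  # aggregate feeder load per step

# Feeder losses grow roughly with the square of the loading, so a quadratic
# function of the total load stands in for the loss expression in (1).
objective = cp.Minimize(cp.sum_squares(total_load))
constraints = [
    P <= np.outer(alpha, np.ones(T)) * p_max,   # charger power limit
    cp.sum(P, axis=1) * dt == alpha * c_max,    # batteries full at the end, as in (2)
]
cp.Problem(objective, constraints).solve()
print(np.round(P.value, 2))
```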

C. Deterministic Programming
Fig. 5 represents the outline of the algorithm of coordinated
charging. The vehicles are placed randomly after the selection
of a daily load profile and the number of PHEVs. A flat voltage
profile is assumed and the node voltages are computed with
the backward-forward sweep method assuming that there are
no PHEVs. The backward and forward sweep are formulated
as a matrix multiplication. The quadratic optimization is performed in order to determine the optimal charging profile. Then,
the node voltages are computed again. This process is repeated
until the stopping criterion based on the power losses is reached.
This paragraph describes the results of the coordinated
charging to illustrate the impact on the distribution grid.
Table III and Table IV represent respectively the power losses
and the maximum voltage deviations for the coordinated


TABLE III
RATIO OF POWER LOSSES TO TOTAL POWER [%] FOR THE 4 kW
CHARGER IN CASE OF COORDINATED CHARGING

TABLE IV
MAXIMUM VOLTAGE DEVIATIONS [%] FOR THE 4 kW
CHARGER IN CASE OF COORDINATED CHARGING

charging during the different charging periods. These results must be compared with Table I and Table II. For all charging periods and seasons, the power losses decrease when coordinated charging is applied. The voltage deviations are in accordance with the EN50160 standard, and the maximum voltage deviation for a penetration degree of 30% is now well below 10%. If there are no PHEVs present, charging during the day and the night gives more or less the same results. However, if the number of PHEVs is increased, the voltage deviations and the power-loss increases are larger for charging during the day than during the night.

Fig. 6. Voltage profile in a node with 30% and 10% PHEVs compared to the voltage profile with 0% PHEV for coordinated charging.

Fig. 6 shows that the maximum voltage deviation during overnight charging, when no PHEVs are involved, occurs at the beginning of the charging period when the household loads are still high. A penetration degree of 10% gives the same voltage deviations, meaning that the vehicles are not charged when the household load peak occurs. The vehicles cause an extra load during the off-peak hours to meet the objective of minimizing the power losses. The voltage deviations during these off-peak hours are smaller than the voltage deviations due to the household loads during the evening peak. For a vehicle penetration of 20% or more, the number of vehicles is increased and the charging is more distributed. Some vehicles are charging during peak hours, which increases the voltage deviation and thus lowers the voltage.

Fig. 7. Load profile of the 4 kW charger for the charging period from 21h00 until 06h00 during winter.

Fig. 7 shows the load profiles of nodes 1 and 33 of Fig. 3 with a penetration degree of 30% during the charging period from 21h00 until 06h00 during winter. The nodes are chosen at the starting and end points of the grid feeder. It is clear that the power output of the charger is not constantly 4 kW, but varies.

D. Stochastic Programming

The previous results are based on deterministic or historical


data for the daily load profiles, so the essential input parameters are fixed. For this approach, a sufficient amount of measurement data must be available. Most of the time, however, these measurements are not adequate for a perfect forecast of the data. A stochastic approach, in which an error in the forecasting of the daily load profiles is considered, is therefore more
realistic.
The daily load profiles are the essential input parameters. The
uncertainties of these parameters can be described in terms of
probability density functions. In that way, the fixed input parameters are converted into random input variables with normal
distributions assumed at each node. $N$ independent samples $\xi_1,\dots,\xi_N$ of the random input variable $\xi$, the daily load profile, are selected. Equation (3) gives the estimation of the stochastic optimum: the sample-average objective $\hat{g}_N(x)$ is minimized over $x$, and its minimum value is denoted $\hat{v}_N$. The function $G(x,\xi)$ gives the power losses and $x$ is the power rate of the charger for all the PHEVs and time steps. $\hat{g}_N(x)$ is a sample-average approximation of the objective of the stochastic programming problem:

$$\hat{g}_N(x) = \frac{1}{N}\sum_{i=1}^{N} G(x,\xi_i), \qquad \hat{v}_N = \min_{x}\,\hat{g}_N(x) \tag{3}$$


The mean value of the power losses, $\mathbb{E}[\hat{v}_N]$, is a lower bound for the real optimal value of the stochastic programming problem, $v^*$ [13], as shown in (4):

$$\mathbb{E}[\hat{v}_N] \le v^* \tag{4}$$

$\mathbb{E}[\hat{v}_N]$ can be estimated by generating $M$ independent samples $\xi^1,\dots,\xi^M$ of the random input variable, each of size $N$. $M$ optimization runs are performed in which the nonlinear power flow equations are solved by using the backward-forward sweep method. According to (5), $\bar{v}_{N,M}$ is the mean optimal value of the problem for each of the $M$ samples. The optimal values of the $M$ samples constitute a normal distribution:

$$\bar{v}_{N,M} = \frac{1}{M}\sum_{m=1}^{M}\hat{v}_N^{\,m} \tag{5}$$

Fig. 8. Histogram of the efficiency loss of an arbitrary day during winter for a
variation of 5%.

In (6), $\bar{v}_{N,M}$ is an unbiased estimator of $\mathbb{E}[\hat{v}_N]$. Simulations indicate that, in this type of problem, the lower bound converges to the real optimal value when $N$ is sufficiently high:

$$\mathbb{E}[\bar{v}_{N,M}] = \mathbb{E}[\hat{v}_N] \le v^* \tag{6}$$
A forecasting model for the daily load profile for the next
24 h is required. The daily load profiles of the available set are
varied by a normal distribution function. The standard deviation is determined in such a way that 99.7% of the samples (the three-sigma band of the normal distribution) deviate at most 5% or 25% from the average value.
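A small numerical sketch of the sample-average idea in (3)-(6) and of the three-sigma rule above; the quadratic loss function and the greedy valley-filling "optimizer" are simplified stand-ins for the paper's load-flow-based quadratic program, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, M = 96, 200, 20                   # time steps, samples per set, number of sets
base = 1.0 + 0.5 * np.sin(np.linspace(0, 2 * np.pi, T))    # stand-in daily profile, kW
sigma = 0.05 / 3 * base                 # 3-sigma band = 5% of the average profile

def losses(p_charge, load):
    """Made-up quadratic stand-in for the load-flow power losses G(x, xi)."""
    return np.sum(0.01 * (load + p_charge) ** 2)

def approx_optimal_charge(load, energy=10.0, p_max=4.0, dt=0.25):
    """Greedy valley filling: charge at the steps with the lowest load first."""
    p, remaining = np.zeros_like(load), energy
    for t in np.argsort(load):
        p[t] = min(p_max, remaining / dt)
        remaining -= p[t] * dt
        if remaining <= 0:
            break
    return p

# Sample-average objective g_N(x) of (3) for one candidate charge profile x.
samples = base + sigma * rng.standard_normal((N, T))
x_hat = approx_optimal_charge(samples.mean(axis=0))
g_N = np.mean([losses(x_hat, s) for s in samples])

# Lower-bound estimate in the spirit of (4)-(6): mean of M per-set optima.
v_hats = []
for _ in range(M):
    batch = base + sigma * rng.standard_normal((N, T))
    x_m = approx_optimal_charge(batch.mean(axis=0))
    v_hats.append(np.mean([losses(x_m, s) for s in batch]))
print(round(g_N, 3), round(float(np.mean(v_hats)), 3))
```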
E. Results
For 2000 independent samples of the daily load profile, one
optimal charging profile is calculated. This optimal charging
profile is used to determine the power losses for the 2000 individual load profiles. This is the stochastic optimum. For each
of these 2000 load profiles, the optimal charging profile and the
corresponding power losses are also computed, which is the deterministic optimum.
The power losses of the deterministic optimum are subtracted from the power losses of the stochastic optimum and divided by the deterministic optimum; this relative difference is defined as the efficiency loss. This is shown for a variation of the household loads of 5% and 25% in Figs. 8 and 9, respectively. The value of this difference is always positive. The
forecasting of the daily load profiles introduces an efficiency
loss because the charge profiles of the PHEVs are not optimal
for this specific daily load profile. If the standard deviation of
the normal distribution and thus the variation of the household
load is reduced, the 2000 charge profiles of the deterministic
optimum will converge to the optimal charge profile. The efficiency loss will also be reduced, indicating that the differences in power losses go down by a factor of 25, as shown in Fig. 8 compared to Fig. 9.
In general, the difference between the power losses of the
stochastic and the deterministic optimum is rather small. It is
clear that the error in forecasting does not have a large impact on the power losses. The daily household load profiles during the winter season show the same trend each day, resulting

Fig. 9. Histogram of the efficiency loss of an arbitrary day during winter for a
variation of 25%.

Fig. 10. Deterministic optimum and optimal charger profile for node 33.

in an optimal charge profile that resembles a deterministic charge profile of a specific day, as shown in
Fig. 10 for the last node of the test grid. Both charge profiles
have the same trend. Therefore, the contrast in terms of power



Fig. 11. Histogram of the efficiency loss of an arbitrary day during winter for
other household profiles.

losses between the deterministic and stochastic optimum is not


large. However, the difference between uncoordinated and coordinated charging is much larger because the charge profiles
are more different. The uncoordinated charging has a constant
charge profile for a specific amount of time.
In Figs. 8 and 9, a specific household load profile is assumed, which is varied by a normal distribution function. In Fig. 11, the load profiles are randomly selected out of a database of household load profiles. This database contains profiles that differ more from day to day and are more peaked, which increases the efficiency losses.
V. DYNAMIC PROGRAMMING
The optimal coordination of charging PHEVs can also be
tackled by the dynamic programming technique (DP). The QP
and DP techniques are compared with respect to results, storage
requirements and computational time. The DP technique decomposes the original optimization problem into a sequence
of subproblems which are solved backward over each stage. A
classical implementation of the DP technique is the shortest path
problem. For the application of this article, the model is represented as a series of plug-in hybrid electric vehicles.
A. Methodology
There are $Q$ vehicles with batteries charging, and the maximum value of $Q$ corresponds with a penetration degree of 30%. The battery content of these $Q$ vehicles at each stage is the state variable, $\mathbf{x}_t$. The number of stages, $T$, is the number of hours of the charging period multiplied by four because the household loads are available on a 15-min time base.
The backward recursive equations for the conventional dynamic programming technique are given in (7) and (8):

$$F_T(\mathbf{x}_T)=\min_{\mathbf{P}_T} PL_T(\mathbf{x}_T,\mathbf{P}_T) \tag{7}$$

$$F_t(\mathbf{x}_t)=\min_{\mathbf{P}_t}\big[PL_t(\mathbf{x}_t,\mathbf{P}_t)+F_{t+1}(\mathbf{x}_{t+1})\big],\qquad t=T-1,\dots,1 \tag{8}$$

The function $F_t$ represents the total optimal power losses from period $t$ to the last period $T$. The vector $\mathbf{x}_t$ is a $Q$-dimensional vector of the possible storage levels at time $t$. $PL_t$ is the power loss during period $t$ and $x_{q,t}$ is the battery content of the $q$th vehicle at time stage $t$. The power of the chargers is represented by $\mathbf{P}_t$ and is also a $Q$-dimensional vector, so the first component of this vector gives the power of the charger for the first PHEV. The output of the charger is not continuous, but has a step size of 400 W. This is relatively large, but smaller step sizes would lead to too much computational time, which is proportional to the number of discrete storage levels [14]. As such, the battery content is also discrete. The constraints of the problem remain the same and are shown in (9)-(11):

$$0 \le P_{q,t} \le P_{\max} \tag{9}$$

$$x_{q,t+1} = x_{q,t} + P_{q,t}\,\Delta t \tag{10}$$

$$x_{q,T} = C_{\max} \tag{11}$$

The power loss is the objective function which must be minimized. The storage vector $\mathbf{x}_t$ is a $Q$-dimensional vector and
thus the curse of dimensionality [15] arises which is handled
by modifying the original dynamic programming technique.
The dynamic programming technique successive approximation (DPSA) decomposes the multidimensional problem in a
sequence of one-dimensional problems which are much easier
to handle [16]. The optimizations occur one variable at a time
while holding the other variables at a constant value. All the
variables are evaluated in that way. This technique converges to an optimum for convex problems. This method will be used for the
deterministic and stochastic programming.
B. Deterministic Programming
A daily load profile of the selected season is chosen and the
vehicles are placed randomly. The DPSA technique needs initial
values of the state variables to start the iteration. These values
are generated by calculating the optimal charge trajectory for
each PHEV separately without considering the other PHEVs.
These optimal trajectories are put together into one temporary
optimal trajectory and thus one Q-dimensional state vector. All
the components of the state vector are held constant, except the
first one. The optimal charge trajectory for the first component
of the state variable is determined. The new value is assigned to the first component, and the procedure continues until the last component of the state vector is optimized. This procedure is repeated until convergence is obtained. The problem is thus converted from a multidimensional problem into a sequence of one-dimensional problems. The algorithm of dynamic programming successive approximation is represented in Fig. 12.
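A compact Python sketch of dynamic programming successive approximation along the lines described above: each vehicle's charge trajectory is re-optimized by a single-vehicle backward DP over a discretized battery content (400 W power steps) while the other trajectories are held fixed. The quadratic stage cost and the household profile are stand-ins, not the paper's grid model.

```python
import numpy as np

T, dt = 36, 0.25                                  # quarter-hour stages, 21h00-06h00
p_levels = np.arange(0, 4.0 + 1e-9, 0.4)          # 400 W charger steps, kW
e_req = 10.0                                      # kWh required per PHEV
e_step = p_levels[1] * dt                         # 0.1 kWh per discrete level
n_lev = int(round(e_req / e_step)) + 1            # battery levels 0 .. e_req
household = 2.0 + 1.5 * np.cos(np.linspace(0, 2 * np.pi, T))  # stand-in load, kW

def stage_cost(t, p_self, p_others):
    """Quadratic stand-in for the power loss of stage t (not the paper's load flow)."""
    return 0.01 * (household[t] + p_others + p_self) ** 2 * dt

def best_profile(p_others_per_stage):
    """Single-vehicle backward DP over the discretized battery content."""
    INF = 1e18
    F = np.full((T + 1, n_lev), INF)
    F[T, n_lev - 1] = 0.0                          # battery must be full at the end
    choice = np.zeros((T, n_lev), dtype=int)
    for t in range(T - 1, -1, -1):
        for lev in range(n_lev):
            for k, p in enumerate(p_levels):       # k power steps add k energy levels
                nxt = lev + k
                if nxt >= n_lev:
                    break
                c = stage_cost(t, p, p_others_per_stage[t]) + F[t + 1, nxt]
                if c < F[t, lev]:
                    F[t, lev], choice[t, lev] = c, k
    profile, lev = np.zeros(T), 0                  # recover the optimal trajectory
    for t in range(T):
        k = choice[t, lev]
        profile[t], lev = p_levels[k], lev + k
    return profile

# DPSA: re-optimize one vehicle at a time while the other trajectories are fixed.
Q = 3
profiles = np.tile(best_profile(np.zeros(T)), (Q, 1))    # initial trajectories
for _ in range(5):                                       # a few sweeps suffice here
    for q in range(Q):
        others = profiles.sum(axis=0) - profiles[q]
        profiles[q] = best_profile(others)
print(np.round(profiles.sum(axis=0), 1))                 # aggregate charging profile
```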
C. Stochastic Programming
The uncertainties of the household loads must also be implemented in the DP technique. Two thousand stochastic household
load profiles are generated and the mean power losses of these
loads are used to determine the total power losses as presented
in (12):

$$F_t(\mathbf{x}_t)=\min_{\mathbf{P}_t}\Big[\frac{1}{N}\sum_{i=1}^{N} PL_t(\mathbf{x}_t,\mathbf{P}_t,\xi_i)+F_{t+1}(\mathbf{x}_{t+1})\Big] \tag{12}$$


Fig. 13. Charge profile for node 1 for the QP and the DP program technique.
TABLE V
POWER QUALITY AND LOSSES FOR THE TEST GRID

Fig. 12. Algorithm of DPSA charging.

The same stochastic load profiles as produced in the stochastic programming of the QP technique are applied to make the comparison clearer. One optimal charge profile is generated for these 2000 stochastic household loads with the DPSA technique. The power losses are calculated separately for the 2000 household load profiles and the single optimal charge profile. This is the stochastic optimum. For the deterministic optimum, the optimal charge profile and power losses are determined for each of the 2000 stochastic household load profiles, giving 2000 optimal charge profiles. The power losses of the deterministic optimum are subtracted from the power losses of the stochastic optimum and divided by the deterministic optimum for a variation of the household loads of 5% and 25%.

D. Results

In Fig. 13, the charge profiles for the QP and DP techniques are compared. In general, the difference between the results of the DP and QP techniques is negligible, although the QP technique gives more accurate results because the values of the charge profile are continuous in that case. The DP technique, where a step size of 400 W is introduced for the power of the charger, gives a discrete charge profile. Reducing the step to an infinitesimal value would give the same result as the QP technique. This step size is taken rather large in order to reduce the number of levels and thus the computational time and storage requirements. The storage requirements are heavier for the DP technique compared to the QP technique because every possible path over each stage must be stored. Since this leads to very large matrices and increased computational time, the DP technique is slower.

VI. IMPACT ON THE DISTRIBUTION GRID
Uncoordinated charging of the batteries of PHEVs has a nonnegligible impact on the performance of the distribution grid in
terms of power losses and power quality. Both power quality and
power losses are represented in Table V for three cases: without
PHEVs, uncoordinated and coordinated charging. The power
quality is given as the average of 1000 samples of the maximum
load, voltage drop and line current for the IEEE 34-node test
feeder during winter season for a penetration degree of 30%.
The power losses are the ratio of the power losses to the total
load. With respect to uncoordinated charging, the coordination of
the charging reduces the power losses. Power quality is improved
to a level which is similar to the case where no PHEVs are present.
Because the extra loads for charging the PHEVs remain in the case of coordinated charging, the losses are still higher than in the case without PHEVs.
The coordination of the charging can be done by a smart metering system. The distribution grid must be reinforced to cope
with the increased loads and voltage drops caused by charging
PHEVs if this coordination system is not applied. Both scenarios
will introduce extra costs for the distribution system operators
and eventually for the customers.


A global estimation is performed in order to indicate the level


of upgrading needed for a small distribution grid. For the argumentation, the IEEE 34-node test feeder is connected to each
phase of a three phase transformer of 100 kVA, forming a global
grid of 100 nodes. When no PHEVs are present, the maximum
load for the three grids together is 69 kVA. Considering no
PHEVs in the future, the transformer has enough reserve capacity for this global grid to meet additional peak load and load
growth for the next ten years, which is predicted to be a few percent. An Aluminum underground conductor rated at 400 V is standard; the maximum capacity of these conductors is about 160 A [17]. For the case without PHEVs, the standard underground conductor would be sufficient.
If 30% PHEVs are introduced, the power for the global grid
increases to 108 kVA, which is out of range for the 100 kVA
transformer. This transformer must be replaced by a standard
transformer of 125 kVA to deal with extra PHEVs, load growth
and additional peak load. Due to the PHEVs, the line current
increases to 163 A. The maximum capacity of the current conductor is not enough, and it must be replaced by a larger Aluminum underground conductor with a capacity of 220 A.
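The sizing argument can be summarized with the figures quoted above; the snippet below only rechecks those numbers and introduces no new data.

```python
# Quick recheck of the numbers quoted above (all values taken from the text).
transformer_kva = 100
load_no_phev_kva, load_phev30_kva = 69, 108           # three feeders together
line_current_phev30 = 163                             # A, with 30% PHEVs
conductor_rating = {"standard": 160, "larger": 220}   # A

print("transformer margin without PHEVs:", transformer_kva - load_no_phev_kva, "kVA")
print("transformer overload with 30% PHEVs:", load_phev30_kva - transformer_kva, "kVA")
for name, rating in conductor_rating.items():
    print(f"{name} conductor sufficient: {line_current_phev30 <= rating}")
```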
Voltage deviations up to 10% in low voltage grids are acceptable for 95% of the time according to the EN50160 standard
which is mandatory in Belgium. In the case of uncoordinated
charging, this limit has been reached for charging during the
evening and action must be taken to reduce the voltage drop.
The problem of the voltage drop can be tackled by placing a capacitor bank or a load tap changing transformer. Although the
latter is not common at low voltages, it may be necessary in
the future, especially for the vehicle-to-grid concept. This type
of transformer can handle voltage variations of plus and minus
10% by adjusting among 32 tap settings built into the windings [18]. There is also another cost involved: the power losses.
These losses increase considerably in the case of uncoordinated charging. The energy for these losses and loads must also be produced and transported over the transmission lines.
A smart metering system must be implemented to control
the coordination and communication between the PHEVs individually, the distribution system operator and the transmission
system operator (TSO). The vehicles could also be grouped and
represented by a fleet manager to communicate with the DSO
and TSO. Smart metering will lead to opportunities to make
PHEVs a controllable load, to apply the vehicle-to-grid concept
and to combine PHEVs and renewable energy. This technology
is available for implementation, but capital investments by the
utilities are necessary [19]. For the implementation of smart metering, other incentives, such as real-time pricing and the integration of renewable energy, are also important.
Fewer grid reinforcements are necessary with the coordination
system. The maximum load is lower because the vehicles are
not charging if the household loads are peaking. Therefore, the
voltage drops, line currents and power losses are considerably
reduced. The cost of upgrading the grid must be compared with
the cost of implementing smart metering. In both cases, the
cost for the implementation and the possible additional power
production will be passed on to the customers. In practice, it
would be no difference for the DSOs which technology is implemented, as they are allowed to have a fair rate of return in a


cost plus mechanism. With this mechanism, the DSOs are not
strongly pushed towards the use of the most efficient technologies. The tariffs and the performance of the grid are more important in a price-cap mechanism. Smart metering is the favorable option if it enables a significant deferral of grid investments compared to reinforcing the grid.
VII. CONCLUSION
In general, coordinated charging of plug-in hybrid electric vehicles can lower power losses and voltage deviations by flattening out peak power. However, when the choice of charging
periods is rather arbitrary, the impact of the PHEV penetration
level is large. The implementation of the coordinated charging
is not without costs.
In the first stage, historical data are used so there is a perfect
knowledge of the load profiles. In a second stage, stochastic programming is introduced to represent an error in the forecasting
which increases the power losses. This efficiency loss is rather
small if the trend of the household load profiles is known, so
charging during the peak load of the evening can be avoided.
These results are obtained with the quadratic programming
technique. The dynamic programming technique is also implemented but does not improve the computational time nor the
achieved accuracy. The applied techniques and methods can be
extended to other objective functions, such as voltage control by
PHEV reactive power output control and grid balancing.
REFERENCES
[1] A. Raskin and S. Shah, The Emergence of Hybrid Electric Vehicles. New
York: Alliance Bernstein, 2006. [Online]. Available: http://www.calcars.org/alliance-bernstein-hybrids-june06.pdf.
[2] M. Anderman, The challenge to fulfil electrical power requirements of advanced vehicles, J. Power Sources, vol. 127, no. 1-2, pp. 2-7, Mar.
2004.
[3] K. Clement, K. Van Reusel, and J. Driesen, The consumption of
electrical energy of plug-in hybrid electric vehicles in Belgium, in
Proc. 2nd Eur. Ele-Drive Transportation Conf., Brussels, Belgium,
May 2007.
[4] M. Duvall and E. Knipping, Environmental Assessment of Plug-In
Hybrid Electric Vehicles, Volume 1: National Wide Greenhouse Gas
Emissions, EPRI, 2007, Tech. Rep.
[5] P. Denholm and W. Short, An Evaluation of Utility System Impacts
and Benefits of Optimally Dispatched Plug-In Hybrid Electric Vehicles, Oct. 2006, Tech. Rep.
[6] Vlaamse Reguleringsinstantie voor de Elektriciteits- en Gasmarkt (VREG), De verbruiksprofielen van huishoudelijke en niethuishoudelijke elektriciteitsverbruikers voor het jaar 2007. [Online].
Available: http://www.vreg.be.
[7] S. Logghe, B. Van Herbruggen, and B. Van Zeebroeck, Emissions of
road traffic in Belgium, tmleuven, Tremove, Jan. 2006.
[8] Nationaal Wetenschappelijke Instituut voor Verkeersveiligheidsonderzoek (SWOV), Onderzoek verplaatsingsgedrag (ovg), 2007. [Online].
Available: http://www.swov.nl.
[9] W. H. Kersting, Radial distribution test feeders, in Proc. IEEE Power
Eng. Soc. Winter Meeting, Jan. 28-Feb. 1, 2001. [Online]. Available:
http://ewh.ieee.org/soc/pes/dsacom/.
[10] W. Kersting, Distribution System Modeling and Analysis. Boca
Raton, FL: CRC, 2002.
[11] En 50160, Voltage Characteristics of Electricity Supplied by Public
Distribution Systems, 1999.
[12] E. Haesen, J. Driesen, and R. Belmans, Robust planning methodology
for integration of stochastic generators in distribution grids, IET J.
Renew. Power Gen., vol. 1, no. 1, pp. 25-32, Mar. 2007.
[13] J. Linderoth, A. Shapiro, and S. Wright, The empirical behavior of
sampling methods for stochastic programming, Ann. Oper. Res., vol.
142, no. 1, pp. 215-241, Feb. 2006.
[14] J. Labadie, Optimal operation of multireservoir systems: State-of-the
art review, J. Water Resource Plan. Manage.-ASCE, vol. 130, no. 2,
pp. 93-111, Mar.-Apr. 2004.


[15] R. Bellman, Adaptive Control Processes: A Guided Tour. Princeton,


NJ: Princeton Univ. Press, 1962.
[16] J. Yi, J. Labadie, and S. Stitt, Dynamic optimal unit commitment and
loading in hydropower systems, J. Water Resource Plan. Manage.-ASCE, vol. 129, no. 5, pp. 388-398, Sep.-Oct. 2003.
[17] S. S. J. P. Green and G. Strbac, Evaluation of electricity distribution
system design strategies, Proc. Inst. Elect. Eng., Gen., Transm., Distrib., vol. 146, no. 1, pp. 53-60, Jan. 1999.
[18] H. L. Willis, Power Distribution Planning Reference Book. New
York: Marcel Dekker, 1997.
[19] U.S. Department of Energy, Summary Report: Discussion Meeting on
Plug-In Hybrid Electric Vehicles, 2006, Tech. Rep.

Kristien Clement-Nyns received the M.S. degree


in electro-mechanical engineering in 2004 with specialization in energy. Currently, she is pursuing the
Ph.D. degree in electrotechnical engineering at the
Katholieke Universiteit Leuven division ELECTA,
Heverlee, Belgium.
Her research interests include hybrid and electric
vehicles.

Edwin Haesen (S'05) received the M.Sc. degree in


electrical engineering at the Katholieke Universiteit
Leuven (KU Leuven), Heverlee, Belgium, in 2004.
He is currently pursuing the Ph.D. degree at the KU
Leuven.
He is a Research Assistant at KU Leuven in the
division ESAT-ELECTA. His research interests are in
the domain of power system analysis and distributed
generation.
Mr. Haesen received the European Talent Award
for Innovative Energy Systems 2005 for his M.Sc.
thesis on Technical aspects of congestion management.

Johan Driesen (S'93-M'97) received the M.Sc.


and Ph.D. degrees in electrical engineering from
Katholieke Universiteit Leuven (KU Leuven),
Heverlee, Belgium, in 1996 and 2000, respectively.
Currently, he is an Associate Professor with KU
Leuven and teaches power electronics and drives. In
2000, he was with the Imperial College of Science,
Technology and Medicine, London, U.K. In 2002,
he was with the University of California, Berkeley.
Currently, he conducts research on distributed generation, power electronics, and its applications.


base load - the constant or permanent load on a power supply - bazinės apkrovos
feasible - capable of being done, effected, or accomplished - įvykdomas
dedicated - made or designed to interconnect exclusively with one model or a limited range of models in a manufacturer's line - specialus
load flow - the flow of electric power through the lines of a network; its analysis gives the node voltages and line currents - apkrovos srautas
peak load - the maximum load on an electrical power-supply system - pikinė apkrova

Kaunas University of Technology


Student: Tomas Aurila RTE-3 gr.
Lecturer: Evelina Jaleniauskienė

Article
Fiber Optic Cables
An optical fiber (or optical fibre) is a flexible, transparent fiber made of extruded glass
(silica) or plastic, slightly thicker than a human hair. It can function as a waveguide, or light pipe,
to transmit light between the two ends of the fiber.

The field of applied science and engineering concerned with the design and application of optical fibers is known as fiber optics.
Optical fibers are widely used in fiber-optic communications, where they permit
transmission over longer distances and at higher bandwidths (data rates) than wire cables. Fibers are
used instead of metal wires because signals travel along them with less loss and are also immune
to electromagnetic interference. Fibers are also used for illumination, and are wrapped in bundles so
that they may be used to carry images, thus allowing viewing in confined spaces. Specially designed
fibers are used for a variety of other applications, including sensors and fiber lasers.
Optical fibers typically include a transparent core surrounded by a transparent cladding material with a lower index of refraction. Light is kept in the core by total internal
reflection. This causes the fiber to act as a waveguide. Fibers that support many propagation paths or
transverse modes are called multi-mode fibers (MMF), while those that only support a single mode
are called single-mode fibers (SMF). Multi-mode fibers generally have a wider core diameter, and
are used for short-distance communication links and for applications where high power must be
transmitted. Single-mode fibers are used for most communication links longer than 1,000 meters.
Joining lengths of optical fiber is more complex than joining electrical wire or cable.
The ends of the fibers must be carefully cleaved, and then carefully spliced together with the cores
perfectly aligned. A mechanical splice holds the ends of the fibers together mechanically, while fusion
splicing uses heat to fuse the ends of the fibers together. Special optical fiber connectors for temporary
or semi-permanent connections are also available.

HISTORY
Guiding of light by refraction, the principle that makes fiber optics possible, was first
demonstrated by Daniel Colladon and Jacques Babinet in Paris in the early 1840s. John
Tyndall included a demonstration of it in his public lectures in London, 12 years later. Tyndall also
wrote about the property of total internal reflection in an introductory book about the nature of light
in 1870.
When the light passes from air into water, the refracted ray is bent towards the perpendicular... When the ray passes from water to air it is bent from the perpendicular... If the angle which the ray in water encloses with the perpendicular to the surface be greater than 48 degrees, the ray will not quit the water at all: it will be totally reflected at the surface.... The angle which marks the limit where total reflection begins is called the limiting angle of the medium. For water this angle is 48°27′, for flint glass it is 38°41′, while for diamond it is 23°42′.
Unpigmented human hairs have also been shown to act as an optical fiber.
Practical applications, such as close internal illumination during dentistry, appeared
early in the twentieth century. Image transmission through tubes was demonstrated independently by
the radio experimenter Clarence Hansell and the television pioneer John Logie Baird in the 1920s.
The principle was first used for internal medical examinations by Heinrich Lamm in the following
decade. Modern optical fibers, where the glass fiber is coated with a transparent cladding to offer a
more suitable refractive index, appeared later in the decade. Development then focused on fiber
bundles for image transmission. Harold Hopkins and Narinder Singh Kapany at Imperial College in
London achieved low-loss light transmission through a 75 cm long bundle which combined several
thousand fibers. Their article titled "A flexible fibrescope, using static scanning" was published in the
journal Nature in 1954. The first fiber optic semi-flexible gastroscope was patented by Basil
Hirschowitz, C. Wilbur Peters, and Lawrence E. Curtiss, researchers at the University of Michigan,
in 1956. In the process of developing the gastroscope, Curtiss produced the first glass-clad fibers;
previous optical fibers had relied on air or impractical oils and waxes as the low-index cladding
material. In 1880 Alexander Graham Bell and Sumner Tainter invented the Photophone at the Volta
Laboratory in Washington, D.C., to transmit voice signals over an optical beam. It was an advanced
form of telecommunications, but subject to atmospheric interferences and impractical until the secure
transport of light that would be offered by fiber-optical systems. In the late 19th and early 20th
centuries, light was guided through bent glass rods to illuminate body cavities.

Jun-ichi Nishizawa, a Japanese scientist at Tohoku University, also proposed the use of
optical fibers for communications in 1963, as stated in his book published in 2004 in India. Nishizawa
invented other technologies that contributed to the development of optical fiber communications,
such as the graded-index optical fiber as a channel for transmitting light from semiconductor
lasers. The first working fiber-optical data transmission system was demonstrated by German
physicist Manfred Börner at Telefunken Research Labs in Ulm in 1965, which was followed by the
first patent application for this technology in 1966. Charles K. Kao and George A. Hockham of the
British company Standard Telephones and Cables (STC) were the first to promote the idea that
the attenuation in optical fibers could be reduced below 20 decibels per kilometer (dB/km), making
fibers a practical communication medium. They proposed that the attenuation in fibers available at
the time was caused by impurities that could be removed, rather than by fundamental physical effects
such as scattering. They correctly and systematically theorized the light-loss properties for optical
fiber, and pointed out the right material to use for such fibers: silica glass with high purity. This
discovery earned Kao the Nobel Prize in Physics in 2009.
NASA used fiber optics in the television cameras that were sent to the moon. At the
time, the use in the cameras was classified confidential, and only those with sufficient security
clearance or those accompanied by someone with the right security clearance were permitted to
handle the cameras.[18]
The crucial attenuation limit of 20 dB/km was first achieved in 1970, by
researchers Robert D. Maurer, Donald Keck, Peter C. Schultz, and Frank Zimar working for
American glass maker Corning Glass Works, now Corning Incorporated. They demonstrated a fiber
with 17 dB/km attenuation by doping silica glass with titanium. A few years later they produced a
fiber with only 4 dB/km attenuation using germanium dioxide as the core dopant. Such low
attenuation ushered in the era of optical fiber telecommunication. In 1981, General Electric produced
fused quartz ingots that could be drawn into strands 25 miles (40 km) long.
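The dB/km figures quoted above translate into power ratios with the standard attenuation formula; this calculation is added for illustration and is not part of the original article.

```python
def remaining_fraction(attenuation_db_per_km, length_km):
    """Fraction of optical power left after the given fiber length."""
    return 10 ** (-attenuation_db_per_km * length_km / 10)

# 20 dB/km leaves 1% of the power after 1 km; the later 4 dB/km fiber leaves ~40%.
for a in (20, 17, 4):
    print(f"{a} dB/km over 1 km -> {remaining_fraction(a, 1):.1%} of the input power")
```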
Attenuation in modern optical cables is far less than in electrical copper cables, leading
to long-haul fiber connections with repeater distances of 70-150 kilometers. The erbium-doped fiber
amplifier, which reduced the cost of long-distance fiber systems by reducing or eliminating optical-electrical-optical repeaters, was co-developed by teams led by David N. Payne of the University of
Southampton and Emmanuel Desurvire at Bell Labs in 1986. Robust modern optical fiber uses glass
for both core and sheath, and is therefore less prone to aging. It was invented by Gerhard Bernsee
of Schott Glass in Germany in 1973.

The emerging field of photonic crystals led to the development in 1991 of photonic-crystal fiber,[21] which guides light by diffraction from a periodic structure, rather than by total internal
reflection. The first photonic crystal fibers became commercially available in 2000. Photonic crystal
fibers can carry higher power than conventional fibers and their wavelength-dependent properties can
be manipulated to improve performance.

Communication
Optical fiber can be used as a medium for telecommunication and computer
networking because it is flexible and can be bundled as cables. It is especially advantageous for long-distance communications, because light propagates through the fiber with little attenuation compared
to electrical cables. This allows long distances to be spanned with few repeaters.
The per-channel light signals propagating in the fiber have been modulated at rates as
high as 111 gigabits per second (Gbit/s) by NTT, although 10 or 40 Gbit/s is typical in deployed
systems. In June 2013, researchers demonstrated transmission of 400 Gbit/s over a single channel
using 4-mode orbital angular momentum multiplexing.
Each fiber can carry many independent channels, each using a different wavelength of
light (wavelength-division multiplexing (WDM)). The net data rate (data rate without overhead bytes)
per fiber is the per-channel data rate reduced by the FEC overhead, multiplied by the number of
channels (usually up to eighty in commercial dense WDM systems as of 2008). As of 2011 the record
for bandwidth on a single core was 101 Tbit/s (370 channels at 273 Gbit/s each). The record for a
multi-core fiber as of January 2013 was 1.05 petabits per second. In 2009, Bell Labs broke the 100
petabit per second · kilometer barrier (15.5 Tbit/s over a single 7000 km fiber).
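A toy calculation of the net per-fiber data rate described above; the 7% FEC overhead and the 40 Gbit/s, 80-channel configuration are assumed example values consistent with the text, not figures from the article.

```python
def net_fiber_rate_gbps(per_channel_gbps, fec_overhead, channels):
    """Net data rate: per-channel rate minus FEC overhead, times the channel count."""
    return per_channel_gbps * (1 - fec_overhead) * channels

# 80 channels of 40 Gbit/s with an assumed 7% FEC overhead -> about 3 Tbit/s per fiber.
print(net_fiber_rate_gbps(40, 0.07, 80))
```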
For short distance application, such as a network in an office building, fiber-optic
cabling can save space in cable ducts. This is because a single fiber can carry much more data than
electrical cables such as standard category 5 Ethernet cabling, which typically runs at 100 Mbit/s or
1 Gbit/s speeds. Fiber is also immune to electrical interference; there is no cross-talk between signals
in different cables, and no pickup of environmental noise. Non-armored fiber cables do not conduct
electricity, which makes fiber a good solution for protecting communications equipment in high
voltage environments, such as power generation facilities, or metal communication structures prone
to lightning strikes. They can also be used in environments where explosive fumes are present,
without danger of ignition. Wiretapping (in this case, fiber tapping) is more difficult compared to
electrical connections, and there are concentric dual-core fibers that are said to be tap-proof.

Fibers are often also used for short-distance connections between devices. For example,
most high-definition televisions offer a digital audio optical connection. This allows the streaming of
audio over light, using the TOSLINK protocol.

Sensors
Fibers have many uses in remote sensing. In some applications, the sensor is itself an
optical fiber. In other cases, fiber is used to connect a non-fiberoptic sensor to a measurement system.
Depending on the application, fiber may be used because of its small size, or the fact that no electrical
power is needed at the remote location, or because many sensors can be multiplexed along the length
of a fiber by using different wavelengths of light for each sensor, or by sensing the time delay as light
passes along the fiber through each sensor. Time delay can be determined using a device such as
an optical time-domain reflectometer.
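As a rough illustration of how a time delay maps to a position along the fiber, the sketch below converts an OTDR round-trip delay into a distance; the group index of 1.468 is an assumed typical value for silica fiber.

```python
C = 299_792_458.0                       # speed of light in vacuum, m/s

def reflection_distance_m(round_trip_delay_s, group_index=1.468):
    """Distance along the fiber to a reflection, from an OTDR round-trip delay."""
    return C * round_trip_delay_s / (2.0 * group_index)

print(reflection_distance_m(10e-6))     # ~1021 m for a 10 microsecond round trip
```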
Optical fibers can be used as sensors to measure strain, temperature, pressure and other
quantities by modifying a fiber so that the property to be measured modulates the intensity, phase, polarization, wavelength, or transit time of light in the fiber. Sensors that vary the
intensity of light are the simplest, since only a simple source and detector are required. A particularly
useful feature of such fiber optic sensors is that they can, if required, provide distributed sensing over
distances of up to one meter. In contrast, highly localized measurements can be provided by
integrating miniaturized sensing elements with the tip of the fiber. These can be implemented by
various micro- and nanofabrication technologies, such that they do not exceed the microscopic
boundary of the fiber tip, allowing such applications as insertion into blood vessels via hypodermic
needle.
Extrinsic fiber optic sensors use an optical fiber cable, normally a multi-mode one, to
transmit modulated light from either a non-fiber optical sensor or an electronic sensor connected to
an optical transmitter. A major benefit of extrinsic sensors is their ability to reach otherwise
inaccessible places. An example is the measurement of temperature inside aircraft jet engines by
using a fiber to transmit radiation into a radiation pyrometer outside the engine. Extrinsic sensors can
be used in the same way to measure the internal temperature of electrical transformers, where the
extreme electromagnetic fields present make other measurement techniques impossible. Extrinsic
sensors measure vibration, rotation, displacement, velocity, acceleration, torque, and twisting. A solid
state version of the gyroscope, using the interference of light, has been developed. The fiber optic
gyroscope (FOG) has no moving parts, and exploits the Sagnac effect to detect mechanical rotation.

Common uses for fiber optic sensors include advanced intrusion detection security
systems. The light is transmitted along a fiber optic sensor cable placed on a fence, pipeline, or
communication cabling, and the returned signal is monitored and analysed for disturbances. This
return signal is digitally processed to detect disturbances and trip an alarm if an intrusion has occurred.

Power transmission
Optical fiber can be used to transmit power using a photovoltaic cell to convert the light
into electricity. While this method of power transmission is not as efficient as conventional ones, it
is especially useful in situations where it is desirable not to have a metallic conductor as in the case
of use near MRI machines, which produce strong magnetic fields.[34] Other examples are for powering
electronics in high-powered antenna elements and measurement devices used in high-voltage
transmission equipment.

Multi-mode fiber
Fiber with large core diameter (greater than 10 micrometers) may be analyzed
by geometrical optics. Such fiber is called multi-mode fiber, from the electromagnetic analysis (see
below). In a step-index multi-mode fiber, rays of light are guided along the fiber core by total internal
reflection. Rays that meet the core-cladding boundary at a high angle (measured relative to a
line normal to the boundary), greater than the critical angle for this boundary, are completely
reflected. The critical angle (minimum angle for total internal reflection) is determined by the
difference in index of refraction between the core and cladding materials. Rays that meet the boundary
at a low angle are refracted from the core into the cladding, and do not convey light and hence
information along the fiber. The critical angle determines the acceptance angle of the fiber, often
reported as a numerical aperture. A high numerical aperture allows light to propagate down the fiber
in rays both close to the axis and at various angles, allowing efficient coupling of light into the fiber.
However, this high numerical aperture increases the amount of dispersion as rays at different angles
have different path lengths and therefore take different times to traverse the fiber.
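To make the relationship between the core-cladding index difference, the critical angle and the numerical aperture concrete, here is a small illustrative Python sketch; the index values are assumed, silica-like placeholders, not data taken from this text.

```python
import math

def critical_angle_deg(n_core: float, n_clad: float) -> float:
    """Minimum angle (measured from the boundary normal) for total internal
    reflection at the core-cladding interface."""
    return math.degrees(math.asin(n_clad / n_core))

def numerical_aperture(n_core: float, n_clad: float) -> float:
    """Numerical aperture, which sets the acceptance angle of the fiber."""
    return math.sqrt(n_core**2 - n_clad**2)

# Example with assumed indices (illustrative only).
n1, n2 = 1.48, 1.46
print(round(critical_angle_deg(n1, n2), 1))   # ~80.6 degrees
print(round(numerical_aperture(n1, n2), 3))   # ~0.242
```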

Optical fiber types


In graded-index fiber, the index of refraction in the core decreases continuously between
the axis and the cladding. This causes light rays to bend smoothly as they approach the cladding,
rather than reflecting abruptly from the core-cladding boundary.
The resulting curved paths reduce multi-path dispersion because high angle rays pass
more through the lower-index periphery of the core, rather than the high-index center. The index
profile is chosen to minimize the difference in axial propagation speeds of the various rays in the
fiber. This ideal index profile is very close to a parabolic relationship between the index and the
distance from the axis.
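A minimal sketch of such a near-parabolic profile, using the standard power-law form with assumed index values; the exponent g = 2 gives the parabolic case mentioned above, and all numbers are illustrative assumptions.

```python
import math

def graded_index(r: float, a: float, n1: float, n2: float, g: float = 2.0) -> float:
    """Power-law index profile: n1 is the on-axis index, n2 the cladding
    index, a the core radius; g = 2 is the near-parabolic case."""
    delta = (n1**2 - n2**2) / (2 * n1**2)   # relative index difference
    if r >= a:
        return n2                            # cladding region
    return n1 * math.sqrt(1 - 2 * delta * (r / a) ** g)

# The index falls smoothly from the axis (r = 0) towards the cladding (r = a).
for r in (0.0, 12.5, 25.0):
    print(round(graded_index(r, a=25.0, n1=1.48, n2=1.46), 4))
```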

Light scattering
The propagation of light through the core of an optical fiber is based on total internal
reflection of the lightwave. Rough and irregular surfaces, even at the molecular level, can cause light
rays to be reflected in random directions. This is called diffuse reflection or scattering, and it is
typically characterized by a wide variety of reflection angles.
Light scattering depends on the wavelength of the light being scattered. Thus, limits to
spatial scales of visibility arise, depending on the frequency of the incident light-wave and the
physical dimension (or spatial scale) of the scattering center, which is typically in the form of some
specific micro-structural feature. Since visible light has a wavelength of the order of
one micrometer (one millionth of a meter), scattering centers will have dimensions on a similar spatial
scale. Thus, attenuation results from the incoherent scattering of light at
internal surfaces and interfaces. In (poly)crystalline materials such as metals and ceramics, in addition
to pores, most of the internal surfaces or interfaces are in the form of grain boundaries that separate
tiny regions of crystalline order. It has recently been shown that when the size of the scattering center
(or grain boundary) is reduced below the size of the wavelength of the light being scattered, the
scattering no longer occurs to any significant extent. This phenomenon has given rise to the
production of transparent ceramic materials.
Similarly, the scattering of light in optical quality glass fiber is caused by molecular
level irregularities (compositional fluctuations) in the glass structure. Indeed, one emerging school of
thought is that a glass is simply the limiting case of a polycrystalline solid. Within this framework,
"domains" exhibiting various degrees of short-range order become the building blocks of both metals

and alloys, as well as glasses and ceramics. Distributed both between and within these domains are
micro-structural defects that provide the most ideal locations for light scattering. This same
phenomenon is seen as one of the limiting factors in the transparency of IR missile domes.

Cable construction
In practical fibers, the cladding is usually coated with a tough resin buffer layer, which
may be further surrounded by a jacket layer, usually plastic. These layers add strength to the fiber but
do not contribute to its optical wave guide properties. Rigid fiber assemblies sometimes put light-absorbing ("dark") glass between the fibers, to prevent light that leaks out of one fiber from entering
another. This reduces cross-talk between the fibers, or reduces flare in fiber bundle imaging
applications.
Modern cables come in a wide variety of sheathings and armor, designed for
applications such as direct burial in trenches, high voltage isolation, dual use as power
lines, installation in conduit, lashing to aerial telephone poles, submarine installation, and insertion
in paved streets. The cost of small fiber-count pole-mounted cables has greatly decreased due to the
high demand for fiber to the home (FTTH) installations in Japan and South Korea.
Fiber cable can be very flexible, but traditional fiber's loss increases greatly if the fiber
is bent with a radius smaller than around 30 mm. This creates a problem when the cable is bent around
corners or wound around a spool, making FTTX installations more complicated. "Bendable fibers",
targeted towards easier installation in home environments, have been standardized as ITU-T G.657.
This type of fiber can be bent with a radius as low as 7.5 mm without adverse impact. Even more
bendable fibers have been developed. Bendable fiber may also be resistant to fiber hacking, in which
the signal in a fiber is surreptitiously monitored by bending the fiber and detecting the leakage.[64]
Another important feature of a cable is its ability to withstand horizontally applied
force, technically called maximum tensile strength, which defines how much force can be applied to the cable
during installation.
Some fiber optic cable versions are reinforced with aramid yarns or glass yarns as
an intermediary strength member. In commercial terms, glass yarns are more cost-effective
and involve no loss in the mechanical durability of the cable. Glass yarns also protect the cable core against
rodents and termites.

Conclusions
Fiber optic cables provide unique advantages in many systems, particularly in secure
and long distance applications. Choosing the proper cable depends upon the number of fibers
required, installation location, topology, and the overall design of the system. Cable constructions are
available for both indoor and outdoor applications to provide a solution for virtually any system.
Color coding provides an easy identification method for multi-fiber cables.

Fiber- A thread-like structure forming part of the muscular, nervous, connective, or


other tissue in the human or animal body.
Refraction-Physics. The change of direction of a ray of light, sound, heat, or the like,
in passing obliquely from one medium into another in which its wave velocity is different.
MRI- Also called NMR. Magnetic resonance imaging: a noninvasive diagnostic
procedure employing an MR scanner to obtain detailed sectional images of the internal structure of
the body.
Gastroscope- a lighted flexible tubular instrument passed through the mouth for
examining the esophagus, stomach, and duodenum.
Attenuation- the progressive reduction in amplitude of a signal as it travels farther
from the point of origin.
Perpendicular- meeting a given line or surface at right angles.
Velocity- rapidity of motion or operation; swiftness; speed.
Strain- The extent to which a body is distorted when it is subjected to a deforming force, as when
under stress. The distortion can involve a change both in shape and in size. All measures
of strain are dimensionless (they have no unit of measure).
Yarn- thread made of natural or synthetic fibers and used for knitting and weaving.
Thread- a fine cord of flax, cotton, or other fibrous material spun out to considerable
length, especially when composed of two or more filaments twisted together.

Simonas Berankis RT-3

Microsoft's HoloLens explained: How it works and why it's different
Has Microsoft suddenly pushed us into the age of "Star Trek" and "Minority Report"? For those
confused about what's actually going on with the company's new head-mounted gadget, here's
the rundown.
Microsoft has a vision for the future, and it involves terms and technology straight out of science
fiction.
But are we actually glimpsing that future? Yes and no.
Microsoft's HoloLens, which the company unveiled at its Redmond, Wash., headquarters on
Wednesday, is a sleek, flashy headset with transparent lenses. You can see the world around you,
but suddenly that world is transformed -- with 3D objects floating in midair, virtual screens on
the wall and your living room covered in virtual characters running amok.
Technology companies have long promised to bring us the future now, reaching ahead 5 or 10
years to try to amaze consumers with the next big breakthrough. Hollywood, on the other hand,
has shown that tech in action (or at least simulations of it).
In "Minority Report," for instance, Tom Cruise's character used sweeping, midair hand gestures
and transparent screens to do police work. Five years later, Apple unveiled the iPhone, and with
it, a touchscreen operated by hand and finger gestures. Microsoft in turn served up its Kinect
gesture-control device, which tracks people's movements through space and feeds the data into
an interface.
Going further, "The Matrix" showed hackers plugging computers into people's brains to transport
them to imaginary cities. And in "Star Trek," computers used energy fields and visual tricks to
create worlds people could touch and feel.
We're not even close to those scenarios yet, but we're taking tiny steps in that direction.
Companies like Facebook, Google and Microsoft are now attempting to move that fiction toward
reality, and the public is beginning to see those visions of tomorrow take form.
So how does the HoloLens measure up against other reality-altering gadgets?
What's a HoloLens, and how does it work?
Microsoft's HoloLens is not actually producing 3D images that everyone can see; this isn't "Star
Trek."

Instead of everyone walking into a room made to reproduce 3D images, Microsoft's goggles
show images only the wearer can see. Everyone else will just think you're wearing goofy-looking
glasses.
Another key thing about HoloLens is what Microsoft is trying to accomplish.

Microsoft envisions the HoloLens as both a personal and a workplace device. Microsoft

The company is not trying to transport you to a different world, but rather bring the wonders of a
computer directly to the one you're living in. Microsoft is overlaying images and objects onto our
living rooms.
As a HoloLens wearer, you'll still see the real world in front of you. You can walk around and
talk to others without worrying about bumping into walls.
The goggles will track your movements, watch your gaze and transform what you see by blasting
light at your eyes (it doesn't hurt). Because the device tracks where you are, you can use hand
gestures -- right now it's only a midair click by raising and lowering your finger -- to interact
with the 3D images.
There's a whole bunch of other hardware that's designed to help the HoloLens' effects feel
believable. The device has a plethora of sensors to sense your movements in a room and it uses
this information along with layers of colored glass to create images you can interact with or
investigate from different angles. Want to see the back of a virtual bike in the middle of your
kitchen? Just walk to the other side of it.
The goggles also have a camera that looks at the room, so the HoloLens knows where tables,
chairs and other objects are. It then uses that information to project 3D images on top of and
even inside them -- place virtual dynamite on your desk and you might blow a hole to see what's
inside.

With Skype video chatting, HoloLens users can let others see through their eyes to help with tasks and
even doodle right on top of your line of vision. Microsoft

While playing a demonstration based on the popular game Minecraft, I tapped my finger on a
coffee table in the real world. But what I saw was my finger chipping away at its surface. When I
was done, I saw a lava-filled cavern inside.
That's just a gimmick, but Microsoft said it indicates potential. HoloLens, Microsoft said, can
transform businesses and open up new possibilities for how we interact.
I used the HoloLens to video chat with a Microsoft employee who was using Skype on a tablet.
Her task? To help me rewire a light switch. She accessed a camera on the HoloLens to see
through my eyes, then she drew diagrams and arrows where I was looking to show me what tools
to pick up and how to use them.

Imagine how these tricks could be used to train pilots or guide doctors through complex
operations.
Different from the Rift
So how about the Oculus Rift? Created by Oculus VR, a startup Facebook purchased for more
than $2 billion in March 2014, the headset is considered the poster child of the blossoming
virtual reality market.
From a distance, Oculus' headset looks a bit like Microsoft's HoloLens in that it's a device worn
on your head. But that's where the similarities end. Whereas Microsoft wants to help us interact
with the real world in new ways, Oculus wants to immerse us in an entirely new world.
To put it simply, the Rift headset is a screen on your face. But when it's turned on, the images it
produces trick your brain into thinking you've been teleported to a different world, like a starship
out in space, or the edge of a skyscraper. Oculus could, one day, take a more practical route,
transporting you courtside to a live basketball game or to a sun-soaked beach to relax.
The goal for Oculus is to trick the user into believing they're actually there -- wherever it's
bringing you. That feeling is called "presence," an ambition Microsoft's HoloLens isn't reaching
for.
Enthusiasts say that moment, where your brain is tricked into believing you're actually
somewhere else, is magical.
"I've seen a handful of technology demos in my life that made me feel like I was glimpsing into
the future," wrote venture capitalist Chris Dixon, who helped lead investment firm Andreessen
Horowitz's funding in Oculus VR. "The best ones were: the Apple II, the Macintosh, Netscape,
Google, the iPhone, and -- most recently -- the Oculus Rift."
Oculus isn't alone in its quest. Sony is attempting something similar with its Project Morpheus
headset. Both have outspoken plans to use the technology to transform all manner of industries,
starting with video games. But developers say it's hard to get it right. The images need to be
carefully connected to your physical movements without any delays. When they aren't,
consumers feel a form of motion sickness.
Same difference
Ultimately, these companies are on different roads to the same destination, which is trying to
reimagine how we interact with computers. We're all used to the mouse and the keyboard, and
we're learning to live with the glass screens of smartphones too. So far, each of these devices has
been good enough to convey the information from a book or the scenes of a movie.
But Oculus, Microsoft, Google and others believe in a different, potentially more natural way to
interact with our technology. These companies and the hardware they're creating imagine a world

where hand gestures, 3D images and images superimposed on reality are the next-generation
tools for productivity, communication and everything else we use gadgets and the Internet for.
It sounds like science fiction, but if these devices work the way tech luminaries hope they can,
such dreams may be reality sooner than we think.
Words:

Sleek - Thin and elegant in design:


Running amok - to run about with or as if with a frenzied desire to kill
Gimmick - An innovative or unusual mechanical contrivance; a gadget.
Goofy - Silly; ridiculous:
Plethora - overabundance; excess.
Glimpse - A brief, incomplete view or look.
Convey - To communicate or make known; impart:
Superimpose - To lay or place (something) on or over something else.
Luminaries - An object, such as a celestial body, that gives light.

Marius Boguas RTE-3

Turbocharged Direct Injection


Turbocharged Direct Injection or TDI [1] is a design of turbodiesel engines, which
feature turbocharging and cylinder-direct fuel injection,[1] developed and produced by the Volkswagen
Group.[2] These TDI engines are widely used in all mainstream Volkswagen Group marques of passenger
cars and light commercial vehicles produced by the company[3] (particularly those sold in Europe). They are
also used as marine engines - Volkswagen Marine,[4][5][6] and Volkswagen Industrial Motor[7]applications.
In many countries, TDI is a registered trademark of Volkswagen AG.[2]
The TDI designation has also been used on vehicles powered by Land Rover designed diesel engines.
These are unrelated to VAG engines.
The TDI engine uses direct injection,[1][2] where a fuel injector sprays atomised fuel directly into the
main combustion chamber of each cylinder,[1][2] rather than the pre-combustion chamber prevalent in older
diesels which used indirect injection. The engine also uses forced induction by way of a turbocharger[1][2] to
increase the amount of air which is able to enter the engine cylinders,[2] and most TDI engines also feature
an intercooler to lower the temperature (and therefore increase the density) of the 'charged', or compressed
air from the turbo, thereby increasing the amount of fuel that can be injected and combusted.[1] These, in
combination, allow for greater engine efficiency, and therefore greater power outputs[2] (from a more complete
combustion process compared to indirect injection), while also decreasing emissions and providing
more torque[2] than the non-turbo and non-direct injection petrol engined counterpart from VAG.
Similar technology has been used by other automotive companies, but "TDI" specifically refers to
these Volkswagen Group engines. Naturally aspirated direct-injection diesel engines (those without a
turbocharger) made by Volkswagen Group use the Suction Diesel Injection (SDI) label.
Because these engines are relatively low displacement and quite compact, they have a low surface area. The
resulting reduced surface area of the direct injection diesel engine reduces heat losses, and thereby
increases engine efficiency, at the expense of slightly increased combustion noise. A direct injection engine
is also easier to start when cold,[8]because of more efficient placing and usage of glowplugs.
Direct injection turbodiesel engines are frequent winners of various prizes in the International Engine of the
Year Awards. In 1999 in particular, six out of twelve categories were won by direct injection engines: three
were Volkswagen, two were BMW, and one Audi.[citation needed] Notably that year, the Volkswagen Group 1.2 TDI
3L beat the Toyota Prius to win "Best Fuel Economy" in its class.[citation needed] The TDI engine has won "Green
Car of the Year award" in the years 2009 (Volkswagen Jetta 2.0 litre common-rail TDI clean diesel) and 2010
(Audi A3 TDI clean diesel) beating other various electric cars.
The first passenger car to be powered by direct injection was the 1986 Fiat Croma 2.0 TD i.d. (Turbo Diesel
iniezione diretta; the injection pump was developed by Bosch in accordance with Fiat's engineers' specifications).[citation needed]

Rover introduced its MDI turbocharged direct injection diesel developed with Perkins, (also known as the
Perkins Prima) in 1988 in the Rover Montego. It was also sold in marine form by Volvo. It used a Bosch VE
injection pump. The engine had been launched in naturally aspirated form for commercial vehicles in 1986.
The first Volkswagen Group TDI engine was the Audi-developed 2.5 litre R5 TDI, an inline five-cylinder
engine (R5), introduced in the Audi 100 in 1989[citation needed] and this variant is still used today in Volkswagen
Marine applications. The TDI arrangement has been enhanced through various stages of development by
improving the efficiency of the turbocharger, increasing the pressure at which fuel can be injected, and more

precisely timing when the injection of fuel takes place. There have been a few major 'generations', starting
with what are known as "VE" and "VP" (German: Verteilerpumpe) engines,[citation needed] which use a distributor-type
injection pump. In 2000, the Pumpe Düse (PD, variously translated "pump nozzle", "unit injector", "pump
injector") TDI engine[1] began to appear in Europe, eventually coming to North America a few years later.
The Pumpe Düse design was a reaction to the development of high-pressure common rail fuel injection
systems by competitors - an attempt by Volkswagen Group to create an in-house technology of comparable
performance that would not require any royalties to be paid.[citation needed] While Pumpe Düse engines had a
significantly higher injection pressure than older engines, they are slightly less refined when compared to the
very latest common rail[citation needed] and, with the original solenoid-operated unit injectors, weren't able to control
injection timing as precisely (a major factor in improving emissions).[citation needed] Some current PD TDI engines
now use piezoelectric unit injectors, allowing far greater control of injection timing and fuel delivery. From the
2009 model year onwards, TDI engines using the common rail (CR) technique, again with piezoelectric
injectors, are now used in various Volkswagen Group models.[1] The CR engines are available in many sizes,
including 1.2, 1.6, 2.0, 2.7, 3.0, 4.2 and 6.0 litres, with outputs from 55 to 368 kW (75 to 500 PS).
A motor racing version of the common rail TDI engine made an impact in March 2006 when it was used in
the Le Mans Prototype (LMP) Audi R10 TDI, and made its debut win in the 12 Hours of
Sebring race.[9][10][11] This victory was followed three months later by another one in the 24 Hours of Le
Mans race, becoming the first diesel-powered car to win these prestigious endurance races.[12][13][14] Fuel
economy was a significant factor, as the car did not have to refuel as often as petrol engined race cars in the
race.[15] The car was fueled with a special synthetic V-Power diesel from Shell.[16] The Audi
R10, R15 and R18 TDIs have won at Le Mans seven times in eight years, from 2006 to 2011, with only the
2009 race being won by Peugeot's 908 HDi FAP, which is also a diesel powered car.
In 2007, SEAT with the León Mk2 TDI at the Motorsport Arena Oschersleben in Germany became the
first manufacturer to win a round of the World Touring Car Championship (WTCC) series in a diesel
car,[17][18] only a month after announcing it would enter the FIA World Touring Car Championship with the León
TDI. SEAT's success with the León TDI continued, resulting in consecutive wins of both titles (drivers' and
manufacturers') in the 2008 World Touring Car Championship and the 2009 World Touring Car
Championship.[19][20]
In 2008, SEAT - with the León Mk2 TDI at Donington Park in England - became the first manufacturer to win
a round of the British Touring Car Championship (BTCC) in a diesel-powered car. Jason Plato won race 1 of
the weekend and Darren Turner won race 3.[21]
The fuels required for TDI engines include diesel fuel (also known as petrodiesel) or B5, B20, or
B99 biodiesel, depending on the emissions equipment and the location.
A 2007 Volkswagen Jetta Mk5 with a 1.9 TDI engine and a five-speed manual transmission achieves
5.2 litres per 100 kilometres (54 mpg-imp; 45 mpg-US) on the European combined-cycle test (a US EPA test of
the same vehicle would achieve around 34 MPG), while a six-speed direct-shift gearbox (DSG) automatic
version reaches 5.9 litres per 100 kilometres (48 mpg-imp; 40 mpg-US).[22]
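The imperial and US mpg figures quoted above are unit conversions of the same L/100 km values; here is a small Python sketch of that conversion, using only the standard gallon and mile definitions (the code itself is an illustration, not part of the original article).

```python
def l_per_100km_to_mpg(l_per_100km: float) -> tuple:
    """Convert European fuel consumption (L/100 km) to (US mpg, imperial mpg).
    Constants: 1 US gallon = 3.785411784 L, 1 imperial gallon = 4.54609 L,
    1 mile = 1.609344 km."""
    km_per_litre = 100.0 / l_per_100km
    miles_per_litre = km_per_litre / 1.609344
    return miles_per_litre * 3.785411784, miles_per_litre * 4.54609

mpg_us, mpg_imp = l_per_100km_to_mpg(5.2)
print(round(mpg_us), round(mpg_imp))   # roughly 45 and 54, matching the figures above
```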
Newer TDI engines, with higher injection pressures, are less forgiving about poor-quality fuel than their
1980s ancestors. Volkswagen Group's warranty does not cover damage due to bad fuel (diesel or bio), and
has in the past recommended that only mixtures up to 5% biodiesel (B5) be used. Volkswagen Group has
recently permitted mixes up to B20, and has recommended B5 be used in place of 100% petroleum-based
diesel because of biodiesel's improved lubricating properties.[23][unreliable source?][citation needed]

In North America, No. 2 diesel fuel is recommended, since it has a higher cetane number than No. 1 fuel,
and has lower viscosity (better ability to flow) than heavier fuel oils. Some owners in North America, where
cetane levels are generally poor (as low as 40), use additives, or premium diesel, to get cetane numbers
closer to the standard levels found in the European market (at least 51) where the engine is designed.
Improved cetane reduces emissions while improving performance, and may increase fuel economy. [citation
needed]

New ultra low-sulphur petroleum-only diesels cause seals to shrink[24] and can cause fuel pump failures

in TDI engines; biodiesel blends are reported to prevent that failure.

Combustion - The process of burning something


combustion chamber - An enclosed space in which combustion takes place, especially in

an engine or furnace.
Intercooler - An apparatus for cooling gas between successive compressions, especially in
a supercharged vehicle engine.
Emissions - The production and discharge of something, especially gas or radiation: the effects of
lead emission on health.
Torque - A force that tends to cause rotation: the three-litre engine has lots of torque.
Frequent - Occurring or done many times at short intervals: frequent changes in policy.
Common rail - A direct fuel injection system for petrol and diesel engines.
Manual transmission - An automotive transmission consisting of a system of interlocking gear wheels

and a lever that enables the driver to shift gears manually.

Evaldas Dudonis RTE-3

DRIVING THE 2016 TOYOTA MIRAI IS REMARKABLY UNREMARKABLE (AND THAT'S A GOOD THING)
The most remarkable thing about the first hydrogen fuel cell vehicle that you can
buy outright is how unremarkable it is to drive.
But don't get me wrong: That's not a bad thing.
The 2016 Toyota Mirai represents a paradigm shift. Appropriately, its very name
means "future" in Japanese.
As a fuel cell vehicle, or FCV, the Mirai eschews century-old gasoline engine
technology in favor of a stack of wafer-thin cells that creates a chemical reaction
when hydrogen passes through them. The result is electricity, which powers a
compact motor to drive the front wheels.
Unlike the emissions a typical car spews from its tailpipe, the only byproduct fuel
cells create is water, which drips out of a small port under the vehicle.
Yet there's no wonderment when starting up and driving the Mirai. It feels like
any other electric car, with a smooth surge of power when depressing the
accelerator from a standstill.
Also like most electric cars, the Mirai feels zippy around town, but once you get
above 45 miles per hour or so, you start to feel like the car is tiring out. There's
just enough power to keep pace with traffic, but not much more.
Toyota
The Mirai's compressed hydrogen fuel tank takes about five minutes to fill up.

Mashing the throttle to the floor at highway speeds results in slow but steady
acceleration. We've come to expect this from electric vehicles, regardless of how
they generate their energy.
A Power Mode button on the Mirai's center console boosts throttle response
noticeably. An Eco Mode button does just the opposite, curtailing the throttle
to save energy. But if you slam the accelerator to the floor in any of the three
modes - normal, power or eco - the Mirai will deliver all the acceleration it can
muster.

The interior stays remarkably quiet, except for a distinct, high-pitched whirring
sound during moderate to hard acceleration. The sound, which is not
unpleasant, is louder than that of the plug-in electric Nissan Leaf, but far
quieter than a gasoline engine.
The Mirai feels spacious enough inside, though it looks a little drab, with black
plastic in different finishes, shiny to dull, on nearly every surface.
The speedometer and other instruments are located high in the center of the
dashboard, like on the Toyota Prius. Controls on the center console for the
navigation, stereo and air conditioner were all easy to use, including the touch-sensitive sliders for setting the cabin temperature (sometimes these types of
controls can be finicky, but that was not the case).
The seats in the Mirai are comfortable, both front and rear. But it's unfortunate
that U.S. models will get only vinyl, instead of the more attractive cloth fabric
offered in other markets. Toyota execs say Americans think cloth upholstery is
cheap compared to leather, but putting real leather on a hydrogen fuel cell
vehicle would go against its environmentally friendly positioning, so they
compromised and went with faux leather.
Toyota
The 2016 Toyota Mirai's interior is spacious and comfortable. The dashboard looks high-tech.
Black vinyl is the only upholstery offered for the United States.

Seating capacity is limited to four. A center rear seat would have allowed one
more occupant, but Toyota chose to put an armrest there instead. The idea was
to create a luxurious experience for rear-seat occupants, execs said.
The main letdown in driving the Mirai FCV is minor and won't be an issue for
those considering whether to get one. That is, it lacks the sporty feel of the Mirai
prototype, which I tested at Toyota's proving grounds outside Tokyo late last
year. The prototype felt almost like a sports car when driven briskly around the
banked test track. The low center of gravity due to the battery pack under the
floor really made it hug the road.
Also, the steering was sharper and the suspension was tighter on the prototype
than it is on the production Mirai.
Clearly, Toyota prioritized a smooth, comfortable ride instead, and the Mirai
excels at delivering it. Most buyers will appreciate comfort over sportiness
anyway.
The 2016 Toyota Mirai has a range of about 300 miles on a full tank of
compressed hydrogen. Refueling takes less than five minutes and is free for now.

The Mirai is set to go on sale in California by next fall and in a handful of


Northeastern states by 2016. The price is $57,500, or $499 a month for a 36-month lease. Federal and state incentives could lower the final cost for buyers to
under $45,000.
Toyota
The center console of the 2016 Toyota Mirai uses touch-sensitive controls.

There are fewer than 15 hydrogen fueling stations in the United States and all
but a couple are in California. Toyota acknowledged that the lack of fueling
stations is one of the factors that limits the pool of potential buyers for vehicles
like the Mirai.
To help overcome that hurdle, the automaker announced plans to partner with a
company called Air Liquide to develop infrastructure across Connecticut,
Massachusetts, New Jersey, New York, and Rhode Island. The goal is to add a
dozen stations in those states by 2016, which is when the Mirai goes on sale in
the Northeast.
The Mirai made its debut at the Los Angeles Auto Show in November and is the
production version of the Toyota FCV concept shown at the 2013 Tokyo Motor
Show.
Though it is expected to sell only in small numbers at first - Toyota anticipates
sales of 700 worldwide next year - it is seen as a breakthrough car by hydrogen
proponents, in much the same way as the Prius was for hybrid vehicles.
Other fuel cell vehicles have come to market before. For example, Hyundai
started selling a hydrogen-powered version of its Tucson in select California
markets just this past summer. But Hyundai leases its Tucson to pre-selected
individuals.
The 2016 Toyota Mirai is the first FCV to be made available for any general
consumer to buy.
Based on our test drive, those pioneering drivers are likely to be pleased.
Toyota
The 2016 Toyota Mirai has a port in the upper left corner of the trunk, which allows it to
provide power to a house or some other building when connected with the appropriate cable.
Basically, it can turn the car into an emergency power source during a natural disaster, for
example.

Using Less Energy - Energy-Efficient Vehicles: The Hybrid Vehicle
One popular and successful energy-efficient vehicle is the hybrid. It contains both an internal combustion
engine, powered by gasoline, and an electric motor, powered by batteries. Different vehicles use somewhat
different arrangements to switch back and forth between the motor and the engine. But all hybrids do the
same thing - alternate between gasoline power and battery power. Hybrids use less gasoline than regular
vehicles and thus produce significantly fewer polluting emissions.

The hybrid's electric motor gets its power from rechargeable batteries, but unlike electric vehicles, hybrids
do not need to be plugged in to recharge. The batteries are recharged mostly by the combustion engine.
Most hybrids also have two braking systems: traditional hydraulic brakes and a regenerative braking
system. This means that the energy created by applying the brakes to stop the vehicle goes back to the
batteries rather than being lost as heat from friction.
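As a back-of-the-envelope illustration of what regenerative braking could return to the battery, here is a hedged Python sketch; the vehicle mass, speed and 60% recovery efficiency are assumptions chosen for illustration only, not figures for any specific hybrid.

```python
def recoverable_braking_energy_kwh(mass_kg: float, speed_kmh: float,
                                   recovery_efficiency: float = 0.6) -> float:
    """Rough estimate of the kinetic energy a regenerative braking system
    could send back to the battery when stopping from a given speed.
    The default 60% recovery efficiency is an illustrative assumption."""
    v = speed_kmh / 3.6                                # convert km/h to m/s
    kinetic_energy_j = 0.5 * mass_kg * v**2            # kinetic energy in joules
    return kinetic_energy_j * recovery_efficiency / 3.6e6  # joules -> kWh

# Example: a ~1400 kg hybrid braking to a stop from 50 km/h.
print(round(recoverable_braking_energy_kwh(1400, 50), 4))  # ~0.0225 kWh per stop
```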

The most popular hybrids, including the Toyota Prius and the Honda Civic, are small cars. But automobile
makers also have larger hybrid sedans, such as the Toyota Camry, and sport-utility vehicles, including the
Ford Escape. However, most of the larger hybrids are not nearly as energy efficient as the smaller vehicles,
because they contain larger combustion engines to provide more power and because they are larger cars
overall.

Like electric vehicles, hybrids were developed early in automobile history. The first one was built by
Ferdinand Porsche in 1900. This vehicle used an internal combustion engine to power a generator attached
to electric motors in the wheel hubs, as well as a battery to store energy. In the early years of the
automobile industry, a number of other hybrids were developed. But like the electric car, once Ford began
to mass-produce internal combustion engine vehicles, the hybrid disappeared.

Development of the Modern Hybrid

First hybrid vehicle


Photo courtesy of Dr. Ing., h.c.F. Porsche AG

The first hybrid automobile was built by Ferdinand Porsche in 1900, for Viennese coachmaker Lohner. It
had a combustion engine as well as an electric hub motor and could store energy in a battery.

1972 gas and electric hybrid car


Photo CalTech Archives. All rights reserved.

In 1974 engineers Victor Wouk (above, at the EPA test site) and Charlie Rosen built a gasoline and electric
hybrid using a 1972 Buick Skylark. Although the car met strict emission standards, the engineers lost
their financial backing from the EPA, and development was halted.

Interest in hybrids resumed - along with the resurgence of interest in the electric car - in the 1970s. One of
the first modern hybrids was built by American engineers Victor Wouk and Charlie Rosen in 1974. They
converted a 1972 Buick Skylark to a hybrid. This vehicle was designed as part of the Clean Car Incentive
Program from the U.S. Environmental Protection Agency. Funding for the program was discontinued, and
the vehicle never went into production.

For the next ten or so years, there were two approaches taken to developing hybrids. In one, enterprising
individuals figured out how to convert standard vehicles; if they got publicity, they shared plans with other
interested parties. In the other, a variety of companies built unusual concept hybrid vehicles. One example was
a six-wheeled, two-door vehicle built with a motor by Briggs & Stratton, best known for manufacturing
engines for lawn mowers. In the late 1980s German automaker Audi built a car that had an internal
combustion engine drive train for the front wheels and a battery-powered electric motor for the rear
wheels.

Honda Insight hybrid


Photo courtesy of Honda

The Honda Insight hybrid was redesigned in 2010, replacing an older, smaller model.

Six-wheeled hatchback
Photo courtesy of Motor Trend, photographer Frank Markus

This six-wheeled hatchback, built in 1980 by self-taught electric vehicle enthusiast Lou Gyogy, was powered
by a Briggs & Stratton two-cylinder engine and an electric motor.

Toyota Prius
Photo courtesy of Hannu Liivaar/Dreamstime.com

The Toyota Prius has become a very popular vehicle in the United States.

In early 1992 Japanese automaker Toyota announced a commitment to developing low-emission vehicles.
The first of these was the hybrid Prius, which went on sale in Japan in 1997. Rival Japanese automaker
Honda introduced its hybrid, the two-door Insight, in the United States in 1999. Toyota brought the four-door
Prius to the United States in 2000. Honda phased out the Insight in 2006, but brought back the name
in a much-revamped version for the 2010 model year. The company introduced a hybrid version of its
popular Civic in 2002. The Japanese hybrids were a success, and other automakers followed them into the
hybrid market. The first American hybrid was a version of the Ford Escape, a small sport-utility vehicle,
introduced in 2004. Since then, even larger hybrids have become available. However, the smaller ones
remain the most economical and least polluting.

Whats Inside the Hybrid?

Series hybrid diagram


Photo courtesy of Advanced Transportation Technology Institute

A series hybrid uses an internal combustion engine to power a generator, which in turn drives the electric
motor. The motor powers the drive train or recharges batteries.

Parallel hybrid diagram


Photo courtesy of Advanced Transportation Technology Institute

A parallel hybrid is designed to let the electric motor and the gasoline engine operate at the same time.

Not all hybrids are equal. Different types of drive trains produce somewhat different results in terms of
energy efficiency. In a series hybrid, an internal combustion engine turns a generator. The generator can
power the electric motor that powers the drive train or it can recharge the batteries.

In a parallel hybrid, the gasoline-fueled internal combustion engine and the battery-powered electric motor
can both power the transmission at the same time.

The parallel hybrid uses a small 4-cylinder engine, increasing fuel economy. However, the small engine does
not provide the level of instant power many drivers are used to in a typical vehicle.

The combination drive train unites the best features of both previous types. Both the internal combustion
engine and the electric motor can power the car. A computer in the car determines which one to use at all
times. The electric motor is the main motor at lower speeds. At higher speeds, if extra power is needed, or
if the batteries need to be recharged, then the internal combustion engine takes over. This makes the most
efficient use of the gasoline.
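A toy Python sketch of the decision logic described in this paragraph; the 40 km/h cut-over speed and the rule structure are illustrative assumptions, not the control strategy of any actual vehicle.

```python
def choose_power_source(speed_kmh: float, extra_power_needed: bool,
                        battery_low: bool) -> str:
    """Combination-hybrid style decision: electric motor at low speed,
    combustion engine at higher speed, when extra power is needed, or
    when the battery needs recharging. Thresholds are assumptions."""
    low_speed_limit_kmh = 40  # assumed cut-over speed, not a real spec
    if battery_low or extra_power_needed or speed_kmh > low_speed_limit_kmh:
        return ("internal combustion engine (also recharging battery)"
                if battery_low else "internal combustion engine")
    return "electric motor"

print(choose_power_source(25, extra_power_needed=False, battery_low=False))  # electric motor
print(choose_power_source(90, extra_power_needed=True,  battery_low=False))  # combustion engine
```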

There are also different levels of hybrid-ness. A full hybrid, such as the Toyota Prius, or the Ford Escape,
runs on the internal combustion engine, the electric motor, or both at any given time. This results in
significantly reduced gasoline consumption and thus significantly reduced emissions. An assist hybrid uses
the electric motor to help the combustion engine. This also reduces gasoline consumption by a notable
amount. Honda uses this system in its Civic hybrid. A mild hybrid uses the electric motor as a power booster
for the combustion engine, but the electric motor can never be used by itself. The Saturn Vue hybrid is a
mild hybrid, with slightly reduced gasoline consumption.

Volvo plug-in hybrid vehicle


Photo courtesy of Volvo Cars of North America

Swedish auto maker Volvo is among the companies developing plug-in hybrids.

Batteries are an important component of the hybrid. Current models use nickel-metal hydride batteries,
which are significantly more expensive than typical automobile batteries. Because of this, automakers have
provided extended warranties of 128,750 to 160,935 km (80,000 to 100,000 mi) on the batteries.

Still in development are plug-in hybrid electric vehicles (PHEVs). Automakers around the world are working
on hybrids that require very little gasoline. They use larger batteries and thus can go farther on the electric
motor. They recharge with electricity, rather than from the combustion engine. So far, the prototypes are
still being tested.

Words:
Paradigm shift - A fundamental change in approach or underlying assumptions
Unremarkable Not particularly interesting or surprising
Internal combustion - An engine which generates motive power by the burning of petrol, oil, or other fuel
with air inside the engine, the hot gases produced being used to drive a piston or do other work as they
expand.
Regenerative braking system - A method of braking in which energy is extracted from the parts braked, to
be stored and reused.
Energy efficiency ratio - The ratio of a heating or cooling system's output, per hour, in British thermal units
to the input in watts, used to measure the system's efficiency.
Eschews - Deliberately avoids using
Spews - Expels large quantities of (something) rapidly and forcibly
Finicky - Fussy about one's needs or requirements
Upholstery - Soft, padded textile covering that is fixed to furniture such as armchairs and sofas
Briskly - Active and energetic; showing a wish to deal with things quickly

Dovilė Galvanauskaitė RTE-3


Five future transportation technologies that will
actually happen
Just when you thought your commute was getting too routine.
Over the next decade, the idea of getting to work on time, heading out to the hinterlands for your
family vacation or even going to the game will become much easier. Cars will drive themselves
along pre-determined routes. Trains will use new magnetic rail systems. And an amazing new
hyperloop train will speed along at 800 miles per hour.
The best part? These innovations are not just spinning their wheels. They are set to debut within
the next 10 years or have already started transporting us.
"New technologies have the potential to make our roads and transit systems safer, greener and
more efficient," Gregory Winfree, the administrator of the Department of Transportation's
Research and Innovative Technology Administration, told FoxNews.com. "We are working hard
to ensure that these technologies can be integrated safely into our existing system."
"We will need to do something," said Thilo Koslowski, the lead automotive analyst at research
firm Gartner, who studies next-generation transportation, "given that we will continue to see
more vehicles on the road but won't be able to grow infrastructure at the same time. We have to
get smarter about using that infrastructure and/or innovate in passenger vehicles and mobility."

1. Hyperloop
One of the most exciting innovations in transportation has to be the Hyperloop train. Riding in
nearly airless tubes at 800 mph, the train will transport you from LA to San Francisco in just 30
minutes. Elon Musk announced a design scheme back in August, but FoxNews.com has
learned the concept is more than a pipe dream -- it is now a real technology in development.
"We're moving toward conceptual design," said Dr. Patricia Galloway, the co-leader
of Hyperloop Transportation Technologies Inc., hinting at more than just a rough design
sketch: an actual concept, something that is concrete and verifiable in the near future.
"On paper, hyperloop is both cheaper and quieter, and it is potentially much faster, than a
maglev train," said Rob Enderle, an analyst with Enderle Group who studies Silicon Valley
technology.

2. Maglev trains

Magnetic levitation trains are not just a lofty dream held over from the '50s. They are already in
operation in Shanghai and Japan. South Korea is building a maglev train that will operate within
the Incheon Airport, and China reportedly has a second maglev train in development.
A magnetic force lifts and propels the train using a minimal amount of energy compared
to diesel-powered or electric-powered trains. The trains whisk passengers along at up to
310 miles per hour. A planned maglev train will transport passengers over 200 miles
between Nagoya and Tokyo in just 40 minutes, helping to free congested roads, reduce
air pollution, and reduce accidents.
Of course, the main issue with maglev trains is the high cost of development. "Because of
the fast speeds, the trains have to be routed directly between destinations," said Enderle.

3. Autonomous vehicles
A robotic driver can think faster and smarter than a human driver -- and look in all directions at
once. That's the idea behind autonomous driving, where you take your hands off the wheel and
let the car do the driving for you. Ford has already announced a project called Traffic Jam Assist
and Cadillac is working on something called Super Cruise that lets the car take over.
Still, Google is leading the charge. It now uses a fleet of about 24 Lexus RX450h vehicles that
have logged a total of about 500,000 miles on California roads. The cars can look for exit
ramps, detect buildings, stop suddenly for other cars and change speeds as needed.
Enderle says there are many prototypes already on the road, especially those being tested by
Google in San Francisco. Nevada has already created laws that make them legal to use in
cities, including Las Vegas. In fact, Enderle says autonomous driving could appear within two
years if it weren't for some nagging legal issues (such as how to insure them) and public safety
concerns.

4. Smart cars
One way to solve transportation problems in major cities is to make the cars much smaller and
smarter. So-called smart cars have been around for many years. But there are signs of
progress. Many automakers, including BMW and Nissan, already offer compact electric cars.
The BMW i3, already available in Europe, can brake automatically when you take your foot off
the accelerator, consumes no gasoline and operates for 80-100 miles per charge.
"I do believe that there is a growing opportunity for new types of vehicles specifically designed
for urban areas," Koslowski said, adding that these cars need more of a "wow factor" and will
have to become part of an urban area's overall plan for better transportation in a city, not just
showy small cars for individual drivers.

5. Urban transport pods


What if you could jump into a moving pod and speed away to another part of the city? That is
what the Milton Keynes neighborhood about 45 miles northwest of London is planning. The
pods seat one person and move on their own over a prescribed route.

The idea is that the human operator interacts with the pod using a touchscreen in the
windshield. You swipe to select a destination, and you can read the daily news, check your email or even play a video game during the trip. There will be a built-in wireless hotspot to
connect your gadgets. The pod operates on its own, showing its current route.
(Similar pods are already being used in Masdar City in Abu Dhabi and at the London Heathrow
airport, but both are used in tightly controlled areas.)
Jon Beasley, the program director at Transport Systems Catapult who is charged with
developing the technology, told FoxNews.com the project is an "urban laboratory" where they
can test not just the autonomous pods but also how they work in a real public setting.
"We want to gain familiarity with future transport solutions in one area, to make it easier for
industrial collaborators to come together and work together," he says.

International Journal of Scientific Research Engineering & Technology (IJSRET)


Volume 1 Issue 5 pp 165-170

August 2012

www.ijsret.org

ISSN 2278 - 0882

Key Concepts and Network Architecture for 5G Mobile Technology


Sapana Singh (1), Pratap Singh (2)
(1) Information Technology, IIMT Engineering College, Meerut
(2) Electronics and Communication, RGEC, Meerut
(1) Sapanasingh8407@gmail.com, (2) pratapsinghbiet2000@yahoo.co.in

ABSTRACT
5G technologies will change the way most high-bandwidth users access their phones. With 5G pushed
over a VOIP-enabled device, people will experience a
level of call volume and data transmission never
experienced before. 5G technology is offering
services in Product Engineering, Documentation,
supporting electronic transactions (e-payments, e-transactions) etc. As the customer becomes more and
more aware of the mobile phone technology, he or she
will look for a decent package all together, including all
the advanced features a cellular phone can have. Hence
the search for new technology is always the main motive
of the leading cell phone giants to out-innovate their
competitors. The ultimate goal of 5G is to design a real
wireless world that is free from obstacles of the earlier
generations. This requires an integration of networks.
This paper presents an introduction to 5G technologies,
key concepts of 5G, features of 5G network
technology, applications, hardware and software for 5G
technologies, and network architecture for 5G wireless
technologies; the last section concludes the paper.
Keywords - 5G, WWWW, UWB, DAWN, IP, Wi-Fi

I. INTRODUCTION

5G Technology stands for 5th Generation Mobile
technology. 5G technology has changed the means to
use cell phones within very high bandwidth. 5G is a
packet switched wireless system with wide area
coverage and high throughput. 5G wireless uses OFDM
and millimeter wireless that enables data rates of 20 Mbps
and a frequency band of 2-8 GHz. 5G is going to be a
packet-based network. The 5G communication system
is envisioned as the real wireless network, capable of
supporting wireless World Wide Web (WWWW) applications in the 2010 to 2015 time frame.
There are two views of 5G systems: evolutionary and
revolutionary. In the evolutionary view the 5G (or
beyond 4G) systems will be capable of supporting
wwww allowing a highly flexible network such as a

Dynamic Adhoc Wireless Network (DAWN). In this
view advanced technologies including intelligent
antenna and flexible modulation are keys to optimize the
adhoc wireless networks. In revolutionary view 5G
systems should be an intelligent technology capable of
interconnecting the entire world without limits. An
example application could be a robot with built-in
wireless communication with artificial intelligence. Users
have never before experienced such high-value
technology. The 5G technologies include all types of
advanced features, which make 5G technology most
powerful and in huge demand in the near future. Amazing,
isn't it, that such a huge collection of technology is being
integrated into a small device. The 5G technology
provides mobile phone users with more features and
efficiency than the 1000 lunar module. A user of a mobile
phone can easily hook their 5G technology gadget up with
laptops or tablets to acquire broadband internet
connectivity. Up till now the following features of the 5G
technology have come to the surface:
- High resolution is offered by 5G for extreme mobile users; it also offers bi-directional huge bandwidth.
- 5G technology's excellent quality of service is based on policy in order to evade errors.
- It provides a transporter-class gateway that has unequalled steadiness.
- The 5G technology's billing interface is highly advanced, making it efficient and appealing.
- It offers a huge quantity of broadcasting data, in gigabytes, sustaining more than 60,000 connections.
- This technology also provides a remote diagnostics feature.
- It provides up to 25 megabytes per second connectivity and also supports virtual private
networks.

II. KEY CONCEPTS OF 5G

The key concepts discussed for 5G and beyond-4G wireless
communications are:
1) Real wireless world with no more limitation with
access and zone issues.
2) Wearable devices with AI capabilities.
3) Internet protocol version 6 (IPv6), where a visiting
care-of mobile IP address is assigned according to
location and connected network.


4) One unified global standard.


5) Pervasive networks providing ubiquitous computing:
The user can simultaneously be connected to several
wireless access technologies and seamlessly move
between them. These access technologies can be 2.5G,
3G, 4G or 5G mobile networks, Wi-Fi, WPAN or any
other future access technology. In 5G, the concept may
be further developed into multiple concurrent data
transfer paths.
6) Cognitive radio technology, also known as smart radio: allowing different radio technologies to share the
same spectrum efficiently by adaptively finding unused
spectrum and adapting the transmission scheme to the
requirements of the technologies currently sharing the
spectrum. This dynamic radio resource management is
achieved in a distributed fashion, and relies on software
defined radio.
7) High altitude stratospheric platform station (HAPS)
systems. The radio interface of 5G communication
systems is suggested in a Korean research and
development program to be based on beam division
multiple access (BDMA) and group cooperative relay
techniques.

FIG. 1. 5G Mobile Phone Concept

III. FEATURES OF 5G NETWORKS TECHNOLOGY

1) 5G technology offers high resolution for the crazy cell phone user and bi-directional large bandwidth shaping.
2) The advanced billing interfaces of 5G technology make it more attractive and effective.
3) 5G technology also provides subscriber supervision tools for fast action.
4) The high-quality services of 5G technology are based on policy to avoid errors.
5) 5G technology provides large broadcasting of data in gigabits, supporting almost 65,000 connections.
6) 5G technology offers a transporter-class gateway with unparalleled consistency.
7) The traffic statistics provided by 5G technology make it more accurate.
8) Through remote management offered by 5G technology a user can get a better and faster solution.
9) Remote diagnostics are also a great feature of 5G technology.
10) The 5G technology provides up to 25 Mbps connectivity speed.
11) The 5G technology also supports virtual private networks.
12) The new 5G technology will take all delivery services out of business prospect.
13) The uploading and downloading speeds of 5G technology touch the peak.
14) The 5G technology network offers enhanced and available connectivity just about all over the world.

IV. APPLICATIONS

How could it be?


1) If you could feel your kid's kicks when she/he is
still in her mother's womb.
2) If you could charge your mobile using your own
heartbeat.
3) If you could check your grandmother's sugar
level with your mobile.
4) If you could know the exact time of your child's
birth, down to the nanosecond.
5) If your mobile rang according to your mood.
6) If you could vote from your mobile.
7) If you could get an alert from your mobile when
someone opens your intelligent car.
8) If you could view your residence on your mobile
when someone enters.
9) If you could locate your child when she/he
unfortunately goes missing.
10) If you could pay all your bills in a single
payment with your mobile.
11) If you could sense a tsunami or earthquake before
it occurs.
12) If you could visualize all the planets and the
Universe live.


13) If you could navigate the train you are waiting for.
14) If you could get share values live.
15) If you could lock your laptop, car or bike using your mobile when you forget to do so.
16) If your mobile could share your workload.
17) If your mobile could identify the best server.
18) If your mobile could perform radio resource management.
19) If your mobile could notify you before a call drops.
20) If your mobile phone cleaned itself.
21) If you could fold your mobile as you desire.
22) If you could expand your coverage using your mobile phone.
23) If you could identify your stolen mobile within nanoseconds.
24) If you could access your office desktop from your bedroom.

V. HARDWARE AND SOFTWARE

A. 5G HARDWARE
1) UWB networks: higher bandwidth at low energy levels. This short-range radio technology is ideal for wireless personal area networks (WPANs). UWB complements existing longer-range radio technologies such as Wi-Fi, WiMAX, and cellular wide area communications that bring in data and communications from the outside world. UWB provides the needed cost-effective, power-efficient, high-bandwidth solution for relaying data from host devices to devices in the immediate area (up to 10 meters or 30 feet).
2) Bandwidth: 4000 megabits per second, which is 400 times faster than today's wireless networks.
3) Smart antennas:
a. Switched beam antennas: switched beam antennas support radio positioning via angle of arrival (AOA) information collected from nearby devices.
b. Adaptive array antennas: the use of adaptive antenna arrays is one area that shows promise for improving the capacity of wireless systems and providing improved safety through position location capabilities. These arrays can be used for interference rejection through spatial filtering, position location through direction-finding measurements, and developing improved channel models through angle-of-arrival channel sounding measurements.


4) Multiplexing: CDMA (Code Division Multiple Access). CDMA employs analog-to-digital conversion (ADC) in combination with spread spectrum technology. Audio input is first digitized into binary elements. The frequency of the transmitted signal is then made to vary according to a defined pattern (code), so it can be intercepted only by a receiver whose frequency response is programmed with the same code and therefore follows exactly along with the transmitter frequency. There are trillions of possible frequency-sequencing codes, which enhances privacy and makes cloning difficult.
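The code-controlled transmission described above is, in practice, often realised as direct-sequence spreading: each data bit is multiplied by a pseudo-random chip sequence, and only a receiver applying the same sequence can despread the signal. The following toy Python sketch (invented codes and data, not a real CDMA stack) illustrates the idea.

import numpy as np

# Toy direct-sequence spreading: a receiver with the right code recovers the bits,
# while a receiver with a different code sees only near-zero correlation.
rng = np.random.default_rng(42)
code_a = rng.choice([-1, 1], size=16)   # spreading code shared by TX and intended RX
code_b = rng.choice([-1, 1], size=16)   # another user's code

data_bits = np.array([1, -1, 1, 1, -1])                   # digitized input
tx = np.concatenate([bit * code_a for bit in data_bits])  # chip rate >> bit rate

def despread(signal, code):
    chips = signal.reshape(-1, code.size)
    return np.sign(chips @ code / code.size)              # integrate over each bit period

print(despread(tx, code_a))   # recovers [ 1. -1.  1.  1. -1.]
print(despread(tx, code_b))   # wrong code: output is essentially random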
B. 5G SOFTWARE
1) 5G will be a single unified standard covering different wireless networks, including wireless technologies (e.g., IEEE 802.11), LAN/WAN/PAN and WWWW, with unified IP and a seamless combination of broadband.
2) Software-defined radio, the packet layer, implementation of packets, encryption, flexibility, etc.

VI. CONCEPTS FOR 5G MOBILE NETWORKS

The 5G terminals will have software-defined radios and modulation schemes, as well as new error-control schemes that can be downloaded from the Internet. The development is moving towards the user terminal as the focus of the 5G mobile networks. The terminals will have access to different wireless technologies at the same time, and the terminal should be able to combine different flows from different technologies. Vertical handovers should be avoided, because they are not feasible when there are many technologies and many operators and service providers. In 5G, each network will be responsible for handling user mobility, while the terminal will make the final choice among different wireless/mobile access network providers for a given service. Such a choice will be based on open intelligent middleware in the mobile phone.

VII. 5G MOBILE NETWORK ARCHITECTURE

The figure below shows the system model that proposes a design of the network architecture for 5G mobile systems, which is an all-IP based model for wireless and mobile network interoperability. The system consists of a user terminal (which has a crucial role in the new architecture) and a number of independent, autonomous radio access technologies. Within the terminal, each of the radio access technologies is seen as an IP link to the outside Internet world. However, there should be a different radio interface for each radio access technology (RAT) in the mobile terminal. For example, if we want to have access to four different RATs, we need to have four different access-specific interfaces in the mobile terminal, and to have all of them active at the same time, in order for this architecture to be functional. Routing of packets to applications and servers somewhere on the Internet should be carried out in accordance with the established policies of the user.

Fig. 2
Application connections are realized between clients and servers on the Internet via sockets. Internet sockets are endpoints for data communication flows. Each socket is a unique combination of a local IP address and an appropriate local transport communication port, the target IP address and the appropriate target communication port, and the type of transport protocol.
Considering this, establishing end-to-end communication between the client and server over the Internet protocol requires opening the appropriate Internet socket, uniquely determined by the client application and the server. This means that, in the case of interoperability between heterogeneous networks and for vertical handover between the respective radio technologies, the local IP address and destination IP address should be fixed and unchanged. Fixing these two parameters should ensure handover transparency of the end-to-end Internet connection when there is a mobile user on at least one end of such a connection. In order to preserve the proper ordering of the packets and to reduce or prevent packet losses, routing to the target destination and back should be unique and use the same path. Each radio access technology that is available to the user for achieving connectivity with the relevant radio access is represented by an appropriate IP interface. Each IP interface in the terminal is characterized by its IP address, net mask and the parameters associated with the routing of IP packets across the network.
In a regular inter-system handover, the change of access technology (i.e., vertical handover) would mean changing the local IP address. A change of any of the parameters of the socket then means a change of the socket, that is, closing the socket and opening a new one, which means ending the connection and starting a new one. This approach is not flexible, and it is how today's Internet communication works. In order to overcome this deficiency, we propose a new level that will take care of abstracting the network access technologies towards the higher layers of the protocol stack. This layer is crucial in the new architecture.
To enable application transparency and to control or directly route packets through the most appropriate radio access technology, the proposed architecture introduces a control system into the functional architecture of the networks, which works in complete coordination with the user terminal and provides network abstraction functions and routing of packets based on defined policies. At the same time, this control system is an essential element for determining the quality of service of each transmission technology. It sits on the Internet side of the proposed architecture, and as such represents an ideal system for testing the qualitative characteristics of the access technologies, as well as for obtaining a realistic picture of the quality that can be expected from the user's applications towards a given server on the Internet (or peer). The protocol setup of the new levels within the existing protocol stack, which forms the proposed architecture, is presented in Figure (Protocol Layout for the Elements of the Proposed Architecture).
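To make the socket argument concrete, the Python sketch below (with invented addresses) treats a connection as the usual five-tuple; changing the local IP address during a vertical handover produces a different identity, which is exactly the problem the proposed abstraction level is meant to hide.

from typing import NamedTuple

# A connection is identified by its five-tuple; change the local IP (as a plain
# vertical handover would) and the old connection can no longer be matched.
class SocketId(NamedTuple):
    local_ip: str
    local_port: int
    remote_ip: str
    remote_port: int
    protocol: str

before_handover = SocketId("10.0.0.5", 53412, "203.0.113.7", 443, "TCP")   # via Wi-Fi
after_handover = before_handover._replace(local_ip="100.64.12.9")          # via LTE

print(before_handover == after_handover)   # False: the socket, and hence the connection, is new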


The network abstraction level would be provided by creating IP tunnels over the IP interfaces obtained by connecting the terminal via the access technologies available to it (i.e., to the mobile user). In fact, the tunnels would be established between the user terminal and a control system, named here the Policy Router, which performs routing based on given policies. In this way the client side will create an appropriate number of tunnels, corresponding to the number of radio access technologies, and the client will see only a single local IP address, which will be used to form the sockets for Internet communication between client applications and Internet servers. The way IP packets are routed through the tunnels, that is, the choice of the right tunnel, is governed by policies whose rules are exchanged via the virtual network-level protocol. In this way we achieve the required abstraction of the network towards the client applications at the mobile terminal. The process of establishing a tunnel to the Policy Router, for routing based on the policies, is carried out immediately after the establishment of IP connectivity across the radio access technology, and it is initiated from the mobile terminal by the virtual network-level protocol. Establishing tunnel connections, as well as maintaining them, represents the basic functionality of the virtual network level (or network abstraction level).
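A minimal sketch of such policy-based tunnel selection is given below in Python; the traffic classes, tunnel names and fallback rule are illustrative assumptions, not the paper's actual rule format.

# One tunnel per radio access technology; a policy maps traffic classes to a
# preferred tunnel with a fallback, while the application's local IP never changes.
tunnels = {"wifi": {"up": True}, "lte": {"up": True}, "wimax": {"up": False}}

policies = [
    ("voip", "lte", "wifi"),          # (traffic class, preferred tunnel, fallback)
    ("video", "wifi", "lte"),
    ("best_effort", "wifi", "lte"),
]

def select_tunnel(traffic_class):
    for cls, preferred, fallback in policies:
        if cls == traffic_class:
            return preferred if tunnels[preferred]["up"] else fallback
    return "lte"                      # default route when no rule matches

print(select_tunnel("video"))         # wifi
tunnels["wifi"]["up"] = False         # radio lost: policy falls back transparently
print(select_tunnel("video"))         # lte, with the client sockets left untouched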

VIII. PROGNOSIS

If a 5G family of standards were to be implemented, it would likely be around the year 2020, according to some sources. A new mobile generation has appeared every 10th year since the first 1G system (NMT) was introduced in 1981, including the 2G (GSM) system that started to roll out in 1992 and 3G (W-CDMA/FOMA), which appeared in 2001. The development of the 2G (GSM) and 3G (IMT-2000 and UMTS) standards took about 10 years from the official start of the R&D projects, and development of 4G systems started in 2001 or 2002. However, no official 5G development projects have been launched so far.
From the user's point of view, previous mobile generations have implied a substantial increase in peak bit rate (i.e., the physical-layer net bit rate for short-distance communication). However, no source suggests 5G peak download and upload rates of more than the 1 Gbps offered by the ITU-R's definition of 4G systems [2]. If 5G appears and reflects these prognoses, the major difference between 4G and 5G techniques from a user's point of view must be something other than increased maximum throughput; for example, lower battery consumption, lower outage probability (better coverage), high bit rates in larger portions of the coverage area, cheaper or no traffic fees due to low infrastructure deployment costs, or higher aggregate capacity for many simultaneous users (i.e., higher system-level spectral efficiency).
IX. CONCLUSION
In this paper we have proposed a 5G mobile phone concept and architecture, which is the main contribution of the paper. The 5G mobile phone is designed as an open platform on different layers, from the physical layer up to the application layer. Currently, the ongoing work is on the modules that shall provide the best QoS and lowest cost for a given service using one or more wireless technologies at the same time from the 5G mobile phone. A new revolution in 5G technology is about to begin, because 5G technology is going to give tough competition to normal computers and laptops, whose market value will be affected. There have been many improvements from 1G, 2G, 3G, and 4G to 5G in the world of telecommunications. The coming 5G technology is expected to be available on the market at affordable rates, with higher peak capacity and much more reliability than its preceding technologies.
X. FUTURE ENHANCEMENT

5G network technology will open a new era in mobile communication technology. The 5G mobile phones will have access to different wireless technologies at the same time, and the terminal should be able to combine different flows from different technologies. 5G technology offers high resolution for demanding cell phone users. We will be able to watch TV channels in HD clarity on our mobile phones without any interruption. The 5G mobile phone will be like a tablet PC. Many mobile embedded technologies will evolve.

REFERENCES
[1] Shakil Akhtar, "Evolution of Technologies, Standards and Deployment of 2G-5G Networks", Clayton State University, USA, 2009.
[2] Santhi, K. R., Srivastava, V. K., and SenthilKumaran, G. (Oct. 2003), "Goals of True Broadband's Wireless Next Wave (4G-5G)", retrieved June 11, 2005, from the IEEE Xplore database, Wallance Library.
[3] Adachi, F. (Oct. 2002), "Evolution Towards Broadband Wireless Systems", retrieved June 11, 2005, from the IEEE Xplore database, Wallance Library.
[4] Yuh-Min Tseng, Chou-Chen Yang and Jiann-Haur Su, "An Efficient Authentication Protocol for Integrating WLAN and Cellular Networks", project supported by the National Science Council under contract no. NSC92-2213-E-018-014.
[5] Xichun Li, Abdullah Gani, Lina Yang, Omar Zakaria and Badrul Jumaat, "Mix-Bandwidth Data Path Design for 5G Real Wireless World", Proceedings of the WSEAS 13th International Conference on Multimedia and Communication, Crete Island, Greece, 21-23 July 2008, pp. 216-221.
[6] Toni Janevski, "5G Mobile Phone Concept", 6th IEEE Consumer Communications and Networking Conference, 2009.
[7] Aleksandar Tudzarov and Toni Janevski, "Functional Architecture for 5G Mobile Network", International Journal of Advanced Science and Technology, Vol. 32, July 2011.
[8] Bria, F. Gessler, O. Queseth, R. Stridth, M. Unbehaun, J. Wu, J. Zendler, "4th-Generation Wireless Infrastructures: Scenarios and Research Challenges", IEEE Personal Communications, Vol. 8.
[9] Willie W. Lu, "An Open Baseband Processing Architecture for Future Mobile Terminals Design", IEEE Wireless Communications, April 2008.
[10] IEEE, "Handover Schemes in Satellite Networks: State-of-the-Art and Future Research Directions", NASA, 2006, Volume 8(4).


Artūras Gulbinas RTE-3

Enter the matrix


Audi's latest LED technology enables
control of individual diodes to deliver the
best illumination under all conditions
When Audi introduced full LED headlights on its R8 sports car in 2008, it was an industry first. They were not only brighter than halogen and xenon lights, but also more energy efficient. Now the technology has cascaded down and is used in vehicles in the B-segment.
Audi's engineers are continuing to innovate and have introduced matrix LED headlights on the A8 luxury sedan, developed in conjunction with Tier One supplier Hella. These differ from traditional light sources by splitting the high beam into five reflectors, each containing five LEDs that are software controlled. This removes the need for a mechanical beam deflector.
Instead, the technology dims sections of the 25 LEDs depending on the movement of oncoming traffic. That means the high beam can remain on, keeping the road ahead lit without dazzling drivers in other vehicles.
Uwe Pseler, Audi's development engineer for light and visibility, says: "The matrix LED system means we have greater control over the light distribution and can put light between vehicles." He also says that matrix LEDs are closer to Audi's definition of the best light source: "If you have light everywhere and don't have to switch them off, then you have the perfect light."

Linked to the A8's forward-facing camera, which can scan up to 300m ahead of the vehicle, single LEDs can be switched off within 200ms. Times for LEDs to switch back on once traffic has moved past depend on the drive select mode chosen (sport or comfort, for example) and range between 900ms and 400ms.
Light quality is the same as standard LED headlights, with a colour temperature of 5,500K; xenon systems are only 3,000K. Matrix is also slightly more efficient because the lights are switched off when not required, saving power. The light beam angle overall is similar to a conventional halogen, xenon or LED high beam at +/-25 degrees, but individual LEDs differ between 2 degrees in the middle area and up to 8 degrees in the outer area.
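As a rough illustration of how such masking might work, the Python sketch below maps detected vehicle bearings onto LED segments and returns the segments to dim; the equal segment spacing and the function names are my own simplifications, not Audi's implementation.

# 25 LEDs cover roughly -25..+25 degrees; dim only the segments that overlap a
# detected oncoming vehicle. Equal spacing is a simplification of the real 2-8 degree layout.
def build_segments(num_leds=25, half_angle=25.0):
    width = 2 * half_angle / num_leds
    return [(-half_angle + i * width, -half_angle + (i + 1) * width) for i in range(num_leds)]

def leds_to_dim(segments, vehicle_bearings, margin=1.0):
    dim = set()
    for bearing in vehicle_bearings:
        for i, (start, end) in enumerate(segments):
            if start - margin <= bearing <= end + margin:
                dim.add(i)
    return sorted(dim)

segments = build_segments()
print(leds_to_dim(segments, vehicle_bearings=[-3.5, 4.0]))   # only these LEDs switch off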
The matrix lights were complex to integrate. Input was required from far more engineers than on previous lighting systems. "There were a lot of people involved: software engineers, camera people, lighting designers and test engineers, because it's not only about lighting but safety too," says Pseler.
While illuminating the route is important, the system can also highlight pedestrians and large animals at the side of the road. Rather than using data from the camera in the windshield, information is taken from the A8's night-vision camera, a far-infrared system supplied by Autoliv and located in the grille, which detects heat emitted by objects.
Stopping the headlight cluster from fogging is important to maintain the functionality. Pseler and his colleagues have integrated active as well as passive ventilation, as used in standard LED headlights. "We have a fan inside, so it circulates the hot air, defogging the lens as quickly as possible. And when driving, new air is directed inside," he says.

Pseler says that packaging 25 LEDs into the headlight and integrating all the functions were the greatest technical challenges, but adds that legislation is still a stumbling block in some markets. "It isn't possible to use the system in North America at the moment," he says. "The law's definition of a headlight means it needs to be a single light source, and we have 25." But he's confident that it will soon be adopted. Discussions with the authorities have already begun.
"It's a safety function, and when they see safer driving scenarios it'll be difficult for them to say they aren't going to allow it," he says.
Audi's engineers continue to innovate. The firm is developing laser-diode technology for series production, a technology that BMW will introduce in its i8 plug-in hybrid sports car this year. But beyond that, the next stage in front lighting could come from organic light-emitting diodes, which offer further advantages.
Pseler says: "It's extremely slim so you can package it in different ways and it's also a very uniform light. But development is difficult because, unlike the light bulb in your home, it has to continue functioning in extreme conditions. If it's -25°C outside it still has to work."
Laser technology is just at the beginning of the development cycle but we are fascinated by it. For an optical engineer, the smaller the light source, the easier you can project it in a dedicated direction, and that was the idea behind laser diodes. We could create a base light with high intensity when travelling at speed.
It's just another light source module within the matrix headlight but, whereas a normal high beam has a range of 250m, with this we can reach 500m.
We worked with Osram, and it'll be used first in the R18 Le Mans race car in June, but we developed it for passenger vehicles. A race-car application is interesting for us because it's a reduced lifetime test, and if it works there it'll work in series production vehicles.
There are differences, though. The R18 uses a passive cooling system because the cars are continuously driving, so there's enough air movement to cool it. In a passenger vehicle it will be combined with the LED module, which has a fan.
It's unlikely that laser diodes would be used as the sole light source at the moment because they're too weak and a laser is not as efficient as an LED. It'll take 10 years before they can compete there. It is scalable, all semiconductors are, but it needs to become cheaper. That will only happen if volumes increase.
The interesting question is how quickly it will be adopted. I can't say that in 10 years every vehicle will have lasers; maybe it will take 20 years. Using it in other applications will help. We are thinking about using laser technology in the interior, because getting the beam into a light guide is quite easy, so it makes it an interesting application. At next year's CES show you will see a car with it.
Conjunction - The action or an instance of two or more events or things occurring at the same point in
time or space:
Cascade - Pass (something) on to a succession of others:
Distribution - The way in which something is shared out among a group or spread over an area:
Conventional - Based on or in accordance with what is generally done or believed:
Legislation - The process of making or enacting laws:
Stumbling - To proceed unsteadily or falteringly; flounder
Sole - one and only.
Scalable - Able to be changed in size or scale.
Volumes - An amount or quantity of something, especially when great:

Vytenio Karaliaus Rt-3


The release of the Samsung Galaxy Gear smart watch in September 2013 has taken excitement surrounding
wearable technology to fever pitch in the consumer electronics market. The watch is designed to interact with
Samsung's other smartphone and tablet products, and aims to provide a more convenient way of sending
messages, taking pictures and checking emails.
Other firms are keen to capitalise on this apparent opportunity to make consumers' lifestyles more convenient
with a new range of wearable devices. Sony and Apple are among the other firms rumoured to be planning
smart watch products.
However, to what extent can these smart watches be considered truly 'wearable'? Many of the products
currently coming to the market are simply trying to offer smartphone capabilities in a much more limited form
factor. And, as the display for text messages, emails and other notifications cannot wrap fully around the wrist,
what can actually be delivered by the smaller, rigid screens available is limited. Therefore, current offerings
are not giving a true picture of the potential of this emerging market, and are not doing the term 'wearable'
justice.
From watches to clothing
The wearable electronics revolution logically progresses to integration directly into the clothing we wear every
day, and clothing certainly plays a part in the future of wearable electronics. For instance, gowns, vests or smart
patches for medical patient monitoring of vital signs; or fashion, where integrated electronics could
communicate, help the wearer stand out, or allow for dynamic, changing styles. All of the above is being
explored today by technology start-ups and multinational businesses alike.
There are already a number of companies moving these electronics directly into clothing. The world of fashion
is taking a lead in experimenting and generating publicity in the process. Smart fashion has caught the eye of
mainstream media much more over the last year. Pop group the Black Eyed Peas recently completed a world
tour in which lead singer Fergie wore a dress covered in OLED lighting panels, supplied by Philips.
Singer Katy Perry is another to sport electronic clothing, wearing a colour-changing LED dress to an awards
ceremony. The dress was designed by CuteCircuit, a UK company established to commercialise 'smart textile'
concepts. CuteCircuit has been designing dresses featuring wearable technology for a number of years. The
company's best-known designs include a dress studded with LEDs that allow it to change colour and reflect
the mood of the wearer; and a 'Twitter dress', which links to a mobile device and is able to display status updates
from social media sites, using integrated LEDs woven into the fabric.
Wearables for wellbeing
Health and wellbeing applications in particular lend themselves to electronics in garments. There is already a
growing market for fitness-related devices, such as wrist-worn monitors of biometric signals like the Fitbit.
These devices allow the wearer to track activity such as steps taken and the restfulness of sleep, building
towards an overall picture of their health. With consumers already showing an interest in such technology,
more sophisticated versions, integrated directly and inconspicuously in clothing or accessories, could open the
market further.
Big-name firms see these opportunities in wearables. Adidas was one of the first companies to make a move
into wearables, acquiring electronics firm Textronics in 2008. The sportswear firm then launched the miCoach
system: a clip-on control unit that collects data and provides audible coaching, with the option to couple with
a complementary range of sports bras and vests to monitor heart rate as part of the data collection.
Many companies are looking to establish the use of wellbeing monitoring wearables in athletics first. Athletics
provides a scenario where low-volume, premium devices will gain the exposure needed to capture the attention
of High Street sportswear manufacturers, leading to higher volume markets.
For example, Massachusetts-based start-up mc10 was formed in 2008 to commercialise conformable
electronics technology. The company's approach takes high-performance semiconductors and integrates them
into elastic substrates, like silicones and plastics, linked up by proprietary interconnect and packaging
technology.
The US start-up has also been working with sports clothing manufacturer Reebok, to develop a helmet that can
be worn in sporting events. The device detects blows to the head and feeds back to the sidelines regarding
impact, which coaches and medical staff can then use to gauge whether a substitution or treatment is necessary.
Elyse Kabinoff, marketing and communications manager at mc10, comments: 'A heavy impact to the head can
cause concussion, if not more serious damage. mc10's technology will allow team members to see whether an
impact has occurred and how heavy it was, and help them decide whether a player should be rested, and
checked. The helmet can be used in all sports.'

Developments in athletics provide a good indication of the opportunities for wellbeing-related wearables and,
with big-name sportswear brands involved, future commercial impact could be significant.
With access to conformal, flexible electronics, wellbeing applications could be more readily achieved. Clip-on
and rigid devices could be replaced by wraparound, discreet wristbands taking the trend for Fitbit-style devices
further. Product developers would have some of the core components needed to produce a range of sportswear
with seamlessly integrated sensors. Entire ranges of sportswear could gather a host of complementary data,
synced in one easy-to-use online interface.
The applications for such technology in the medical field are also incredibly diverse. Spanish firm Nuubo has
trialled its wearable electronic vests in hospitals, to provide comprehensive and up-to-date information on
patients. The Madrid-based company's shirts have been trialled for European and US market certification. The
Nuubo system centres on medical quality electrocardiogram, heart rate and activity monitors. Beyond medical
facilities, such sensing clothing could combine monitors of a patient's health with GPS to locate those
recovering at home and alert healthcare providers in case of emergency.
The US Army Medical Research and Material Command Office has led a research project into a wear-and-forget sensing system woven into soldiers' underwear. Gel-free sensors form an electronic network within the fabric
to monitor heart rate, respiration, activity, temperature and posture, and relay it to a central system. The
technology would allow commanders to identify casualties, monitor combat, train and identify healthy soldiers
for missions. The garments are comfortable, allowing soldiers to carry out missions without interference.
Novelty
Electronics in garments also have potential that reaches beyond these markets, expanding into leisure and
novelty consumer items. In 2012, UK mobile phone provider Orange backed a T-shirt design which used
piezoelectrics to provide charge to a mobile phone. The clothing would use sound waves, the idea being that
wearers could charge mobile devices at a festival or anywhere without a plug-in point. While the panel would
not be washable, it would be removable, allowing for washing.
Similar to the Orange T-shirt, the Electronic Drum Machine shirt from ThinkGeek includes different pads that
will allow for nine different drum kits to be played. Users are able to record up to three minutes of sound and
play it back through an amp, which can be connected to the shirt via a jack plug. The electronics are removable,
allowing easy washing of the shirt.
Wearables toolkit
To fully realise the potential of wearable electronics, a toolkit of robust, flexible components will be needed.
Current approaches, such as a rigid device fitted into a wristband, could progress to fully integrated, conformal
products. Thanks to the progress being made by technology developers, various elements of this flexible electronics toolkit are now close to market, as developers partner with high-profile brand owners
and manufacturers on product designs.
One company developing such flexible technology is Plastic Logic. Once known for its work in the early e-reader market, the UK company has successfully refocused on being a technology provider that works with
partners to enable multiple end applications. The company has demonstrated an array of such applications for
robust, flexible displays, in everything from smartphone accessories to large-area digital signage.
The company has a production facility in Dresden, Germany, delivering flexible displays using organic thin-film transistors (OTFTs) at high yields. These backplanes have been produced containing 1.2 million
transistors. Displays have been tested extensively for bendability, among other relevant features for robust and
potentially wearable electronics.
Based on this expertise, Plastic Logic is now working with big-name firms in consumer electronics and other
fields on a number of commercial demonstrator projects for wearable devices. The company has already
demonstrated the potential of genuinely flexible displays in a device that included colour layers and a
completely flexible substrate wrapped around a wrist.
Many other exciting developments are underway to realise the flexible electronics needed for truly wearable
devices. The first generation of smart watches may fall short as a demonstration of what wearable electronics
can achieve, but they have proved a consumer appetite for wearable electronics. With some commercial
launches on the horizon, a revolution of truly wearable electronics could soon be underway.
Flexible electronics would help transform accessories like smart watches into conformal, convenient, all-in-one devices.

Audrius Kniuras RT-3

Why Replacing The Fibre Optic Patch Lead


Often Fixes Network Problems
30TH DECEMBER 2014 BY GREG FERRO

When installing fibre optic cable, care must be taken to ensure that the cable is not bent, stretched or deformed. The best case is that the fibre core breaks and the cable becomes obviously faulty; the worst case is that the fibre optic core is deformed or damaged and causes signal distortion that results in intermittent faults.

Two Types of Fibre Optic Cable


In data networking, two types of cable are in common use: single-mode (SMF) and multimode (MMF). The core is embedded in a layer of cladding that helps to protect and strengthen the cable. The cladding also contains a reflective layer that is critical to laser propagation.

Fibre Breakage
The glass core in a fibre optic cable is fragile. It is slightly thicker than a human hair, made of glass for single-mode and plastic for multi-mode. Manufacturers have been able to design and manufacture the core material to be somewhat elastic and resilient to bending. Single-mode uses a special type of glass that is extruded into a solid medium to protect it. MMF is made from glass but, being thicker (at 125 µm compared to 9 µm), is more robust. Because of this, SMF is more sensitive to breakage than MMF.
When the core is stretched or bent beyond a certain point, it will physically break into two parts. In this case, two outcomes are possible. The best case is that the two pieces of core are not physically aligned and no laser light will propagate.
A less common case is that the two cores will be partially aligned after the break and pass a partial signal. This may or may not work. Intermittent operation may happen as the cable expands and shrinks with temperature, vibration or movement.

A fibre optic cable relies on total internal reflection, and this scenario still supports that, but a substantial amount of signal power can be lost at this interface, and reflections can cause further issues.
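The power lost at such an interface is normally expressed in decibels. A quick Python sketch of the arithmetic, with an invented link budget figure, shows why a lossy break can leave so little margin that the link works only intermittently.

import math

# Loss in dB between input and output optical power, checked against an assumed link budget.
def loss_db(p_in_mw, p_out_mw):
    return 10 * math.log10(p_in_mw / p_out_mw)

budget_db = 8.0                                         # assumed loss the link can tolerate
break_and_connector_loss = loss_db(1.0, 0.25)           # 75% of the power lost at the damaged lead
print(round(break_and_connector_loss, 1))               # ~6.0 dB
print(round(budget_db - break_and_connector_loss, 1))   # thin margin -> intermittent faults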

Cracking of the Fibre Core


It is also possible that the fibre core might be damaged rather than broken. For SMF, the glass might only crack and cause an imperfection in the medium that reduces signal propagation and causes reflections. For MMF, the core is more likely to be damaged by the flexing and cause power loss. In the case of graded-index fibre, this is quite damaging to the signal, as it will be badly distorted.

Best and Worst Cases

When working with fibre optic patch leads, it is common for people to trap them in a door or stretch them by pulling on them. While patch leads are designed to be more flexible than the cabling used in risers, they are still susceptible to breakage in the best case.
Best case means that the cable doesn't work. Worst case is when the fibre core is partially damaged and likely to cause intermittent operation.
Intermittent is much worse than broken. Another reason for replacing cables is that the fibre connectors are dirty, scratched or faulty, and new cables will improve the power level received.
In my opinion, that's why replacing the cable often fixes intermittent problems in a network. It is very hard to test or confirm a faulty fibre optic patch lead. A faulty patch lead can cause waveform deformation through propagation distortion, or power loss through a cracked or misaligned core. And depending on temperature fluctuations, ambient vibration from fans and coolers, or simply being moved, the network might have problems.

Words:
1. Intermittent - not happening regularly or continuously; stopping and starting repeatedly or
with periods in between:
a) intermittent rain
2. Cladding - material that covers the surface of something and protects it:
a) The pipes froze because the cladding had fallen off.
3. Embed - to fix something firmly into a substance.
4. Propagation - to create (an effect) at a distance, as by electromagnetic waves, compression
waves, etc., traveling through space or a physical medium; transmit:
a) to propagate sound.
5. Extrude - to form something by forcing or pushing it out, especially through a small opening.
6. Susceptible - easily influenced or harmed by something.
7. Ambient - (especially of environmental conditions) existing in the surrounding area.
8. Attenuation - to make something smaller, thinner, or weaker:
a) Radiation from the sun is attenuated by the Earth's atmosphere.
9. Mezzanine - a small extra floor between one floor of a building and the next floor up.
10. Congested - too blocked or crowded and causing difficulties.
11. Traverse - to move or travel through an area.
12. Dark fiber - optical fiber infrastructure that is currently in place but is not being used. Often
times companies lay more lines than what's needed in order to curb costs of having to do it
again and again.

Google could bypass its 'Fiber' for ultra-fast wireless


By Ryan Daws
16 October 2014, 16:04 p.m.

If there's a company whose best interest is to get as many people online as possible, it's
Google. The web giant knows this more than anyone, and has already launched its well-received high-speed ISP service in select US-based locations where demand is high.
Google appears to be testing new technology which will bypass the physical fiber cables
required for their high-end internet service in favour of an ultra-fast wireless version,
according to telecommunication experts who have scrutinised the company's regulatory
filings.
On Monday, Google filed an application with the U.S. Federal Communications
Commission asking the agency for permission to conduct tests in California across
different wireless spectrums. The filing is heavily-redacted, but it requests the use of
rarely-used millimeter-wave frequency capable of transmitting large amounts of data.
We assume that Google plans to work alongside Verizon for the test, after a report from
The Information earlier this year which detailed Google's ambitions to bring wireless
service to areas where it offers fiber optic broadband at the moment in partnership with the
telecoms giant.
If Google finds success with its tests, it could be a much cheaper option for rolling out a high-speed network than digging up roads and laying cables to each individual home. Not only would it be more cost-effective, but it could enable Google to bring its services to far more places in a much shorter time.
The tests will begin on November 13th, all going to plan, and will include three sites in the San Francisco Bay Area, including one in San Mateo, and two locations a half-mile apart which appear to be on Google's Mountain View-based campus. The test will use radio transmitters operating in the 5.8 GHz frequency, the 24.2 GHz frequency, and in the millimeter wave bands of 71-76 GHz and 81-86 GHz.
Of course Google, like most innovative companies, will continue to test various
technologies whether there is a serious plan to make it commercially-available or not.
Google does have a better reputation of projects making it out of its labs than most
companies, however, with Glass and even Android itself starting off life in 'Google X'.
What makes the theory that Google is serious about this test even more credible is that the FCC application was signed by none other than Craig Barratt, the executive in charge of Google's fiber plans. Make of that what you will.

Kaunas University of Technology


Student: Modestas Mekauskas RTE-3 gr.
Lecturer: Evelina Jaleniauskienė

Intelligent speed adaptation


Intelligent Speed Adaptation (ISA), also known as Intelligent Speed Assistance, Speed
Alerting, and Intelligent Speed Authority, is any system that constantly monitors vehicle speed
and the local speed limit on a road and implements an action when the vehicle is detected to be
exceeding the speed limit. This can be done through an advisory system, where the driver is
warned, or through an intervention system where the driving systems of the vehicle are
controlled automatically to reduce the vehicle's speed.
Intelligent speed adaptation uses information about the road on which the vehicle travels to
make decisions about what the correct speed should be. This information can be obtained
through the use of a digital map incorporating roadway coordinates as well as data on the speed
zoning for that roadway at that location, through general speed zoning information for a defined
geographical area (e.g., an urban area which has a single defined speed limit), or through feature
recognition technology that detects and interprets speed limit signage. ISA systems are designed
to detect and alert a driver when a vehicle has entered a new speed zone, when variable speed
zones are in force (e.g., variable speed limits in school zones that apply at certain times of the
day and only on certain days), and when temporary speed zones are imposed (such as speed limit
changes in adverse weather or during traffic congestion, at accident scenes, or near roadworks).
Many ISA systems will also provide information about locations where hazards may occur (e.g.,
in high pedestrian movement areas, railway level crossings or railroad grade crossings, schools,
hospitals, etc.) or where enforcement action is indicated (e.g., speed camera and red light
camera locations). The purpose of ISA is to assist the driver in keeping to the lawful speed limit
at all times, particularly as they pass through different speed zones. This is particularly useful
when drivers are in unfamiliar areas or when they pass through areas where variable speed limits
are used.
Research has found that, in urban areas, the risk of a casualty crash is doubled for each
5 km/h over the limit. So travelling at 70 km/h in a 60 km/h zone quadruples the risk of a crash
in which someone is hospitalised. As a result, it is estimated that about 10% of casualties could
be prevented if the large group of motorists who routinely travel at up to 10 km/h over the limit
were encouraged to obey the speed limits. About 20% of casualties could be prevented if all
vehicles complied with the speed limits. Savings in fatal crashes would be larger.
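The quoted figures follow from a simple doubling rule, relative risk = 2 to the power of (speed over the limit / 5 km/h). A two-line Python check reproduces them (this is only the article's rule of thumb, not a validated crash model).

# Relative casualty-crash risk implied by "doubles for each 5 km/h over the limit".
def relative_risk(speed_kmh, limit_kmh):
    return 2 ** (max(0.0, speed_kmh - limit_kmh) / 5.0)

print(relative_risk(65, 60))   # 2.0 -> double the risk
print(relative_risk(70, 60))   # 4.0 -> quadruple, as quoted above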
"Minor" speeding therefore makes up a large proportion of preventable road trauma. It is
difficult for enforcement methods alone to have an effect on this minor speeding. An added
problem is that even motorists who want to obey the speed limits (to keep their life, license or
livelihood) have difficulty doing so in modern cars on city roads. This is where an ISA system
comes into its own.
Types of ISA (Active/Passive)
The two types of ISA systems, passive and active, differ in that passive systems simply warn
the driver of the vehicle travelling at a speed in excess of the speed limit, while active systems

intervene and automatically correct the vehicle's speed to conform with the speed limit. Passive
systems are generally driver advisory systems: they alert the driver to the fact that they are
speeding, provide information as to the speed limit, and allow the driver to make a choice on
what action should be taken. These systems usually display visual or auditory cues, such as
auditory and visual warnings and may include tactile cues such as a vibration of the accelerator
pedal. Some passive ISA technology trials have used vehicles modified to provide haptic
feedback, wherein the accelerator pedal becomes more resistant to movement (i.e., harder to
push down) when the vehicle travels over the speed limit. Active ISA systems actually reduce or
limit the vehicle's speed automatically by manipulating the engine and/or braking systems. Most
active ISA systems provide an override system so that the driver can disable the ISA, if
necessary, on a temporary basis. Such a feature is thought to enhance both acceptance and safety,
but leaves a significant amount of speeding unchecked.
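The passive/active distinction, including the driver override, can be summarised in a few lines of Python; the mode names and messages below are illustrative only.

# Simplified ISA decision logic: passive systems warn, active systems intervene,
# and most active systems still honour a temporary driver override.
def isa_action(speed, limit, mode="passive", override=False):
    if speed <= limit:
        return "no action"
    if mode == "passive":
        return "warn driver (visual, auditory or haptic cue)"
    if override:
        return "warn only (driver override active)"
    return f"limit speed to {limit} km/h via engine/brake control"

print(isa_action(68, 60, mode="passive"))
print(isa_action(68, 60, mode="active"))
print(isa_action(68, 60, mode="active", override=True))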
An often unrecognised feature of both active and passive ISA systems is that they can serve as
on-board vehicle data recorders, retaining information about vehicle location and performance
for later checking and fleet management purposes.
Speed and location determining/verification technology
There are four types of technology currently available for determining local speed limits on a
road and determining the speed of the vehicle. These are:
GPS
Radio Beacons
Optical recognition
Dead Reckoning
GPS is based on a network of satellites that constantly transmit radio signals. GPS receivers
pick up these transmissions and compare the signals from several satellites in order to pinpoint
the receiver's location to within a few meters. This is done by comparing the time at which the
signal was sent from the satellite to when it was picked up by the receiver. Because the orbital
paths of the satellites are known very accurately, the receiver can perform a calculation based on
its distance to several of the orbiting satellites and therefore obtain its position. There are
currently 24 satellites making up the GPS network, and their orbits are configured so that a
minimum of five satellites are available at any one time for terrestrial users. Four is the minimum number of satellites required to determine a precise three-dimensional position.
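The position fix itself is a least-squares problem: find the point whose distances to the known satellite positions match the measured ranges. The toy Python sketch below uses invented coordinates and ignores the receiver clock bias (which is what the fourth satellite resolves in practice), but shows the principle.

import numpy as np
from scipy.optimize import least_squares

# Toy multilateration: recover a receiver position from ranges to satellites of
# known position. Coordinates are invented and given in kilometres.
sats = np.array([
    [15600.0, 7540.0, 20140.0],
    [18760.0, 2750.0, 18610.0],
    [17610.0, 14630.0, 13480.0],
    [19170.0, 610.0, 18390.0],
])
true_pos = np.array([1000.0, 2000.0, 3000.0])
ranges = np.linalg.norm(sats - true_pos, axis=1)   # ideal, noise-free ranges

def residuals(pos):
    return np.linalg.norm(sats - pos, axis=1) - ranges

fix = least_squares(residuals, x0=np.zeros(3))
print(np.round(fix.x, 1))                          # expected to be close to true_pos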
The popularity of GPS in current ISA and in car navigation systems may give the impression that
GPS is flawless, but this is not the case. GPS is subject to a number of fundamental problems.
Many of these problems relate to the accuracy of the determined position. The receiver still gets
the signal from the satellites, but due to satellite ephemeris uncertainties, propagation errors, timing errors, multipath signal propagation, and receiver noise, the position given can be
inaccurate. Usually these inaccuracies are small and range from five to ten meters for most
systems, but they can be up to hundreds of meters. In most situations this may not matter, but
these inaccuracies can be important in circumstances where a high speed road is located
immediately adjacent to roads with much lower speed limits (e.g., residential streets).
Furthermore, because GPS relies upon a signal transmitted from a satellite in orbit, it does not
function when the receiver is underground or in a tunnel, and the signal can become weak if tall
buildings, trees, or heavy clouds come between the receiver and the satellites. Current

improvements being made to the GPS satellite network will help to increase GPS reliability and
accuracy in the future but will not completely overcome the fundamental shortcomings of GPS.
In order to be used for ISA systems, GPS must be linked to a detailed digital map containing
information such as local speed limits and the location of known variable speed zones, e.g.,
schools. Advanced digital maps have the capacity for real-time updating to include information
on areas where speed limits should be reduced due to adverse weather conditions or around
accident scenes and roadworks.
Roadside radio beacons, or bollards, work by transmitting data to a receiver in the car. The
beacons constantly transmit data that the car-mounted receiver picks up as it passes each beacon.
This data could include local speed limits, school zones, variable speed limits, or traffic
warnings. If sufficient numbers of beacons were used and were placed at regular intervals, they
could calculate vehicle speed based on how many beacons the vehicle passed per second.
Beacons could be placed in/on speed signs, telegraph poles, other roadside fixtures, or in the road
itself. Mobile beacons could be deployed in order to override fixed beacons for use around
accident scenes, during poor weather, or during special events. Beacons could be linked to a
main computer so that quick changes could be made.
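With a known, regular beacon spacing the speed estimate is trivial, as the short Python sketch below shows (the 25 m spacing is an assumption for illustration).

# Vehicle speed from the rate at which regularly spaced beacons are passed.
def speed_kmh(beacon_spacing_m, beacons_per_second):
    return beacon_spacing_m * beacons_per_second * 3.6   # m/s converted to km/h

print(speed_kmh(beacon_spacing_m=25.0, beacons_per_second=0.6))   # 54.0 km/h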
The use of radio beacons is common when ISA systems are used to control vehicle speeds in
off road situations, such as factory sites, logistics and storage centers, etc., where occupational
health and safety requirements mean that very low vehicle speeds are required in the vicinity of
workers and in situations of limited or obscured visibility.
Optical recognition systems. So far, this technology has been focused solely on recognizing
speed signs or road markings. However, other roadside objects, such as the reflective "cats eyes"
that divide lanes could possibly be used. This system requires the vehicle to pass a speed sign or
similar indicator and for data about the sign or indicator to be registered by a scanner or a camera
system. As the system recognizes a sign, the speed limit data is obtained and compared to the
vehicles speed. The system would use the speed limit from the last sign passed until it detects
and recognizes a speed sign with a different limit. If speed signs are not present, the system does
not function. This is a particular problem when exiting a side road onto a main road, as the
vehicle may not pass a speed sign for some distance.
Dead reckoning (DR) uses a mechanical system linked to the vehicle's driving assembly in order to predict the path taken by the vehicle. By measuring the rotation of the road wheels over time, a fairly precise estimation of the vehicle's speed and distance traveled can be made. Dead reckoning requires the vehicle to begin at a known, fixed point. Then, by combining speed and distance data with factors such as the angle of the steering wheel and feedback from specialized sensors (e.g., accelerometers, flux gate compass, gyroscope), it can plot the path taken by the vehicle. By overlaying this path onto a digital map, the DR system knows approximately where the vehicle is, what the local speed limit is, and the speed at which the vehicle is traveling. The system can then use information provided by the digital map to warn of upcoming hazards or points of interest and to provide warnings if the speed limit is exceeded. Some top-end GPS-based navigation systems currently on the market use dead reckoning as a backup system in case the GPS signal is lost. Dead reckoning is prone to cumulative measurement errors, such as variations between the assumed circumference of the tyres and the actual dimension (which is used to calculate vehicle speed and distance traveled). These variations in the tyre circumference can be due to wear or to variations in tyre pressure caused by changes in speed, payload, or ambient temperature. Other measurement errors accumulate when the vehicle navigates gradual curves that inertial sensors (e.g., gyroscopes and/or accelerometers) are not sensitive enough to detect, or due to electromagnetic influences on magnetic flux compasses (e.g., from passing under power lines or when travelling across a steel bridge) and through underpasses and road tunnels.
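The core of dead reckoning is a simple integration of wheel-derived speed and heading from a known start point, which also makes clear why a small tyre-circumference error accumulates into a position drift. A minimal Python sketch with made-up samples:

import math

# Integrate (speed, heading) samples from a known start; a biased speed scale
# (e.g. a wrong assumed tyre circumference) drifts the estimated position.
samples = [(10.0, 0.0), (10.0, 5.0), (10.0, 10.0), (10.0, 10.0)]   # (m/s, degrees), invented
dt = 1.0                                                           # seconds between samples

def integrate(samples, dt, speed_scale=1.0):
    x = y = 0.0
    for speed, heading_deg in samples:
        heading = math.radians(heading_deg)
        x += speed_scale * speed * dt * math.cos(heading)
        y += speed_scale * speed * dt * math.sin(heading)
    return round(x, 2), round(y, 2)

print(integrate(samples, dt))                     # nominal track
print(integrate(samples, dt, speed_scale=1.02))   # 2% circumference error shifts the fix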
Limitations
An initial reaction to the concept of ISA is that there could be negative outcomes, such as
driving at the speed limit rather than to the conditions, but numerous ISA trials around the world have shown these concerns to be unsubstantiated. A particular issue is that most ISA systems use
a speed database based purely on information regarding the posted maximum speed limit for a
roadway or roadway segment. Obviously, many roads have features such as curves and gradients
where the appropriate speed for a road segment with these features is less than the posted
maximum speed limit. Increasingly, road authorities indicate the appropriate speed for such
segments through the use of advisory speed signage to alert drivers on approach that there are
features which require a reduction in travelling speed. It is recognised that the speed limit
databases used in ISA systems should ideally take account of posted advisory speeds as well as
posted maximum speed limits. The New South Wales ISA trial, currently underway in the Illawarra region south of Sydney, is the only trial that is using posted advisory speeds as well as posted
maximum speed limits.
Some car manufacturers have expressed concern that some types of speed limiters "take
control away from the driver". This is also unsubstantiated, firstly because ISA systems do have
provision for over-ride by the driver in the event that the set speed is inappropriate and secondly,
the claim is somewhat hypocritical given that cruise control has been in use on vehicles for many
years and forces the vehicle to travel at a minimum speed unless there is driver intervention.
For some traffic safety practitioners, active intelligent speed adaptation is thought to be an
example of 'hard automation', an approach to automation that has been largely discredited by the
Human Factors community. An inviolable characteristic of human users is that they will adapt to
these systems, often in unpredictable ways. Some studies have shown that drivers 'drive up to the
limits' of the system and drive at the set speed, compared to when they are in manual control,
where they have been shown to slow down. Conversely, the experience of some drivers with
driving under an active ISA system has been that they find they can pay more attention to the
roadway and road environment as they no longer need to monitor the speedometer and adjust
their speeds on a continuing basis.
There is also concern that drivers driving under speed control might accept more risky
headways between themselves and vehicles in front and accept much narrower gaps to join
traffic (this fact drawing particular criticism from motorcycling groups).
Wider criticism also comes from the insistent focus on speed and that road safety outcomes
could be better achieved by focusing on driving technique, situational awareness, and automation
that 'assists' drivers rather than 'forces' them to behave in particular ways. Intelligent speed
adaptation has also been held up as an example of a technology which, like speed cameras, can often
alienate the driving public and represents a significant barrier to its widespread adoption.
Some studies which pre-date the development of ISA systems indicated that drivers make
relatively little use of the speedometer and instead use auditory cues (such as engine and road
noise) to successfully regulate their speed. These studies, however, remain unverified. There is
an argument in the literature that suggests that as cars have become quieter and more refined
speed control has become more difficult for drivers to perform. Thus an alternative 'soft-automation' approach is simply to re-introduce some of those cues that drivers naturally use to
regulate speed (rather than incur the expense and unexpected behavioral adaptations of ISA).
Benefits
RTA (NSW, Australia) ISA trial results showed that the benefits of ISA are improved speed zone compliance, with a reduction in the level and duration of speeding.
A cost-benefit analysis of ISA in Australia, published in April 2010 by the Centre for Automotive Safety Research, suggested that advisory ISA would reduce injury crashes by 7.7% and save $1,226 million per year. These figures were 15.1% and $2,240 million for supportive ISA and 26.4% and $3,725 million for limiting ISA.
The confirmation by the Australian research of the benefits of ISA has resulted in a recommendation for the wider adoption and promotion of ISA in the Australian National Road Safety Strategy 2011-2020.
Real and perceived benefits of ISA are a reduction of accident risks and reductions of
noise and exhaust emissions.
Commercial use
Some road safety researchers are surprised that Australia is leading the world with this
technology. Australia's advanced commercialisation of ISA has in part been underpinned by
initiatives from the various state roads authorities, and the inclusion of ISA in the National and
State Road Safety Strategies.
SpeedAlert is a passive ISA product marketed by Smart Car Technologies, based in Sydney
NSW. It offers full national speed zoning information embedded within a GPS-based navigation
system, providing drivers with information on speed limits and vehicle speed, as well as related
information on locations such as schools, railway level crossings, speed camera sites, etc. The fleet solution sells for about A$200; a free consumer version, 'SpeedAlert Live' for iPhone, was released on 22 July 2012 in the Australian iTunes app store.
SpeedShield is an active ISA product marketed by Automotion Control Systems, based in
Melbourne, Vic. It offers speed zoning information embedded within a GPS-based navigation
system, providing drivers with information on speed limits and vehicle speed and is combined
with technology that intervenes and controls the vehicle speed to no faster than the posted speed
limit for that section of roadway. The technology is generally transferable across vehicle manufacturers and models, but must be configured for an individual make and model. As the cost is variable (estimated to be A$13,000 depending on vehicle type and number of vehicles to be fitted), its commercial use has tended to be in vehicle fleet operations rather than among private owners.
Coredination ISA is a passive ISA product marketed by Coredination, based in Stockholm,
Sweden. This product is built as a smartphone-application for Android and iPhone. It offers full
national speed zoning information, providing drivers with information on speed limits and
vehicle speed. The product is very lightweight and no separate hardware or fixed installations are
necessary.
Government implementation

As of 2013, adoption of the technology was being considered by the European Commission but was being strongly opposed by the UK transport secretary, Patrick McLoughlin. A government spokesman described the proposal as "Big Brother nannying by EU bureaucrats."

1. Implementation - the process for putting a design, plan or policy into effect.
2. Signage - the design or use of signs and symbols.
3. To impose - to establish or apply as compulsory.
4. Traffic congestion - traffic jam.
5. Conform - to be or act in accord with a set of standards, expectations, or specifications.
6. Haptic - relating to or based on the sense of touch.
7. Fleet management - the management of a company's transportation fleet. Fleet management includes commercial motor vehicles such as cars, ships, vans and trucks, as well as rail cars.
8. Ephemeris - celestial navigation, also known as astronavigation, is a position fixing
technique.
9. Obscure - not readily noticed or seen.
10. "Cats eyes" - the cat's eye is a retroreflective safety device used in road marking and was
the first of a range of raised pavement markers.
11. To intervene to involve oneself in a situation so as to alter or hinder an action or development.

Paulius Motekaitis RTE-3

Why the electric car has no (wireless) future

The electric car is not a technology of the future, but of the past.
The electric car is 170 years old. This may sound surprising, but e-cars predate automobiles with a
combustion engine. They were driven out of the market at the beginning of the 20th century
because petrol engines had significantly better mileage. One century later, the electric car still faces
the same fundamental problems. Furthermore, the need for batteries makes them eco-unfriendly
by nature. The only possible green future for electric cars is a wired future: hooked up to the
overhead lines, like trolleybuses and bumper cars.
---------------------------------------------------------------------------------------------------------------------------------------------
"Like electric cars, the environmental score of a trolleybus depends on the way the electricity was
generated. However, a trolleybus does not face the problem of energy storage."
---------------------------------------------------------------------------------------------------------------------------------------------

Car manufacturers want us to believe that the automobile will soon become a completely harmless
technology, comparable to a bicycle or a pair of roller skates - because the car of the near future will
run on electricity. Hybrid cars, which combine an electric motor with a combustion engine, are
already heavily hyped, and fully electric cars are not far behind. Unfortunately, most electricity is produced
by means of fossil fuels. That means electric cars will not lower traffic emissions, except in the
statistics: the air pollution and the CO2 that are now attributed to road traffic will then be
filed under energy production.

A power plant has a higher efficiency than a car's engine, and an electric motor has a higher
efficiency than a combustion engine. Herein lies a potential advantage for the environment, even if
the electricity needed to power electric cars is generated by fossil fuels. And let's be optimistic for
once and assume that we will have a 100 percent green electricity infrastructure in 10 or 20 years'
time (let's call that hugely optimistic, since it's already an enormous challenge to green the existing
infrastructure, and a massive introduction of electric cars means we have to build many more power
plants).
6,000 batteries
So, then, we have green cars, right? Alas, no. The electric car has a serious environmental drawback
compared to a car running on a combustion engine: the battery. The fuel tank of an electric car
consists of hundreds of connected batteries, each of them comparable to the battery of a mobile
phone (the Tesla Roadster, an electric sports car, has more than 6,000 of them). After some years,
they all have to be replaced, and even before that their storage capacity declines.
The environmental profit gained by a higher efficiency (or by green electricity) will be negated
completely by the massive amount of batteries required. Batteries have to be manufactured, and that
process is very energy-intensive and environmentally harmful. Batteries also have to be discarded
or recycled - both processes again require extra energy and inflict environmental harm.

Mileage
Batteries are the flaw of electric cars, and not only when considering the environment. Electric cars
are not yet a reality because of the limited mileage of their fuel tanks. At best, an electric car can
drive 100 miles. After that, the car has to be plugged in for hours. There are several possible ways
to get around that. We could set up a system where a driver can swap batteries at a fuel station, or
even a system where a driver could swap cars.
Another way to improve mileage is to make cars lighter and slower, as opposed to the present
trend of cars getting heavier and faster year after year. In its present form and use, however, the
car is inconceivable without the combustion engine. And that's nothing new. The same fundamental
problems were the reason that electric and hybrid cars were driven out of the market at the
beginning of the 20th century. (Picture: The Jamais Contente, an electric car from 1899)
170 years old
It may sound surprising, but the first electric car was invented around 1835, half a century earlier
than the first gasoline powered car. In the late 19th and early 20th century electric cars were more
popular than gasoline (and steam) powered cars (see this overview of early electrics). In 1899, an
electric car was the first to break the speed barrier of 100 km/h (62 mph). The Jamais Contente
(picture above), as it was called, weighed 1,500 kilograms and the two batteries made up half of
that weight. In 1916, the first hybrid car was invented, equipped with both an internal combustion
engine and an electric motor. The Toyota Prius is nothing new.
The explanation for the success of the electric car was the fact that it did not require a manual effort
to start the engine, and that gear changes were not required. Gasoline powered cars needed the use
of a hand crank to start, and the need to shift gears made driving more difficult. Moreover, since
most driving was done over short distances in town, and there were no good roads elsewhere, the
limited mileage was not a problem. The expansion of the road network, however, and the invention
of the starter motor for gasoline cars, put an end to the success of the electric car. Around 1930, it
disappeared almost completely. It resurged only in the sixties and seventies, when environmental
concerns arose.
Trolleybus & trolleytruck
However, only the electric passenger car disappeared. Around the same time that the popularity of
the electric car started to decline, networks of trolleybuses appeared in cities around the world. A
trolleybus has an electric motor, but it needs no battery since it gets its electricity from the overhead
cables like an electric train or a tram (streetcar). It is cheaper than a tram because there is no need
for rails. It is also slightly more flexible: a badly parked car will not stop it.
Trolleybus lines were built all around the world, and in 360 cities they are still operating today.
Although much less widespread than trolleybuses, trolleytrucks were used too (especially in
Russia). They made use of the existing trolleybus infrastructure (like this van in Barcelona, picture
below) or they used a specially built infrastructure (like the line in Ukraine, picture above).

In most countries, trolleybuses disappeared in the fifties and the sixties to make way for the diesel
bus, which was cheaper (and more flexible). The technology, however, is very noteworthy. Like
electric cars, the environmental score of a trolleybus depends on the way the electricity was
generated. However, a trolleybus (just like a tram or an electric train) does not face the problem of
energy storage. Therefore, the problem with the electric car is not that it's electric, but that it's
wireless.
Trolleycars?
Would it be possible to design wired electric passenger cars? The only wired passenger cars that
exist today are the bumper cars at the funfair. Technically, it's perfectly possible to build a similar
widespread network for wired passenger cars, based on the technology of bumper cars or
trolleybuses. There is no new high-tech needed for that. In fact, the basic concept of bumper cars
was originally designed as a method of transporting goods and people.
However, it would mean overhead lines or grids as far as you can see if we wanted to keep the
absolute freedom of movement of the passenger car. Such a system would be easier to apply on
highways, but that would mean that all cars have to maintain the same speed and that they are not
able to pass one another, unless the grid system of bumper cars is copied.
Either way, such systems raise the question of why we do not use trams or trains instead.
Conceptual problem?
The technological problems that electric cars have faced for more than a century now, compared to
the smooth operation of electric trains, trams, trolleybuses and bumper cars, may mean that the
wireless electric car simply has no future. Its problem might not be technological, but conceptual.
Certainly, better batteries will improve the mileage and the charging time of electric cars, but it's
hard to see how they can do that without harming the environment, while the sole reason for the
existence of electric cars today is that they are better for the environment.

Combustion engine - a type of engine used in most cars that produces power by burning petrol/gas
or other fuel inside
Herein - in this place, document, statement or fact
Inflict - to make somebody/something suffer something unpleasant.
Hand crank - a bar and handle in the shape of an L that you pull or turn to produce movement in a
machine.
The expansion - an act of increasing or making something increase in size, amount or importance.
Concern - to worry somebody
Arose (past of arise) - to happen as a result of a particular situation.
Widespread - existing or happening over a large area or among many people
Trolleytrucks - a truck driven by electricity from a cable above the street
Noteworthy - deserving to be noticed or to receive attention because it is unusual, important or
interesting

Sole - only; single


Internal - connected with the inside of something
Resurged (past of resurge) - to become stronger or more popular again

Karolis Navickas RTE-3

ESP:
Electronic Stability Program
If your vehicle has ESP on board, it provides you with two other active safety systems: the
Antilock Braking System (ABS) and the Traction Control System (TCS). ABS prevents the wheels
from locking during braking; TCS prevents the wheels from spinning when starting off and
accelerating. While ABS and TCS intervene in a vehicle's longitudinal dynamics, ESP
additionally improves the lateral dynamics, thus ensuring stable driving in all directions.
ESP - different names for the same safety benefit
80 percent of vehicle manufacturers in Europe use the acronym ESP for the Electronic Stability
Program. Some carmakers market ESP under different names, such as DSC (Dynamic
Stability Control), VSA (Vehicle Stability Assist) or VSC (Vehicle Stability Control). The
functionality and operation of the system, as well as the gain it provides in driving safety, are the
same.

How does ESP work?


Skidding is one of the main causes of road crashes. International studies show that at least 40
percent of all fatal traffic crashes are caused by skidding. ESP could prevent up to 80 percent of
all skidding crashes. ESP recognizes if skidding is imminent and intervenes very quickly. The
driver stays in control of the vehicle and does not get into a skid provided that the physical limits
are not exceeded.
ESP is always active. A microcomputer monitors the signals from the ESP sensors and checks
25 times a second whether the driver's steering input corresponds to the actual direction in which
the vehicle is moving. If the vehicle moves in a different direction, ESP detects the critical
situation and reacts immediately, independently of the driver. It uses the vehicle's braking system
to steer the vehicle back on track. With these selective braking interventions, ESP generates the
desired counteracting force, so that the car reacts as the driver intends. ESP not only initiates
braking intervention, but can also intervene on the engine side to accelerate the driven wheels. So,
within the limits of physics, the car is kept safely on the desired track.
ABS, TCS, and ESP were all introduced first to the market by Bosch.
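
To make the comparison step concrete, here is a minimal Python sketch of the idea, assuming a
simplified single-track (bicycle) model and invented thresholds and sensor values; the real Bosch
controller is of course far more elaborate.

import math

def desired_yaw_rate(steering_angle_rad, speed_mps, wheelbase_m=2.7):
    # Simplified single-track model: the rotation the driver "asks for",
    # computed from steering input and vehicle speed (wheelbase assumed).
    return speed_mps * math.tan(steering_angle_rad) / wheelbase_m

def esp_check(steering_angle_rad, speed_mps, measured_yaw_rate, threshold=0.05):
    # Called about 25 times per second: compare intended and actual rotation.
    error = desired_yaw_rate(steering_angle_rad, speed_mps) - measured_yaw_rate
    if abs(error) < threshold:
        return "no intervention"
    if error < 0:
        # Car rotates more than intended (oversteer): brake the outer front wheel.
        return "brake outer front wheel, reduce engine torque"
    # Car rotates less than intended (understeer): brake the inner rear wheel.
    return "brake inner rear wheel, reduce engine torque"

# Example: entering a bend at 90 km/h, the car turns far less than the steering asks for.
print(esp_check(steering_angle_rad=0.03, speed_mps=25.0, measured_yaw_rate=0.10))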

Example:

Technical specifications
Components of the Electronic Stability Program ESP from Bosch

1. ESP hydraulic unit with integrated Engine Control Unit (ECU)
2. Wheel speed sensors
3. Steering angle sensor
4. Yaw-rate and lateral-acceleration sensor
5. Communication with engine management ECU

Hydraulic unit with attached control unit

The hydraulic unit executes the commands from the control unit and regulates, via solenoid valves,
the pressure in the wheel brakes. The hydraulic modulator is the hydraulic connection between the
master cylinder and the wheel cylinders. It is located in the engine compartment. The control unit
takes over the electrical and electronic tasks as well as all control functions of the system.
Wheel-speed sensor

The control unit uses the signals from the wheel-speed sensors to compute the speed of the wheels.
Two different operating principles are used: passive and active wheel-speed sensors (Inductive and
Hall-effect sensors). Both measure the wheel speed in a contact-free way via magnetic fields.
Nowadays active sensors are mostly employed. They can identify both the direction of rotation and
the standstill of a wheel.
Steering-angle sensor

The task of the steering-angle sensor is to measure the position of the steering wheel by determining
the steering angle. From the steering angle, the vehicle speed and the desired braking pressure or the
position of the accelerator pedal, the driving intention of the driver is calculated (desired state).
Yaw-rate and lateral-acceleration sensor

A yaw-rate sensor registers all the movements of the vehicle around its vertical axis. In combination
with the integrated lateral-acceleration sensor, the status of the vehicle (actual state) can be
determined and compared with the driver's intention.
Communication with engine management

Via the data bus, the ESP control unit is able to communicate with the engine control unit. In this
way, the engine torque can be reduced if the driver accelerates too much in certain driving
situations. Similarly, it can compensate for excessive slip of the driven wheels provoked by the
engine drag torque.

imminent- (especially of something unpleasant) likely to happen very soon


intervenes- to become involved in a situation in order to improve or help it
counteracting- to do something to reduce or prevent the bad or harmful effects of something
intervention- action taken to improve or help a situation
principles- a moral rule or a strong belief that influences your actions
determining- to discover the facts about something; to calculate something exactly
torque- a twisting force that causes rotation, e.g. of machinery
longitudinal- running lengthwise rather than across
excessive- greater than what seems reasonable or appropriate
provoked- to cause a particular reaction or have a particular effect
compartment- one of the separate sections which a coach/car on a train is divided into
ensuring- to make sure that something happens or is definite

Simonas Pilibaitis RTE-3


New words

1. Forerunner - predecessor; ancestor; precursor. A person who appears in advance to announce the coming of someone
or something else;

2. Observe - to notice
3. Incandescent - emitting light as a result of being heated to a high temperature;
4. Filament - (Electrical Engineering) the thin wire, usually tungsten, inside a light bulb that emits
light when heated to incandescence by an electric current;
5. Oscillate - to move or swing from side to side regularly.

6. Rectify - to put right; correct; remedy.


7. Exploit - to make the best use of;
8. Conceive - to form or develop in the mind;
9. Crackle - To make a succession of slight sharp snapping noises
10. Attenuation - To reduce in force, value, amount, or degree; weaken.
11. Fidelity - accuracy in reporting detail.
12. Emanate - to send forth; emit.

http://www.ias.ac.in/resonance/Volumes/07/01/0053-0063.pdf

Rytis Pilkauskas RTE-3

INTEGRATED AUTOMOTIVE SENSORS


INTRODUCTION
The spinoff of Department of Defense products in the 1980s and 1990s has brought military
technologies into the daily life of the average consumer in many ways. Examples include previously
classified Desert Storm electrooptical sensors that are now available on high-end model Cadillacs,
handheld global positioning systems (GPSs) for the weekend hiker or sailor, satellite
communications systems that bring hundreds of television channels into the home, and
multiwavelength and graded index optical fibers for high-speed Internet connections. Military radar
technology has been available to the commercial market for many years in air traffic control, high
seas, weather, wind shear detection, small boat, and police applications. We are currently on the
verge of an era in which radar use, technology, and terminology will become part of the everyday
world, just as the personal computer has permeated our world today.
Commercial radars extract from their military origins the essential building blocks, algorithms,
hardware, and software needed to provide the consumer with a radar that may have less
performance, but will still provide useful and desired functionality at an affordable price.
Improvements in these units due to the large volume and marketplace pressures can then, in turn,
provide cost savings at the component/subassembly level to both military and commercial markets.

AUTOMOTIVE RADAR SENSORS


Automotive radar sensors have been in development since the early 1970s as aids in detecting
objects ahead of, adjacent to, and behind a vehicle. Simple backup sensors and side-looking blind-spot
detection sensors are commercially available today. With the recent addition of a forward-looking
radar system (FLRS) sensor to this suite, it is now possible to provide a full 360-degree envelope
of coverage around the vehicle.
The automotive FLRS is by far the most complex, but promises to be the most revolutionary of
these developments. Automotive forward-looking radar provides a driver with a precise all-weather
representation of the roadway environment ahead to beyond 100 m (and to the side), allowing cruise
control to take on another dimension, automatically regulating itself with the relative motion of
traffic. It can also provide detection and warning, and perhaps some degree of vehicle control.
The near object detection system (NODS) is a shorter-range radar sensor that is positioned in
multiple locations around the vehicle to detect cars, motorcycles, bicycles, and people in the
vicinity of the vehicle, for blind-spot detection, and as an aid in parking and backing up.
These automotive radar sensors are exposed to extreme temperatures, salt spray, shock, vibration,
and abuse that in many ways are similar to the military conditions that the radar industry is
accustomed to. They must tolerate these conditions while demonstrating high precision and
unquestioned reliability. To be marketable, they must be priced for the consumer, have an outward
physical appearance that does not violate the sensibilities of the automobile stylists, and must be
easy for the consumer to use and understand. With the rapid development of monolithic microwave
integrated circuits (MMICs), application-specific integrated circuits (ASICs), and data-processing
technology over the last decade, however, the realization of inexpensive, yet sophisticated and
reliable systems of this nature, in quantities of 1000s, is upon us.

FLRS
Raytheon's FLRS is a bistatic, frequency-modulated continuous wave (FM/CW) radar operating at
77 GHz with electronic scanning capability of nine beams and detection range of beyond 100 m.
The functions of the nine beams include two for self-alignment, diagnostic tests, and overpass
detection, and seven for vehicle tracking. Embedded within the unit is software capable of path
prediction, discrimination between moving and stopped objects, overhead object discrimination
(overpasses, signs, etc.), and multipath mitigation. The unit is housed in a single box that
accommodates the transmit and receive antenna as well as the RF circuitry and digital-processing
devices.
FLRS sensors have been conceived as components currently used in intelligent cruise control. In
this application, the sensor must be capable of accurately and reliably detecting objects (vehicles,
pedestrians, and obstacles) in the forward path of the host vehicle. A fundamental requirement is
that they have to detect objects in the path of the host vehicle as either the object or host changes
direction and moves out of the host's direct line of travel. As the host vehicle approaches a curve in
the roadway, prior to actually turning itself, the sensor must maintain track on all the forward-located objects in its field of view (FOV) and understand that if they are all moving in the same
angular direction, it is the roadway that is curving, and not a particular object changing lanes. With
the host vehicle and the objects both in the curve, the FLRS must maintain track on the lead vehicle
in the same lane, even if it is not in the direct line of the host vehicle. This is called path prediction.
The path prediction approach that has been developed by Raytheon Electronic Systems, Sudbury,
MA, includes an FLRS for collecting range, angle, and velocity data information on objects within a
FOV in front of the vehicle, measuring systems for providing vehicle speed and yaw rate data, and a
processing system responsive to the forward-looking sensor and measuring systems. The
methodology consists of: 1) calculating an estimated path of the vehicle based on its speed and yaw
rate; 2) calculating estimated paths for each of the objects; 3) determining the lateral distance of
each object from the predicted path of the vehicle; and 4) classifying each object as either in or out
of the highway lane of the vehicle.
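
As an illustration only, the four steps can be sketched in Python roughly as follows; the small-angle
geometry, the lane half-width and all the numbers are assumptions for the example and not the
published Raytheon algorithm.

import math

def host_path_curvature(speed_mps, yaw_rate_rps):
    # Step 1: estimated path of the host vehicle from its speed and yaw rate
    # (curvature = yaw rate / speed); very low speeds are treated as straight.
    return yaw_rate_rps / speed_mps if speed_mps > 0.5 else 0.0

def lateral_offset(obj_range_m, obj_angle_rad, curvature):
    # Steps 2 and 3: lateral distance of the object from the predicted arc.
    # For small angles the arc's lateral displacement at range R is curvature * R^2 / 2.
    y_object = obj_range_m * math.sin(obj_angle_rad)
    y_path = 0.5 * curvature * obj_range_m ** 2
    return y_object - y_path

def in_host_lane(obj_range_m, obj_angle_rad, speed_mps, yaw_rate_rps, lane_half_width_m=1.8):
    # Step 4: classify the object as in or out of the host vehicle's lane.
    curvature = host_path_curvature(speed_mps, yaw_rate_rps)
    return abs(lateral_offset(obj_range_m, obj_angle_rad, curvature)) < lane_half_width_m

# A vehicle 60 m ahead, one degree to the left, while the host corners gently.
print(in_host_lane(60.0, math.radians(1.0), speed_mps=27.0, yaw_rate_rps=0.01))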
Characteristic of the FM/CW waveform used is the generation of highly linear up and down
frequency ramps. Tracking objects in frequency, not space, minimizes calculation load and memory
requirements. The input to the signal processor is the frequency difference between the transmitted
ramp (either up or down) and the delayed Doppler-shifted object return. Based on the return from a
single ramp, a frequency track file is generated. Subsequent radar actions repeat these
measurements and refine the track file of each object. As required, the frequency track files from
separate up and down ramps are compared, and range, range rate, and acceleration information
extracted.
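
For orientation, the standard textbook FM/CW relations behind that up/down-ramp comparison can
be written as a short Python sketch; the sweep bandwidth, sweep time and beat frequencies below
are illustrative values, not the sensor's actual parameters.

C = 3.0e8  # speed of light in m/s

def range_and_range_rate(f_beat_up_hz, f_beat_down_hz,
                         sweep_bandwidth_hz=200e6, sweep_time_s=1e-3, carrier_hz=77e9):
    # The up-ramp beat is (range term - Doppler term) and the down-ramp beat is
    # (range term + Doppler term), so averaging and differencing separates them.
    f_range = 0.5 * (f_beat_up_hz + f_beat_down_hz)
    f_doppler = 0.5 * (f_beat_down_hz - f_beat_up_hz)
    target_range_m = C * sweep_time_s * f_range / (2.0 * sweep_bandwidth_hz)
    closing_speed_mps = C * f_doppler / (2.0 * carrier_hz)
    return target_range_m, closing_speed_mps

# Beat frequencies consistent with a target about 75 m away, closing at roughly 10 m/s.
print(range_and_range_rate(f_beat_up_hz=95_000, f_beat_down_hz=105_000))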
Objects are also tracked in angle. The angular information is derived from the presence of the object
in different azimuth antenna beams. An object could appear in one or more adjacent beam locations.
Should objects appear in two adjacent beams, each with the same track history, it is assumed to be
one object located in the area between the beam centers, and the actual angular position is
calculated from the relative power levels of the radar echoes in the two beams. This, with the
information from the processor about the object's range, allows all objects to be located in polar
coordinates, with the radar at the origin.
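
The two-beam interpolation can be pictured as a simple power-weighted average of the two beam
centres; the beam spacing, echo powers and the linear weighting in this Python sketch are invented
for illustration and are not the production algorithm.

import math

def bearing_from_two_beams(power_a, power_b, beam_a_deg, beam_b_deg):
    # The stronger echo pulls the angle estimate toward its own beam centre.
    weight_a = power_a / (power_a + power_b)
    return weight_a * beam_a_deg + (1.0 - weight_a) * beam_b_deg

def polar_to_cartesian(range_m, bearing_deg):
    # Place the object in x/y coordinates with the radar at the origin.
    bearing_rad = math.radians(bearing_deg)
    return range_m * math.cos(bearing_rad), range_m * math.sin(bearing_rad)

# The same track seen in two adjacent beams centred at 0 and 2 degrees.
bearing = bearing_from_two_beams(power_a=1.3, power_b=1.0, beam_a_deg=0.0, beam_b_deg=2.0)
print(bearing, polar_to_cartesian(80.0, bearing))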

NODS
Raytheon's NODS is a 24-GHz FM/CW radar that uses a linear 180-MHz 1-ms chirp. FM/CW radar
has the advantages of high sensitivity, relatively low transmitter power, and good range resolution.
An upchirp is used to induce a forward prediction time of 0.1 s. Baseband video is filtered,
digitized, fast Fourier transformed (FFTed), and thresholded for object detection. Range gates are
reprogrammable in software to alter the detection zone for different vehicle sizes and
configurations. Seven sequential beams arrayed in azimuth cover the FOV for the sensor. The
selection of seven was accomplished by a tradeoff between alert zone sharpness, sidelobe levels,
and antenna cost. The seven beams are formed by combining adjacent pairs from eight switched
beams generated by a Butler matrix feeding a 6 × 8 array of waveguide slot apertures. The bistatic
antenna assemblies (transmit and receive) provide feedthrough isolation and are fabricated in
multilayer low-temperature co-fired ceramic (LTCC). The NODS physical configuration consists of
a two-piece housing with a base and a cover, a top substrate containing the antenna and RF transmit
and receive circuits, a second substrate in the base containing the IF circuits, flex circuits to connect
the RF transmit and receive circuits to the IF circuits, and an electromagnetic interference (EMI)
shield to reduce the amount of stray energy radiated from the sensor.
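
A toy version of that "filter, digitize, FFT, threshold" detection chain, written in Python with
NumPy; the sampling rate, window, threshold and synthetic signal are all made up for the example.

import numpy as np

def detect_beat_frequencies(baseband_samples, sample_rate_hz, threshold_db=12.0):
    # FFT the digitized baseband return; for an FM/CW radar each frequency bin
    # corresponds to a range gate. Report bins that clear the noise floor.
    windowed = baseband_samples * np.hanning(len(baseband_samples))
    power_db = 20.0 * np.log10(np.abs(np.fft.rfft(windowed)) + 1e-12)
    noise_floor_db = np.median(power_db)
    hit_bins = np.where(power_db > noise_floor_db + threshold_db)[0]
    bin_freqs = np.fft.rfftfreq(len(baseband_samples), d=1.0 / sample_rate_hz)
    return bin_freqs[hit_bins]

# Synthetic return: one object producing a 40 kHz beat tone buried in noise.
fs = 400_000
t = np.arange(4096) / fs
echo = np.sin(2 * np.pi * 40_000 * t) + 0.1 * np.random.randn(t.size)
print(detect_beat_frequencies(echo, fs))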

NIGHT VISION
Night Vision is a thermal imaging system developed by the Raytheon Company that made its debut
as an option on Cadillac's 2000 DeVille model. The Night Vision system uses an uncooled thermal
camera mounted behind a car's front grill. Images from the camera are viewed with a heads up
display on the bottom of the driver's windshield. Night Vision is unaffected by ambient light and
provides an image that allows drivers to see beyond the glare of oncoming headlamps. During night
time driving, when more than one-half of all traffic fatalities occur, the Night Vision system uses
the heads up display to project real-time thermal images onto the lower part of the driver's
windshield. The projected image resembles a black-and-white photographic negative: hotter
objects (such as humans or animals) appear white and cooler objects appear black. Night Vision
evolved from a militarized driver's vision enhancer that is standard equipment on the Bradley
fighting vehicle with over 1000 systems in service.

TRACKING AND FUSION


The use of both millimeter-wave radar and IR-based sensors in an integrated automotive application
takes advantage of the best properties of each. IR sensing of the region directly in front of the
vehicle provides the driver with a long-range visual image. The 77-GHz FLRS system accurately
provides high-resolution all-weather object information to the intelligent cruise control. The 24-GHz NODS units provide programmable detection zones for side-object detection, backup aids, and
parking assistance.
The development of small low-cost vehicular sensors, as described above, is critical to the success
of intelligent highway systems. However, the signal processing, tracking algorithms, and reliability
of the integrated system will go a long way toward satisfying the consumer and ultimately
determining whether this multiple sensor system is readily accepted. Complex tracking, detection,
and low false-alarm-rate algorithms must seamlessly fuse data from each of the sensors.
Usable data needs to be extracted from the sensors based on specific detection criteria, with reduced
interference and high probability of detection. Tracking algorithms must initiate individual object

tracks, continuously maintain track on multiple objects, update the track file data, and intelligently
drop and reacquire tracks.
Scenario assessment algorithms need to include identification of the most important road hazard(s),
reconfiguring the operating mode of sensors with respect to the specific hazard type and upcoming
hazard predictions. This is done with a mixture of deterministic behavior and probabilistic behavior
algorithm designs, while maintaining maximum detection and minimum false alarms. The different
types of road hazards need different approaches.

CONCLUSION
As highway congestion and distractions continue to increase, automotive manufacturers and
consumers look toward technology to improve and enhance safety. The sensors described here,
individually or together, provide a new driver awareness of the environment surrounding the vehicle
and may lead to a safer highway. The less mature automotive forward-looking radar application
requires the development of industry standards for functionality, performance, and interfaces with
other intelligent vehicle highway system (IVHS) components before it can realize its full market
potential. The future of automotive radar may remain solely as a component of an enhanced cruise
control or blind-spot detection without continued collaboration between industry and NHTSA to
develop vehicular collision avoidance as a component of an overall IVHS. It is envisioned that, in
the IVHS application, the radar suite will be integrated with GPS, automated steering control and
braking, other on-board sensors, and numerous embedded roadway sensors to assist in the total
control of the vehicle.
Source: M. E. Russell, C. A. Drubin, A. S. Marinilli, W. G. Woodington, and M. J. Del Checcolo,
IEEE Transactions on Microwave Theory and Techniques, March 2002.

New words
Wind shear - variation in wind velocity occurring along a direction at right angles to the wind's
direction and tending to exert a turning force;
Permeate - spread throughout (something);
Vicinity - the area near or surrounding a particular place;
Conceive - form or devise (a plan or idea) in the mind;
Yaw - twist or oscillate about a vertical axis;
Adjacent - (geometry, of a pair of angles) formed on the same side of a straight line when
intersected by another line;
Chirp - a short, sharp, high-pitched sound;
Aperture - an opening, hole, or gap;
Vehicular - of, relating to, or for vehicles;
Congestion - the state of being so crowded with traffic or people as to hinder or prevent freedom of
movement;
Solely - not involving anyone or anything else; only;

Edgaras Reingardas RTE-3


Internal Combustion
The principle behind any reciprocating internal combustion engine: If you put a tiny amount of high-energy
fuel (like gasoline) in a small, enclosed space and ignite it, an incredible amount of energy is released in
the form of expanding gas. You can use that energy to propel a potato 500 feet. In this case, the energy
is translated into potato motion. You can also use it for more interesting purposes. For example, if you
can create a cycle that allows you to set off explosions like this hundreds of times per minute, and if you
can harness that energy in a useful way, what you have is the core of a car engine!
Almost all cars currently use what is called a four-stroke combustion cycle to convert gasoline into
motion. The four-stroke approach is also known as the Otto cycle, in honor of Nikolaus Otto, who
invented it in 1876. The four strokes are illustrated in Figure 1. They are:

Intake stroke

Compression stroke

Combustion stroke

Exhaust stroke
Figure 1
You can see in the figure that a device called a piston replaces the potato in the potato cannon. The
piston is connected to the crankshaft by a connecting rod. As the crankshaft revolves, it has the effect
of "resetting the cannon." Here's what happens as the engine goes through its cycle:

1. The piston starts at the top, the intake valve opens, and the piston moves down to let the engine take in a
cylinder-full of air and gasoline. This is the intake stroke. Only the tiniest drop of gasoline needs to be
mixed into the air for this to work. (Part 1 of the figure)
2. Then the piston moves back up to compress this fuel/air mixture. Compression makes the explosion
more powerful. (Part 2 of the figure)
3. When the piston reaches the top of its stroke, the spark plug emits a spark to ignite the gasoline. The
gasoline charge in the cylinder explodes, driving the piston down. (Part 3 of the figure)
4. Once the piston hits the bottom of its stroke, the exhaust valve opens and the exhaust leaves the
cylinder to go out the tailpipe. (Part 4 of the figure)
Now the engine is ready for the next cycle, so it intakes another charge of air and gas.
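
Purely as a summary device, the cycle can be written as a repeating four-state sequence in Python;
the wording of the events simply restates the steps above.

from itertools import cycle

OTTO_STROKES = [
    ("intake",      "intake valve open, piston moves down, air/fuel mixture drawn in"),
    ("compression", "both valves closed, piston moves up, mixture compressed"),
    ("combustion",  "spark plug fires, expanding gas drives the piston down"),
    ("exhaust",     "exhaust valve open, piston moves up, burnt gas pushed out the tailpipe"),
]

def run_single_cylinder(n_strokes):
    # Walk through the next n strokes of one cylinder and describe each one.
    for i, (name, action) in zip(range(n_strokes), cycle(OTTO_STROKES)):
        print(f"stroke {i + 1}: {name} - {action}")

run_single_cylinder(8)  # two complete Otto cycles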
Notice that the motion that comes out of an internal combustion engine is rotational, while the motion
produced by a potato cannon is linear (straight line). In an engine the linear motion of the pistons is
converted into rotational motion by the crankshaft. The rotational motion is nice because we plan to turn
(rotate) the car's wheels with it anyway.
Now let's look at all the parts that work together to make this happen, starting with the cylinders.

Basic Engine Parts


The core of the engine is the cylinder, with the piston moving up and down inside the cylinder. The engine
described above has one cylinder. That is typical of most lawn mowers, but most cars have more than
one cylinder (four, six and eight cylinders are common). In a multi-cylinder engine, the cylinders usually
are arranged in one of three ways: inline, V or flat (also known as horizontally opposed or boxer), as
shown in the following figures.
Different configurations have different advantages and disadvantages in terms of smoothness,
manufacturing cost and shape characteristics. These advantages and disadvantages make them more
suitable for certain vehicles.

Figure 3. V - The cylinders are arranged in two banks set at an angle to one another.

Figure 4. Flat - The cylinders are arranged in two banks on opposite sides of the engine.

Let's look at some key engine parts in more detail.


Spark plug

The spark plug supplies the spark that ignites the air/fuel mixture so that combustion can occur. The
spark must happen at just the right moment for things to work properly.
Valves
The intake and exhaust valves open at the proper time to let in air and fuel and to let out exhaust. Note
that both valves are closed during compression and combustion so that the combustion chamber is
sealed.
Piston
A piston is a cylindrical piece of metal that moves up and down inside the cylinder.
Piston rings
Piston rings provide a sliding seal between the outer edge of the piston and the inner edge of the cylinder.
The rings serve two purposes:

They prevent the fuel/air mixture and exhaust in the combustion chamber from leaking into the sump
during compression and combustion.

They keep oil in the sump from leaking into the combustion area, where it would be burned and lost.
Most cars that "burn oil" and have to have a quart added every 1,000 miles are burning it because the
engine is old and the rings no longer seal things properly.
Connecting rod
The connecting rod connects the piston to the crankshaft. It can rotate at both ends so that its angle can
change as the piston moves and the crankshaft rotates.
Crankshaft
The crankshaft turns the piston's up-and-down motion into circular motion just like a crank on a jack-in-the-box does.
Sump
The sump surrounds the crankshaft. It contains some amount of oil, which collects in the bottom of the
sump (the oil pan).

V-TEC
It turns out that there is a significant relationship between the way the lobes are ground on the camshaft
and the way the engine performs in different rpm (rotations per minute) ranges. To understand why this is
the case, imagine that we are running an engine extremely slowly -- at just 10 or 20 rpm, so it takes the
piston seconds to complete a cycle. It would be impossible to actually run a normal engine this slowly, but
imagine that we could. We would want to grind the camshaft so that, just as the piston starts moving
downward in the intake stroke, the intake valve would open. The intake valve would close right as the
piston bottoms out. Then the exhaust valve would open right as the piston bottoms out at the end of the
combustion stroke and would close as the piston completes the exhaust stroke. That would work great for
the engine as long as it ran at this very slow speed.
When you increase the rpm, however, this configuration for the camshaft does not work well. If the engine
is running at 4,000 rpm, the valves are opening and closing 2,000 times every minute, or thirty to forty
times every second. When the intake valve opens right at the top of the intake stroke, it turns out that the
piston has a lot of trouble getting the air moving into the cylinder in the short time available (a fraction of a

second). Therefore, at higher rpm ranges you want the intake valve to open prior to the intake stroke -- actually back in the exhaust stroke -- so that by the time the piston starts moving downward in the intake
stroke, the valve is open and air moves freely into the cylinder during the entire intake stroke. This is
something of a simplification, but you get the idea. For maximum engine performance at low engine
speeds, the valves need to open and close differently than they do at higher engine speeds. If you put in
a good low-speed camshaft, it hurts the engine's performance at high speeds, and if you put in a good
high-speed camshaft it hurts the engine's performance at low speeds (and in extreme cases can make it
very hard to start the engine!).
VTEC (which stands for Variable Valve Timing and Lift Electronic Control) is an electronic and
mechanical system in some Honda engines that allows the engine to effectively have multiple camshafts.
As the engine moves into different rpm ranges, the engine's computer can activate alternate lobes on the
camshaft and change the cam's timing. In this way, the engine gets the best features of low-speed and
high-speed camshafts in the same engine. Several engine manufacturers are experimenting with systems
that would allow infinite variability in valve timing. For example, imagine that each valve had
a solenoid on it that could open and close the valve under computer control rather than relying on a
camshaft. With this type of system, you would get maximum engine performance at every rpm range.
Something to look forward to in the future...
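
In spirit, the VTEC switch is just a threshold decision on engine speed. The Python sketch below
invents the profile values and the crossover rpm purely for illustration; Honda's actual figures and
switching logic differ.

# Hypothetical cam profiles: how early the intake valve opens (degrees before the
# intake stroke begins) and how far it lifts. These numbers are illustrative only.
LOW_SPEED_CAM = {"intake_advance_deg": 5, "valve_lift_mm": 8.0}
HIGH_SPEED_CAM = {"intake_advance_deg": 35, "valve_lift_mm": 10.5}

def select_cam_profile(engine_rpm, crossover_rpm=4500):
    # Below the crossover, the mild profile gives smooth low-speed running;
    # above it, the aggressive profile keeps the cylinder filling at high rpm.
    return HIGH_SPEED_CAM if engine_rpm >= crossover_rpm else LOW_SPEED_CAM

for rpm in (1200, 3000, 5500, 7000):
    print(rpm, select_cam_profile(rpm))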

1. Reciprocating. (Of a part of a machine) move backwards and forwards in a straight line:
reciprocating blade
2. Combustion. The process of burning something: the combustion of fossil fuels
3. Crankshaft. A shaft driven by a crank.
4. Stroke. In an engine, one full movement of the piston along the cylinder in either direction;
more generally, an act of hitting or striking someone or something; a blow: he received three strokes
of the cane
5. Cylindrical. A solid geometrical figure with straight parallel sides and a circular or oval cross
section.
6. Lobe. A roundish and flattish projecting or hanging part of something, typically one of two or
more such parts divided by a fissure: the leaf has a broad central lobe; he pinched the lobe of
his right ear
7. Prior. Existing or coming before in time, order, or importance: he has a
prior engagement this evening
8. Intake. A place or structure through which something is taken in, e.g. water into a channel or
pipe from a river, fuel or air into an engine, etc.
9. Variable. Not consistent or having a fixed pattern; liable to change: the quality
of hospital food is highly variable
10. Manufacturer. A person or company that makes goods for sale: the manufacturers supply
the goods to the distribution center
11. Infinite. Limitless or endless in space, extent, or size; impossible to measure or calculate: the
infinite mercy of God; the infinite number of stars in the universe
12. Solenoid. A cylindrical coil of wire acting as a magnet when carrying electric current.

Ginvydas Ruinskas RTE-3

How Four-Wheel Drive Works


There are almost as many different types of four-wheel-drive systems as there are four-wheel-drive
vehicles. It seems that every manufacturer has several different schemes for providing power to all
of the wheels. The language used by the different carmakers can sometimes be a little confusing, so
before we get started explaining how they work, let's clear up some terminology:

Four-wheel drive - Usually, when carmakers say that a car has four-wheel drive, they are
referring to a part-time system. For reasons we'll explore later in this article, these systems
are meant only for use in low-traction conditions, such as off-road or on snow or ice.
All-wheel drive - These systems are sometimes called full-time four-wheel drive. All-wheel-drive systems are designed to function on all types of surfaces, both on- and off-road,
and most of them cannot be switched off.

Part-time and full-time four-wheel-drive systems can be evaluated using the same criteria. The best
system will send exactly the right amount of torque to each wheel, which is the maximum torque
that won't cause that tire to slip.
In this article, we'll explain the fundamentals of four-wheel drive, starting with some background on
traction, and look at the components that make up a four-wheel-drive system. Then we'll take a look
at a couple of different systems, including the one found on the Hummer, manufactured for GM by
AM General.
We need to know a little about torque, traction and wheel slip before we can understand the
different four-wheel-drive systems found on cars.

Torque, Traction and Wheel Slip


Torque is the twisting force that the engine produces. The torque from the engine is what moves
your car. The various gears in the transmission and differential multiply the torque and split it up
between the wheels. More torque can be sent to the wheels in first gear than in fifth gear because
first gear has a larger gear-ratio by which to multiply the torque.
The bar graph below indicates the amount of torque that the engine is producing. The mark on the
graph indicates the amount of torque that will cause wheel slip. The car that makes a good start
never exceeds this torque, so the tires don't slip; the car that makes a bad start exceeds this torque,
so the tires slip. As soon as they start to slip, the torque drops down to almost zero.
The interesting thing about torque is that in low-traction situations, the maximum amount of torque
that can be created is determined by the amount of traction, not by the engine. Even if you have a
NASCAR engine in your car, if the tires won't stick to the ground there is simply no way to harness
that power.
For the sake of this article, we'll define traction as the maximum amount of force the tire can apply
against the ground (or that the ground can apply against the tire -- they're the same thing). These are
the factors that affect traction:

The weight on the tire -- The more weight on a tire, the more traction it has. Weight can shift as a
car drives. For instance, when a car makes a turn, weight shifts to the outside wheels. When it
accelerates, weight shifts to the rear wheels. (See How Brakes Work for more details.)
The coefficient of friction -- This factor relates the amount of friction force between two surfaces
to the force holding the two surfaces together. In our case, it relates the amount of traction between
the tires and the road to the weight resting on each tire. The coefficient of friction is mostly a
function of the kind of tires on the vehicle and the type of surface the vehicle is driving on. For
instance, a NASCAR tire has a very high coefficient of friction when it is driving on a dry, concrete
track. That is one of the reasons why NASCAR race cars can corner at such high speeds. The
coefficient of friction for that same tire in mud would be almost zero. By contrast, huge, knobby,
off-road tires wouldn't have as high a coefficient of friction on a dry track, but in the mud, their
coefficient of friction is extremely high.
Wheel slip -- There are two kinds of contact that tires can make with the road: static and dynamic.

static contact -- The tire and the road (or ground) are not slipping relative to each other. The
coefficient of friction for static contact is higher than for dynamic contact, so static contact
provides better traction.
dynamic contact -- The tire is slipping relative to the road. The coefficient of friction for
dynamic contact is lower, so you have less traction.

Quite simply, wheel slip occurs when the force applied to a tire exceeds the traction available to that
tire. Force is applied to the tire in two ways:

Longitudinally -- Longitudinal force comes from the torque applied to the tire by the
engine or by the brakes. It tends to either accelerate or decelerate the car.
Laterally -- Lateral force is created when the car drives around a curve. It takes force to
make a car change direction -- ultimately, the tires and the ground provide lateral force.

Let's say you have a fairly powerful rear-wheel-drive car, and you are driving around a curve on a
wet road. Your tires have plenty of traction to apply the lateral force needed to keep your car on the
road as it goes around the curve. Let's say you floor the gas pedal in the middle of the turn (don't
do this!) -- your engine sends a lot more torque to the wheels, producing a large amount of
longitudinal force. If you add the longitudinal force (produced by the engine) and the lateral force
created in the turn, and the sum exceeds the traction available, you just created wheel slip.
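
Written out in Python, the slip condition is just a comparison between the force demanded of the
tire and the traction it can supply (coefficient of friction times the weight on the tire). The friction
value and forces below are rough illustrative numbers, and the two force components are combined
as a vector sum (the "friction circle"), a common refinement of the simple addition described above.

import math

def traction_limit_n(weight_on_tire_n, friction_coefficient):
    # Maximum force this tire can transmit to the road before it slips.
    return friction_coefficient * weight_on_tire_n

def wheel_will_slip(longitudinal_force_n, lateral_force_n, weight_on_tire_n, friction_coefficient):
    # Combine the accelerating/braking force and the cornering force and compare
    # the result with the traction available.
    demanded_force_n = math.hypot(longitudinal_force_n, lateral_force_n)
    return demanded_force_n > traction_limit_n(weight_on_tire_n, friction_coefficient)

# One tire carrying about 350 kg on a wet road (friction coefficient roughly 0.5),
# while the driver floors the gas pedal in the middle of the turn.
weight_n = 350 * 9.81
print(wheel_will_slip(longitudinal_force_n=1600, lateral_force_n=900,
                      weight_on_tire_n=weight_n, friction_coefficient=0.5))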
Most people don't even come close to exceeding the available traction on dry pavement, or even on
flat, wet pavement. Four-wheel and all-wheel-drive systems are most useful in low-traction
situations, such as in snow and on slippery hills.
The benefit of four-wheel drive is easy to understand: If you are driving four wheels instead of two,
you've got the potential to double the amount of longitudinal force (the force that makes you go)
that the tires apply to the ground.
This can help in a variety of situations. For instance:

In snow -- It takes a lot of force to push a car through the snow. The amount of force
available is limited by the available traction. Most two-wheel-drive cars can't move if there

is more than a few inches of snow on the road, because in the snow, each tire has only a
small amount of traction. A four-wheel-drive car can utilize the traction of all four tires.
Off road -- In off-road conditions, it is fairly common for at least one set of tires to be in a
low-traction situation, such as when crossing a stream or mud puddle. With four-wheel
drive, the other set of tires still has traction, so they can pull you out.
Climbing slippery hills -- This task requires a lot of traction. A four-wheel-drive car can
utilize the traction of all four tires to pull the car up the hill.

There are also some situations in which four-wheel drive provides no advantage over two-wheel
drive. Most notably, four-wheel-drive systems won't help you stop on slippery surfaces. It's all up to
the brakes and the anti-lock braking system (ABS).
Now let's take a look at the parts that make up a four-wheel-drive system.

Components of a Four-wheel-drive System


The main parts of any four-wheel-drive system are the two differentials (front and rear) and the
transfer case. In addition, part-time systems have locking hubs, and both types of systems may have
advanced electronics that help them make even better use of the available traction.
Differentials
A car has two differentials, one located between the two front wheels and one
between the two rear wheels. They send the torque from the driveshaft or transmission to the drive
wheels. They also allow the left and right wheels to spin at different speeds when you go around a
turn.
When you go around a turn, the inside wheels follow a different path than the outside wheels, and
the front wheels follow a different path than the rear wheels, so each of the wheels is spinning at a
different speed. The differentials enable the speed difference between the inside and outside wheels.
(In all-wheel drive, the speed difference between the front and rear wheels is handled by the transfer
case -- we'll discuss this next.)
There are several different kinds of differentials used in cars and trucks. The types of differentials
used can have a significant effect on how well the vehicle utilizes available traction. See How
Differentials Work for more details.

A typical part-time four-wheel-drive transfer case: The planetary gear reduction can be engaged to
provide the low-range gearing.
Transfer Case

This is the device that splits the power between the front and rear axles on a four-wheel-drive car.
Back to our corner-turning example: While the differentials handle the speed difference between the
inside and outside wheels, the transfer case in an all-wheel-drive system contains a device that
allows for a speed difference between the front and rear wheels. This could be a viscous coupling,
center differential or other type of gearset. These devices allow an all-wheel-drive system to
function properly on any surface.
The transfer case on a part-time four-wheel-drive system locks the front-axle driveshaft to the rear-axle driveshaft, so the wheels are forced to spin at the same speed. This requires that the tires slip
when the car goes around a turn. Part-time systems like this should only be used in low-traction
situations in which it is relatively easy for the tires to slip. On dry concrete, it is not easy for the
tires to slip, so the four-wheel drive should be disengaged in order to avoid jerky turns and extra
wear on the tires and drivetrain.
Some transfer cases, more commonly those in part-time systems, also contain an additional set of
gears that give the vehicle a low range. This extra gear ratio gives the vehicle extra torque and a
super-slow output speed. In first gear in low range, the vehicle might have a top speed of about 5
mph (8 kph), but incredible torque is produced at the wheels. This allows drivers to slowly and
smoothly creep up very steep hills.
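
A quick back-of-the-envelope Python illustration of what the extra reduction does to wheel torque
and road speed; the gear ratios, tire radius and engine figures are all invented for the example, and
driveline losses are ignored.

import math

def wheel_torque_and_speed(engine_torque_nm, engine_rpm, gearbox_ratio,
                           final_drive_ratio, transfer_case_ratio=1.0, tire_radius_m=0.38):
    # Every ratio in the driveline multiplies torque and divides rotational speed.
    overall_ratio = gearbox_ratio * final_drive_ratio * transfer_case_ratio
    wheel_torque_nm = engine_torque_nm * overall_ratio
    wheel_rpm = engine_rpm / overall_ratio
    road_speed_kph = wheel_rpm * 2 * math.pi * tire_radius_m * 60 / 1000
    return round(wheel_torque_nm), round(road_speed_kph, 1)

# First gear in high range versus first gear with a 2.7:1 low-range reduction.
print(wheel_torque_and_speed(300, 2000, gearbox_ratio=4.0, final_drive_ratio=3.7))
print(wheel_torque_and_speed(300, 2000, gearbox_ratio=4.0, final_drive_ratio=3.7,
                             transfer_case_ratio=2.7))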
Locking Hubs

Each wheel in a car is bolted to a hub. Part-time four-wheel-drive trucks usually have locking hubs
on the front wheels. When four-wheel drive is not engaged, the locking hubs are used to disconnect
the front wheels from the front differential, half-shafts (the shafts that connect the differential to the
hub) and driveshaft. This allows the differential, half-shafts and driveshaft to stop spinning when
the car is in two-wheel drive, saving wear and tear on those parts and improving fuel-economy.
Manual locking hubs used to be quite common. To engage four-wheel drive, the driver actually had
to get out of the truck and turn a knob on the front wheels until the hubs locked. Newer systems
have automatic locking hubs that engage when the driver switches into four-wheel drive. This type
of system can usually be engaged while the vehicle is moving.
Whether manual or automatic, these systems generally use a sliding collar that locks the front half-shafts to the hub.
Advanced Electronics
On many modern four-wheel and all-wheel-drive vehicles, advanced electronics play a key role. Some cars
use the ABS system to selectively apply the brakes to wheels that start to skid -- this is called brake-traction
control.

Others have sophisticated, electronically-controlled clutches that can better control the torque
transfer between wheels. We'll take a look at one such advanced system later in the article.

First, let's see how the most basic part-time four-wheel-drive system works.

Four-wheel Drive Differential


The type of part-time system typically found on four-wheel-drive pickups and older SUVs works
like this: The vehicle is usually rear-wheel drive. The transmission hooks up directly to a transfer
case. From there, one driveshaft turns the front axle, and another turns the rear axle.
When four-wheel drive is engaged, the transfer case locks the front driveshaft to the rear driveshaft,
so each axle receives half of the torque coming from the engine. At the same time, the front hubs
lock.
The front and rear axles each have an open differential. Although this system provides much better
traction than a two-wheel-drive vehicle, it has two main drawbacks. We've already discussed one of
them: It cannot be used on-road because of the locked transfer case.
The second problem comes from the type of differentials used: An open differential splits the torque
evenly between each of the two wheels it is connected to (see How Differentials Work for more
details). If one of those two wheels comes off the ground, or is on a very slippery surface, the torque
applied to that wheel drops to zero. Because the torque is split evenly, this means that the other
wheel also receives zero torque. So even if the other wheel has plenty of traction, no torque is
transferred to it. The animation below shows how a system like this reacts under various conditions.
Animation of a basic system encountering various combinations of terrain. This vehicle gets stuck
when two of its wheels are on the ice.
Previously, we said that the best four-wheel-drive system will send exactly the right amount of
torque to each wheel, the right amount being the maximum torque that won't cause that tire to slip.
This system rates fairly poorly by that criterion. It sends to both wheels the amount of torque that
won't cause the tire with the least traction to slip.
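
The limitation can be restated in one line of Python: with an open differential, each wheel on the
axle only ever receives as much torque as the lowest-traction wheel can use. The traction limits in
the toy comparison below are invented, and the locked-differential model is deliberately simplified.

def axle_torque_open(input_torque_nm, limit_left_nm, limit_right_nm):
    # An open differential splits torque 50/50, so both wheels are capped by
    # whichever wheel would slip first.
    per_wheel = min(input_torque_nm / 2, limit_left_nm, limit_right_nm)
    return per_wheel, per_wheel

def axle_torque_locked(input_torque_nm, limit_left_nm, limit_right_nm):
    # A locked differential forces both wheels to turn together, so torque can
    # flow to whichever wheel has grip, up to the sum of the two traction limits.
    usable = min(input_torque_nm, limit_left_nm + limit_right_nm)
    left = min(limit_left_nm, usable)
    return left, usable - left

# 1000 Nm into the axle, left wheel on ice (50 Nm limit), right wheel on tarmac (800 Nm).
print(axle_torque_open(1000, 50, 800))    # -> (50, 50): the vehicle barely moves
print(axle_torque_locked(1000, 50, 800))  # -> (50, 800): the gripping wheel still pulls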
There are some ways to make improvements to a system like this. Replacing the open differential
with a limited-slip rear differential is one of the most common ones -- this makes sure that both rear
wheels are able to apply some torque no matter what. Another option is a locking differential, which
locks the rear wheels together to ensure that each one has access to all of the torque coming into the
axle, even if one wheel is off the ground -- this improves performance in off-road conditions.

Unknown words explanations


Harness - A set of straps and fittings by which a horse or other draft animal is fastened to a cart,
plow, etc., and is controlled by its driver. (Pakinktai)
Friction - The resistance that one surface or object encounters when moving over another. (Trintis)
Knob - A rounded lump or ball, especially at the end or on the surface of something. (Rankena)
Traction - The action of drawing or pulling a thing over a surface, especially a road or track.
(Trauka)
Longitudinal - Running lengthwise rather than across. (Išilginis)
Lateral - Of, at, toward, or from the side or sides. (Šoninis)
Pavement - Any paved area or surface. (Grindinys)
Locking hubs - also known as free wheeling hubs are an accessory fitted to many four-wheel
drive vehicles, allowing the front wheels to be manually disconnected from the front half shafts.
(Fiksavimo mazgai)
Viscous coupling - is a mechanical device which transfers torque and rotation by the medium of a
viscous fluid. (Klampi mova)
Sophisticated - (Of a machine, system, or technique) developed to a high degree of complexity.
(Sudėtingas)
Terrain - A stretch of land, especially with regard to its physical features. (Vietovė)
Evaluate - Form an idea of the amount, number, or value of; assess. (Įvertinti)

Edvinas Varpuianskis RTE-3gr. 1st Homework


Conti wants to prioritise safety in the connected car
Anonymous data could reduce accidents
Continental's chief executive, Dr Elmar Degenhart, wants data from networked vehicles to be
used to improve safety, not as a way to harvest data. He said that information taken from
vehicles anonymously would be sufficient to reduce road accidents.
The supplier is developing technologies in collaboration with firms such as IBM and Cisco
that connect vehicles to one another and to the infrastructure using high-speed cellular
networks. The data transmitted can warn other drivers of obstacles in the road, allowing
them to avoid them safely, or of the movements of other vehicles, allowing vehicles to brake if
necessary.
Degenhart said: "If we continue to network cars and make them part of the internet, people
and their safety must be the focus of attention and not merely their data. We are convinced
that safety and accident prevention are compatible with data protection and security."
As demand for connectivity grows, driven by developments in the consumer electronics
industry, so too does the need to protect the vehicle from data mining, whether through
legitimate means or through malicious attacks on vehicles by software viruses.
Degenhart's view is that safety systems don't require specific details about the driver to be
successful, and Conti's research has shown that many networked services don't require the
identity of the motorist to be known. Anonymous information such as position, time, and
incident is sufficient to enable vehicles to inform one another about dangers on the road.
And any personal data that may be required should be clearly defined and approved.
"Drivers must always be able to choose whether they want to pass on data, and, if so, what
data. The recipe for success in the case of the networked vehicle is confidence built on
driver dialog and procedural transparency," said Degenhart.
It's a view echoed at Volkswagen Group. The OEM's chairman, Dr Martin Winterkorn, has
warned against the risk of data protection infringements and wants the industry to work
together to ensure that consumer safety and security are not compromised.
VW makes the connected car more intelligent
Research laboratory grows functionality

Volkswagen is developing more intuitive human-machine interfaces and infotainment functions.
As vehicle connectivity grows, so too does the need to adapt systems to make them more
beneficial for vehicle occupants, rather than a simple distraction. VW's electronics research laboratory (ERL) in North America is developing intuitive forms of communications
between driver, car and the external environment.
Researchers are focusing on three elements that they see as key to improving the connected
car: the car, mobile devices and the cloud.
VW's ERL deputy director, Chuhee Lee, said: "The end-to-end user experience is the key success factor to providing more connected intelligence services to our customers. This means being in touch with customers in and out of the car using cloud and connected devices, and ultimately designing a vehicle capable of learning, predicting, and adapting to drivers' needs and wishes."
As demand grows, more of VW's vehicles will offer technologies that integrate smart phone
applications and allow users to connect navigation with real-time map images and updates.
The navigation systems would also mimic the expertise of a driving companion, familiar
with both the driver and destination.
ERL researchers are also developing vehicle intelligence technologies for Audi that enable
cars to identify the preferences and individual needs of drivers. It could allow vehicles to
connect to the cloud and find the driver's location, how it affects the driver's mobility, and will provide assistance for the driver's specific needs, such as routing the driver to an
available parking space.
Car-to-car communications give drivers time to stop
Ford develops early warning system

Ford has developed and tested an advanced driver assistance system which gives advance warning when vehicles on the road ahead are braking hard.
Many rear-end collisions are caused when drivers brake too late, too little, or not at all.
Giving them more time to react could reduce the number of accidents or mitigate the
severity of injuries. The technology is one of several conceived for the SimTD intelligent
transport project in Germany.
Ford's chief technical officer, Paul Mascarenas, said: "Car-to-car and car-to-infrastructure communications represent one of the next major advancements in vehicle safety. Ford is committed to further real-world testing here and around the world with the goal of implementation in the foreseeable future."
The system triggers a warning signal if a vehicle ahead initiates emergency braking. The
signal is transmitted to vehicles nearby to give advance warning of the braking event by
illuminating a warning light on the dashboard. The function is particularly effective if the
vehicle which is braking is obscured by the traffic in front, or if it is out of sight altogether
just around the next bend, for instance.
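To make the described behaviour concrete, here is a minimal Python sketch, under an assumed message format and an assumed 300 m relevance radius rather than Ford's or SimTD's actual protocol, of how a received emergency-braking message might be filtered before the dashboard warning light is lit:

# Minimal sketch of a car-to-car emergency-braking warning receiver.
# Illustrative only: the message fields and thresholds are assumptions.
import math
from dataclasses import dataclass

@dataclass
class BrakeEvent:
    lat: float           # position of the hard-braking vehicle
    lon: float
    heading_deg: float   # its direction of travel

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance using an equirectangular projection."""
    r = 6371000.0
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return r * math.hypot(x, y)

def should_warn(own_lat, own_lon, own_heading_deg, event, radius_m=300.0):
    """Light the dashboard warning if the braking car is close and heading the same way."""
    close = distance_m(own_lat, own_lon, event.lat, event.lon) <= radius_m
    same_direction = abs((own_heading_deg - event.heading_deg + 180) % 360 - 180) < 45
    return close and same_direction

event = BrakeEvent(lat=50.1115, lon=8.6830, heading_deg=90.0)
print(should_warn(50.1110, 8.6800, 92.0, event))   # True -> warning light on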
The system was developed by Ford's European research centre in Aachen. The four-year,
€53 million SimTD project took place on public roads in and around Frankfurt and was a
collaboration between OEMs, Tier Ones, government, academia and infrastructure and
utilities firms.

All information copied from http://ae-plus.com/technology (Automotive Engineer magazine)

sufficient - pakankamas, užtenkamas;
demand - (pa)reikalavimas;
malicious - piktas, pagiežingas;
echo - aidas, skardus atgarsis, atbalsis;
provide - ap(si)rūpinti; parūpinti; tiekti;
reduce - (su)mažinti; (su)silpninti;
trigger - tech. spragtukas; strektė; kar. gaidukas;

Aurimas Vilkas RTE-3

Audi A4 and the concept of e-quattro

These are our latest spy shots of a prototype for the next-generation B9 Audi
A4 sedan, which is expected to be revealed this summer, as a 2016 model. It
will be followed about a year later by its Avant wagon and Allroad variants, and
for the first time offer some alternative drivetrains. Audi was originally due to
launch its new A4 at the end of 2014 but a late change in its design meant the
vehicle's release date needed to be pushed back.
The 2016 A4 will be Audis second model to ride on the next-generation version
of the MLB (modular longitudinal) platform, which will eventually underpin most
of the automaker's models from the A4 up. Previewed in Audi's Sport quattro
concept, the latest MLB platform, nicknamed the MLB Evo, uses a combination
of high-strength steel, aluminum and even some composites to help get weight
down. Look for weight savings of around 200 pounds compared to the current
A4. The first vehicle to adopt the new platform is the 2016 Q7.
Front-wheel drive will remain standard for the 2016 A4 while quattro all-wheel
drive will continue to be offered as an option. We may also see four-wheel
steering on the new A4. The front suspension should comprise five links
per wheel, while the rear suspension should be based on the self-tracking
trapezoidal link principle coveted by the brand with the four rings.
One of the key benefits of the latest MLB Evo platform is that it has been
designed to be compatible with alternative drivetrains. We should get a plug-in
hybrid version of the 2016 A4 while overseas Audi will likely offer a natural gas
version and maybe even an all-electric version at some point in the model's life
cycle.
The base engine, at least here in the U.S., should continue to be a 2.0-liter TSI,
with a 3.0-liter TSI offered for buyers seeking greater performance. Also in the
lineup should be a frugal 2.0-liter TDI diesel option as well as a high-performance RS 4 variant. It is not clear at this point if the latter will be available
in both sedan and wagon forms (the current RS 4 is only available as an Avant
wagon). Transmissions for the new A4 are likely to include six-speed manual
and seven-speed dual-clutch units.

Audi's new A4, due out later this year, will be the automaker's first road-going
vehicle to feature a hybrid-based all-wheel-drive system called the e-quattro.
The e-quattro system has been used by Audi in its Le Mans prototype racers for
the past couple of years, where it's known as e-tron quattro, and now it's coming
to the production world.
It is essentially a through-the-road hybrid all-wheel-drive system where an
internal combustion engine drives one axle and an electric drive system is used
for the other axle, thus creating an all-wheel-drive system without the need for
any connecting driveshafts. Several automakers already offer such technology
including BMW, Peugeot, Porsche and Volvo.
According to CAR, the e-quattro system will appear in a high-end A4 variant
initially. Two electric motors will feature, one integrated with the engine and used
to spin the front wheels, and another used to spin the rear wheels. This way the
car will still have all-wheel drive even in all-electric mode. We saw a preview of
this with 2011's A5 e-tron quattro concept car.

2011 Audi A5 e-tron quattro hybrid prototype

As the three power sources will be able to work independently, we should see
multiple driving modes including engine only (front-wheel drive), electric only
(two-wheel drive or all-wheel drive) and hybrid (all-wheel drive). The engine is
said to deliver 292 horsepower, with the electric motor at the front delivering 54
hp and the motor at the rear delivering 116 hp. Peak output of the system is said
to be 408 hp. Less powerful versions of the e-quattro drivetrain will also be
developed.
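As a rough illustration of how the quoted figures combine, assuming the numbers reported above and a simplified mode table that is not Audi's specification, note that the 408 hp system peak is below the 292 + 54 + 116 = 462 hp arithmetic sum, since the individual peaks are not all available at the same time:

# Illustrative sketch only: the mode table and the peak cap are assumptions
# built around the figures quoted in the article, not Audi's specification.
ENGINE_HP, FRONT_MOTOR_HP, REAR_MOTOR_HP = 292, 54, 116
SYSTEM_PEAK_HP = 408   # quoted system peak, below the 462 hp arithmetic sum

MODES = {
    "engine_only":   (True,  False, False),   # front-wheel drive
    "electric_2wd":  (False, False, True),    # rear motor only (assumed)
    "electric_awd":  (False, True,  True),    # both motors, engine off
    "hybrid_awd":    (True,  True,  True),    # all power sources active
}

def mode_output_hp(mode):
    engine, front, rear = MODES[mode]
    total = engine * ENGINE_HP + front * FRONT_MOTOR_HP + rear * REAR_MOTOR_HP
    return min(total, SYSTEM_PEAK_HP)          # peak outputs do not simply add up

for mode in MODES:
    print(mode, mode_output_hp(mode), "hp")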
A lithium-ion battery will be used to power the electric motors. Owners will be
able to charge the battery at home using a household power outlet, or on the
road using regenerative forces. For charging at home, Audi is said to also be
planning a wireless charging system that relies on inductive coils. The
technology will be offered initially with a new TT-based crossover dubbed the
TTQ, which was previewed by 2014's TT Offroad concept.
Key words:
1. Frugal - simple and plain and costing little.
2. Underpin - to give support or substance.
3. Covet - to wish for, especially eagerly.
4. Trapezoidal link - a trapezoidal-shaped suspension link.
5. e-quattro - a hybrid-based all-wheel-drive system.
6. Driveshaft - a shaft for transmitting torque from a power source to machinery.
7. Regenerative forces - forces, such as braking, that are recovered as electric power.
8. Inductive coils - a type of electrical transformer used to produce high voltage.
9. Drivetrain - the system in a motor vehicle which connects the transmission to the drive axles.
10. Clutch - a mechanism for connecting and disconnecting an engine and the transmission system in a vehicle.

Arvydas Virbalas RTE-3

Innovative applications of cars connectivity network - way to intelligent vehicle


Abstract: The presented article focuses on the characteristics of possibilities for using ICT tools in automotive traffic. Selected potential uses of networks connected to automotive integration in the near future are specified. There is also considerable innovation in the field of Internet-enabled in-car systems. In this contribution we want to illustrate the effects of Internet networking in automobiles through examples of applications. The goal is to present a conceptual model of a vehicle connected to external interfaces. The subject of the article covers the tendencies in the development of specific applications in the automotive sector. The objective is increased public perception and customer acceptance of car network systems, which are suitable for multiple application domains: external connectivity, networking, security, diagnosis, integrated safety management, etc.
Introduction
The automotive industry is associated with innovation progress, adopting new technologies and responding to constantly emerging challenges. Innovations are key differentiating features in the marketplace of automotive brands. This paper analyses the driving forces for R&D in the area of car networking and the conceptual aspects of automotive connectivity applications. Advances in Information and Communication Technologies allow this idea to be implemented in the near future.
Innovative automotive applications based on wireless
The automotive industry is one of the most innovative sectors and a quality leader. Innovations must prove their worth in practical applications, when a solution is integrated into existing systems and infrastructures. Future intelligent systems installed in cars will be able to take control from the driver and drive the car autonomously. For example, the research paper (Sol, E.T. et al., 2008) specifies five generations of innovative intelligent in-vehicle systems:
a) First generation - the exchange of information between the vehicle and road network (navigation
systems, speed alert, reservation and payment systems etc.).
b) Second generation - comprehensive exchange of information; there are systems that support both
the driver and traffic manager: vehicles transmit and receive information about hazards and
congestion via vehicle-to-vehicle communication.
c) The third generation is equipped with systems for communication between the cars (C2C) and
also between the vehicle and transport infrastructure (C2I).
d) The fourth generation is focused on the urban environment; the car will have, e.g., the option of automatic control at low speeds and GPS more precise than 1 meter.
e) The fifth generation allows automatic control at high speeds through the network of electronic
systems.
Advanced automotive functions are increasingly dependent on software and electronics systems.
Consumers' increasing web sophistication is driving the growing interest in this.
The sectorial publication (Siemens, 2008) declares that software and electronics now account for
70% to 90% of all automotive innovations. Additional challenges for innovation are:
The rapidly evolving field of entertainment electronics is having a significant impact on
innovation cycles in automotive electronics.
In-car Internet systems: Secure, reliable communication links to car.
Eco telematics: Real-time provision of navigation data and information on the car-external
environment such as gradients, curves and road type for the use of vehicle control functions.

Telematics: Development and operation of special telematic devices and applications, including call-center services for customer support (remote diagnostics and emergency).
Car2X communication: Information interchange between vehicles on the move or with
traffic signs or displays (for example, for providing warnings about traffic jams).
These trends give rise to a promising and attractive market segment, as the published scenarios (KPMG, 2012) for future development in automotive declare.

Best-in-class car producers (OEMs, original equipment manufacturers) and IT companies are pursuing development in cooperation to use the car's potential as a gateway to the internet, as an access point to a connected world, and not just in regard to entertainment. The new technology, intelligent plug-in solutions and interfaces, enhances safety by helping vehicles communicate with the external environment (e.g. satellite navigation: car-to-car, traffic systems: car-to-infrastructure, service station: car-to-OEM, etc.).
To keep a car equipped with the latest technology throughout the vehicle's life cycle, a modular approach can be adopted, enabling new connectivity developments to be continually integrated within the existing chassis.
Connection possibilities of vehicle interfaces
Automobile manufacturers are incorporating wireless communications into both economy and
luxury vehicle models as safety regulations and consumer expectations of anytime anywhere
connectivity increase demand for vehicle assistance services, location-aware applications, and
infotainment. To compete successfully, manufacturers not only need reliable wireless connectivity
to cellular networks but also wireless technology that minimizes total cost of ownership, from
production to customer service. Car manufacturers need a quality driven technology partners they
can count on from concept to end-of-life.
Standard wireless automotive applications in a car's outfit are, e.g. (AUDI, 2012):
eCall
Driver / Roadside assistance
Stolen vehicle tracking and recovery
Navigation
Remote vehicle immobilization / Door lock controls
Remote diagnostics
In-vehicle Internet and entertainment.
The vehicle of the future will be a communications wonder. As another node on the Internet, it will
connect with other vehicles (V2V connectivity), the transportation infrastructure (V2I) and to
homes, businesses and other sources (V2x).

Cooperative vehicle-infrastructure systems are based on continuous communication and network connection management for mobile local wireless LAN and infra-red, and wide-area cellular communication (T-Systems, 2008). The schematic model of networking possibilities in a car is shown in Fig. 2.
The connected vehicle will enhance the driving
experience in three specific areas:
Safety: Connectivity will give the driver access to
extensive information about congestion, accidents, road conditions, work zones, weather changes
and hazards. It will enable vehicles to communicate with others in proximity, warning of such
things as unsafe lane encroachment or impending collision.
Driver assistance: The connected vehicle will be able to optimize routes based on fuel economy,
real-time changes in traffic conditions and minimal tolling. Consumers can expect their vehicles to
offer limited self-driving capabilities, such as autonomous parking, depending upon rate of adoption
and regional regulatory acceptance. The vehicle will become an extension of lifestyles with
entertainment solutions (streaming audio, video and communications) that allow seamless transition
between mobility, office and home.
Service: The connected vehicle will be able to use real-time remote diagnostics and prognostics to
assess operating conditions and effect some degree of self-repair. Software and other service
patches to electronic systems will be automatically delivered to the vehicle, keeping it updated with
little consumer involvement.
IT quality and reliability, as well as the global availability of IT applications, are increasing. The limiting factor for global establishment in vehicles, however, is permanent real-time connectivity.
Challenges of vehicle connecting functionality development
Automotive-applicable solutions are designed by engineers experienced in both the automotive and wireless industries, with in-depth knowledge of the market's challenges, and can
ensure customer satisfaction and low maintenance costs with proven wireless technologies designed
for easy integration.
The next challenges in the field of connecting vehicles to wireless services in the automotive industry are explained and summarized in Table 1.


The aim is to develop a reliable and secure exchange of information through standardization of
open and semi-open application platforms.
The main requirements for specialized suppliers that develop such systems for automotive are
(Freescale Semiconductor, 2006):
High quality, certified facility;
Reduce cost with compact modules designed to support high volume production;
Ensure durability that lasts the lifetime of the vehicle with ruggedized designs;

Simplify integration with programmable features and an open, comprehensive development environment;
Guaranteed connectivity to major wireless network operators;
Reduce recall campaigns, speed up deployment, and allow over-the-air upgrades with
powerful device and service management capabilities.
R&D engineers and specialists have experience in both the automotive and telecommunications
sectors and understand that wireless functionality involves more than just adding a modem to the
vehicle.
The management of new systems in terms of cars networking involves partnerships and
collaborations with various interested players: telecom operators, software publishers, digital
services providers, manufacturers and suppliers in automotive etc.
Model of cars networking
The vehicles of the near future will be intelligent and new technologies will provide for greater
assistance in navigation, enhanced driver information about the vehicle, its environment and vehicle
connectivity. Connectivity and lifestyle trends will change the way cars are used. This experience
will be a key differentiator in attracting consumers, especially in the areas of driver assistance,
safety and service.
Key systems integrated (Sierra Wireless, 2010):
Dynamic route guidance and navigation;
Motorist information on incidents, special events, weather and work zones;
Data downloads (entertainment, media, home network, personal preferences);
Recovery of stolen vehicles;
Electronic payments including toll, drive-through, parking, road pricing service;
Remote vehicle diagnostics; remote vehicle prognostics and self-healing;
Transfer of vehicle data based on warranty;
Customer relations management including vehicle use profiles and dealer use data;
Driving-based behavior service / scheduling / alerts / notification.
Solutions of wireless connectivity by intelligent embedded modules are presented in Fig. 3. The collected data is put together as a unified environment model that is then interpreted by the computer.


The model includes the following networking possibilities:


Car-to-car: increased safety as vehicles can communicate with each other and pass on warnings about dangerous situations such as wet roads;

Car-to-OEM / services: technical problems could be diagnosed and even repaired remotely
(e.g. for software updates);
Car-to-enterprise: offering new business opportunities to virtually all existing and future
automotive players, from gas station or car park operators to new web services;
Car-to-x-connectivity: communication is possible with any internet-capable device;
Car-to-infrastructure: traffic jams and red lights could be identified before they are reached.

Sensors, software and wireless communications will enable the vehicle to detect road conditions,
recognize other vehicles and pedestrians near its space and sense environmental changes. The
vehicle will then have the capability to either self-correct or communicate information back to the
driver. The car will be in permanent dialogue with other vehicles and the traffic infrastructure; the
car will send corresponding warnings directly to other road users who are potentially at risk.
Information exchanged in this way will help to avoid traffic jams, prevent accidents and find a
parking space at any time. All parking spaces will notify the control centre of whether they are
occupied or free. Connectivity will allow vehicles to respond to developing traffic situations, find
alternate routes and anticipate impending collisions. Connectivity will also allow sensors in the
infrastructure to regulate traffic according to conditions. Emergency vehicles may command the
infrastructure to stop or move all traffic in its path - cars may be stopped or moved to avoid an
intersection violation. Telematics will enable the vehicle to diagnose operating problems and self-heal. OEMs and dealers will be able to offer more comprehensive customer relations management
by maintaining, with consumer agreement, vehicle usage data and consumer preference profiles
(Volkswagen, 2009).
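As a toy illustration of the parking-space reporting described above, and assuming a hypothetical control-centre interface rather than any system documented in (Volkswagen, 2009), a minimal Python sketch could look like this:

# Toy sketch of parking spaces notifying a control centre of their occupancy.
# The interface is hypothetical and only illustrates the idea in the text above.
class ParkingControlCentre:
    def __init__(self):
        self.occupancy = {}                  # space_id -> True (occupied) / False (free)

    def notify(self, space_id, occupied):
        """Called by a connected parking space whenever its state changes."""
        self.occupancy[space_id] = occupied

    def free_spaces(self):
        return [sid for sid, occ in self.occupancy.items() if not occ]

centre = ParkingControlCentre()
centre.notify("P1-017", occupied=True)
centre.notify("P1-018", occupied=False)
print(centre.free_spaces())                  # a connected car could be routed to 'P1-018'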
The complex solution will provide domain-independent services that can be customized to the needs of a particular application domain in the automotive sector.
The main advantages preferred by automotive customers can be summarized, partly following
(AUDI, 2012):
Easy integration of wireless and GPS:
- Simple-to-deploy, programmable mobile cellular connectivity for 2G up to the latest
3G air interface support;
- Modules with reliable, sensitive GPS reception.
Secure, reliable data transmission:
- Mission-critical reliability with persistent network connections that reconnect
automatically;
- Operating system provides advanced security features.
Cost-effective, high-volume components:
- Designed for streamlined product development and easy upgrading, for high-volume
automotive production requirements.
Rugged, high-quality, long life components:
- Ruggedized designs comply with environmental requirements of the automotive
market (vibration, extreme temperatures, etc.);
- Modules feature low power consumption and standby low power mode to prevent
draining battery power when vehicles are immobile for long periods.

The extent to which these capabilities will be both utilized and effective will depend upon several issues, including adoption of industry-wide standards, technology capability and consumer acceptance (see Fig. 4). These factors for adopting vehicle connectivity in the near future are summarized, e.g., in the IBM analytical paper (Rishi, S. et al., 2008).

The topic of this article (i.e. way to intelligent vehicle) may be associated with the vision of
sustainable mobility development. The improvement of technologies has made both physical
mobility - the transportation, and virtual mobility - the use of the Internet and telecommunications
technologies, easier (e.g. continuous, quicker and safer transport, more services to help people to
organize their trips, etc.). The efforts to implement car networking in traffic correspond to the interpretation of the sustainable mobility paradigm: one can understand that for mobility to be
sustainable, it must follow and adapt itself to the new environment and new needs of society while
avoiding disruptions in the societal, environmental and economic well-being that could offset the
socio-economic benefits of accessibility improvement (Desmartin, Thévoux-Chabuel, 2011). In
order to be more efficient, sustainable mobility should combine both technological improvement
and behavioral change.
Here are simplified examples of the social impacts of congestion in road transport, in connection
with advantages of car-to-x networking:
Opportunity loss / economic costs: being in a traffic jam can generally be considered
unproductive time for the people concerned, and wasted from an economic perspective.

Maintenance costs: in traffic jams, drivers have to accelerate and use the brakes frequently
and this has a negative impact on the technical components of the vehicles resulting in more
frequent repairs and replacements.

Emergencies / road deaths: blocked traffic may interfere with the passage of emergency vehicles, or road users can get stressed being stuck in a traffic jam, which again could lead to
accidents.

Environment quality: if congestion occurs on a main road, many drivers tend to evade it and use side streets and residential zones, affecting them with noise and air pollution.

Analysis of topics related to car networking can refer to a wide range of contexts.
From an environmental point of view, the technologies behind car networking systems indicate to car drivers the shortest route from one point to another, and as a result drivers generate less greenhouse gas emissions, noise and pollution, and save energy.

From a social point of view, these systems are able to provide car drivers with useful
information such as alternative ways to avoid traffic congestions, the quickest way to reach
their final destination; the accessibility of wireless communications also allows interaction with the immediate environment and provides travelers with continuity of information in time and space.

From a governance point of view, some public companies or national coordinators provide updated information to make their own traffic safer.


Governments and supranational organizations (like the European Union) are important players in shaping advanced mobility in its global and regional dimensions: they can establish policies, implement programs and targets, finance projects, and encourage coordination of technology development (see, e.g., the action plan of the Czech administration (ME CR, 2010)).
Support of R&D in cars networking
Vehicle-to-vehicle and vehicle-to-infrastructure communication is expected to become a powerful
means to improve road safety and to reduce congestion. Standardization in these areas will pave the
way for the successful launch of future co-operative systems. To address these goals, a concerted
action of the relevant stakeholders is needed at the EU level, involving OEMs, automotive and
technology suppliers, road and traffic operators, service providers and public authorities. Through efforts in the Car-2-Car Communication Consortium and the European Telecommunications Standards Institute (ETSI), a range of aspects are being addressed: frequency band usage, ITS
architecture, data protocols, robust channel access methods, data security, etc.
The number of road accidents is still unacceptably high. A few years ago, a number of EUCAR (European Council for Automotive R&D) Working Groups cooperated to evaluate the opportunity that new wireless communication technologies were offering: systems based on vehicle-to-vehicle and vehicle-to-infrastructure communication to improve road safety by detecting potentially dangerous situations in advance. The goal of the EUCAR projects is (EUCAR, 2011) to create and
validate a universal technical platform to enable a future world where vehicles and infrastructure
can freely communicate, interact and cooperate, bringing benefits of greater safety and efficiency,
improved mobility and reduced energy consumption and environmental impact by means of:
A standards-based open architecture and a highly flexible prototype universal reference platform;

A wireless network amongst vehicles and infrastructure;

Data fusion, cooperative data management and sharing;

Enhanced positioning and mapping solutions;

Innovative cooperative applications - warning and interventions.


Real-time driver information provided by on-board systems, interconnected vehicles and via
roadside infrastructure may lead to more responsible and compliant driver behavior and reduce
journey stress.
The greatest barrier is the creation of global standards. Companies throughout the value net,
including external players such as government and telecommunications companies, will need to
work together to establish a common platform that enables vehicles and components from different
manufacturers and geographic locations to communicate seamlessly. The most significant
differences in vehicle connectivity by geographic region in 2020 are likely to occur in areas that
require government investment. Developed nations, particularly Japan, Germany and the United
States are expected to be the leaders in both innovating and establishing the required infrastructure.
The major barriers to mass implementation of cars networking systems in public transport
management practice in general are, e.g. (Kompfner, P., 2007):
Reliable, real-time multi-modal travel and traffic information that can be accessed anytime
and anywhere.
Real open frameworks that allow for system compatibility and inter-operability that lead to
efficient area-wide traffic management within urban areas and across jurisdictional
boundaries to interurban roads and adjacent urban areas.
Unified and standardized technology platforms that facilitate in-vehicle systems integration (e.g. vehicle-to-vehicle communication across various car makes).

Enhanced communication, interaction and co-operation of driver, vehicle and infrastructure in synergy mode are prerequisites for further significant improvements of applications.
Co-operative support is required for a compatible and reliable interface between in-vehicle connecting systems and, e.g., various emergency services to exchange information.
An open and interoperable communication and networking platform (successfully tested in real conditions), able to flexibly use a wide range of communication media such as cellular radio (2G/3G), wireless LAN for automotive (WAVE), short-range microwave (DSRC) and infra-red, is still absent.
Wireless positioning techniques that provide sub-meter accuracy.
Significant progress in this area is expected after re-profiling of the European global navigation
satellite system (GNSS) and integration of EGNOS (The European Geostationary Navigation
Overlay Service) and Galileo-based applications (EC, 2010).
The final key will be the rate of consumer adoption. In an environment of steadily increasing prices,
cost will be a significant factor for the consumer in determining the level of connectivity that will
be accepted. Privacy issues, such as the degree to which consumers are willing to share personal
information, will also be a concern.
Conclusion
Examples of using intelligent systems and advanced applications with telecommunication, electronic and information technology that provide more secure and coordinated use of the transport network are, e.g.: implementation of an automatic emergency call system, monitoring the behavior of participants in terms of traffic safety and traffic violations, cashless payment of fees, and information on parking options.
The goal of the next R&D projects is to establish a cooperative vehicular communication and driving system ensuring interoperability of all the different applications of vehicle-to-vehicle and vehicle-to-infrastructure communications for safety and mobility.
The drivers that can support the development and implementation of car networking are information technology and globalization trends. Global integration will allow technologies that have been kept separate by significant geographical and social barriers to become generally available. In the future, it will be possible to integrate all mobile devices seamlessly into the car. The implementation of ICT technologies will provide a higher level of transparency on the driving situation ahead and thus assist the driver in complex traffic situations.
The driving forces for the development of car networking are identified as: technological (a major factor in the evolution of society; this will have a dominant influence on the creation and use of the intelligent vehicle); economic (significantly reducing the costs caused by accidents and generating revenue from the production of intelligent vehicles); environmental (fuel waste and environmental pollution will be reduced); and cultural (it will increase the culture of travel and provide new activities). Braking forces can be social and political factors. As with any new technology, some reluctance to adopt it is expected. Once people are aware that it is safe and offers numerous advantages, acceptance will follow, but it may take time. On the political side, the lobbying interests of various industry groups can block the technology.

Karolis Virbickas RTE-3

5 Potential Commercial Uses of Unmanned Aircraft Systems
As the FAA moves closer to releasing its highly anticipated1 regulations for small Unmanned
Aircraft Systems (UAS) and the agency's six testing sites throughout the U.S. continue to
research the technology, excitement keeps growing in the aviation industry about the potential
commercial uses of unmanned aircraft. Following the recent ruling by the National Transportation
Safety Board (NTSB) determining that the FAA has the authority to regulate the reckless
operation of UAS, Avionics Magazine caught up with Association of Unmanned Vehicle Systems
International (AUVSI) President and CEO Michael Toscano to discuss some of the potential
commercial uses of UAS in the National Airspace System (NAS) and around the world.

Marcus UAV's Zephyr 2 Unmanned Aircraft System, which the company says is ideal
for bridge inspections. Photo: Marcus UAV.
"What people may not realize is how many good things this technology can do," said Toscano.
While there seem to be an unlimited number of commercial applications for UAS, AUVSI has
categorized the ones with the most immediate potential of becoming a reality in the next several
years into four main categories.
"We call them four Ds. Its for the dirty, dangerous, difficult and dull jobs that humans do
everyday," said Toscano, adding that the most ripe operational environments for the commercial
use of UAS would be those environments that are low in population and have very little air traffic
volume.

1 Anticipate - to give advance thought, discussion, or treatment to

Recently, the FAA granted exemptions2 to six aerial photo and video production companies, the
first step to allowing the film and television industry the use of UAS in the NAS. There are
currently more than 120 pending submissions3 from individuals and companies seeking similar
exemptions to use UAS for other commercial purposes. Here are a few that could become a
reality in the near future.
Bridge Inspections
Inspections of structures such as bridges depend heavily on visual assessment4 from experienced
field inspectors. The way that most bridge inspections are handled today would fall into AUVSI's
"dangerous" category and could be done much more efficiently with UAS.
"If your job is to do bridge inspections after an earthquake the way you would have to do that
today is you would close down a lane of traffic, you put a scaffolding that swings over the side,
you'd rappel5 these people down below and that's how you would inspect a bridge," said Toscano.
"Whereas now, you could take a UAS and you could fly right under a bridge and get all the
information that you need. The human beings that know how to do bridge inspections know how
to do it. What they need is the visual so they can see if there's damage done or if there's
corrosion or whatever there might be."
Interestingly enough, Seattle, Wa.-based Marcus UAV already has a Zephyr 2 UAS solution
available for $17,995 ready to perform bridge inspections.
Precision Agriculture
While there are farmers in China and other areas of the world already using UAS to monitor their
crops and other precision agriculture missions, regulations currently don't permit this use of UAS
in the United States.
"When you talk about agriculture, consider using UAS in Napa Valley to grow grapes so that a
farmer can understand when the ideal time is to pick them because they either give off a
fermenting smell or they turn a particular color of purple and that's the ideal time to pick," said
Toscano. "A farmer knows that, what he needs to know is when do all the grapes turn that color
or give off that smell? Unmanned systems can tell you that."

2 Exempt - free or released from some liability or requirement to which others are subject
3 Submission - an act of submitting to the authority or control of another
4 Assess - to make an official valuation of (property)
5 Rappel - to descend (as from a cliff) by sliding down a rope passed under one thigh, across the body, and over the opposite shoulder or through a special friction device

Members of the Capital Area Innovative Farmers (CAIF) group in Lansing, Mich., together with
Michigan State University (MSU) Extension, have proactively engaged in consultations with a
Canadian UAV company and MSU Department of Geography GIS unit to seek ways to collaborate
and use UAS for precision agriculture in the future.
Wildfire Monitoring
This is another use of UAS that would fall into AUVSI's "dangerous" category, and it's also
something that companies are already testing and have demonstrated the ability to do. An
Unmanned Aircraft System could give regions constantly affected by wildfires the ability to
provide better monitoring of the movement of an approaching wildfire.
Earlier this year, Insitu Pacific did exactly that, using a ScanEagle UAS and General Dynamics
Mediaware's next-generation video exploitation6 system, D-VEX, streaming full-motion video
imagery along with geolocation information in near real-time with downlink7 assistance from the
Amazon cloud. The demonstration occurred in January over the Wollemi National Park, where
fires have burned more than 35,000 hectares of bush land since December 2013. The ScanEagle was operated at night, and was able to monitor and report on the movement of the fire, a task that is too high-risk to perform at low altitudes with manned aircraft.
Aerial Delivery
The ideal environment to perform aerial delivery would be for the transport of supplies to remote
locations. While Amazon made headlines8 last year showcasing9 its current research regarding
UAS deliveries to Amazon customers, there are already examples of this occurring right now. In
September, Deutsche Post DHL AG publicly announced its plans to use a parcelcopter to deliver
medicine to the small German island of Juist.
This was part of a month-long research project in partnership with Microdrones GmbH, using
ground-based pilots that were in contact with Air Traffic Controllers (ATCs) to monitor the
delivery, which was performed with a bright yellow quadcopter. "There's two things that unmanned systems do well, they're very good at delivery and situational awareness," said Toscano. "It's a revolutionary technology on an evolutionary path. This is no different than we've seen with other revolutionary-type technology."

6 Exploit - deed, act; especially: a notable or heroic act
7 Downlink - a communications channel for receiving transmissions from a spacecraft; also: such transmissions
8 Headlines - a head of a newspaper story or article usually printed in large type and giving the gist of the story or article that follows
9 Showcase - a setting, occasion, or medium for exhibiting something or someone especially in an attractive or favorable aspect

Pipeline Inspection
Inspection of pipelines in the Alaskan tundra presents an excellent opportunity for commercial
use of Unmanned Aircraft Systems. Performing this using low-flying aircraft can be unsafe for
pilots because of the snow and wind conditions that are typical of Alaska. However, by using a
camera-equipped UAS, this could be more easily performed. BP's Houston, Texas-based chief
technology office has already expressed interest in using UAS for this type of operation.
"Pipeline inspection, that's another dangerous mission that currently is performed by humans,"
said Toscano. "In those environments you're not going to have a lot of other aircraft to worry
about. Once there are regulations in place, that could be a huge market for this type of
technology."
Canadian UAS manufacturer ING Robotic Aviation currently recommends using its Serenity UAS
platform with mapping capability and three ground-based operators to perform pipeline
surveillance10 and monitoring along the more than 512,000 miles of oil and gas pipeline in
Canada.
However, there are still concerns about how UAS could also be used in harmful ways. But Toscano compares these risks with those associated with one of the most widely used technologies in the world,
the automobile.
"In the United States we kill 33,000 people every year, we have 6.3 million accidents and it costs
us over 300 billion dollars in medical costs and damages, yet we drive cars everyday," said
Toscano. "My car and your car has the ability to go 120 miles per hour but if you drove it 120
mph you'd be thrown in jail and if you drove it 120 mph and killed somebody you'd be thrown in jail for a very long period of time. If you use the technology that you've been given you're
going to be held accountable. The same thing is true for UAS. If you misuse this technology for
what it's not supposed to do and if you hurt somebody misusing it, you need to go to jail for a
long period of time."

10 Surveillance - close watch kept over someone or something (as by a detective)

Andrius Vitkus RT-3

Twisted Radio Beams Data at 32 Gigabits per Second
By Ian Chant
Posted 19 Sep 2014 | 19:36 GMT
http://spectrum.ieee.org/tech-talk/telecom/wireless/sending-data-on-twisted-radio-waves

Multiple channels of data orbiting orthogonally around a single radio wave


A team led by engineers at the University of Southern California has sent multiple channels of
data over a single frequency by twisting them together into a beam resembling a piece of fusilli
pasta. By combining several polarized beams carrying information into a single spiraled beam,
the team was able to send up to 32 gigabits per second across 2.5 meters of open air, a rate
around 30 times as fast as an LTE wireless connection.
The high data rate was made possible through a technique known as orbital angular momentum
(OAM) multiplexing, says USC electrical engineering professor Alan Willner, who partnered
with researchers from the University of Glasgow and Tel Aviv University on the experiment.

A property of electromagnetic waves first identified in the 1990s, OAM can be harnessed to let
multiple channels of information ride along a single frequency. "I could have a wave that twists slowly and one that twists a little faster, and those waves are now orthogonal to one another," Willner says. "If you put them together and send them spatially co-located through the same medium, you have doubled your capacity."
Willner and others have previously demonstrated the twisting technique with beams of light,
reaching data transmission speeds of 2.56 terabits per second through the air in 2012 and 1.6
Tb/s over optical fiber in 2013. Proving the technology works at high data rates with radio waves
is important, because those frequencies are less affected by obstacles and atmospheric conditions
than optics and could have broader commercial applications.
Willner and his team used four antennas to send eight channels of data. Those beams of data
were sent through specially shaped spiral phase plates, plastic plates that don't absorb the
beams but do cause them to change their shape, twisting them slightly. The twisted waves are
then gathered by a multiplexer and sent through a single transmitter aperture. Since each wave
has a slightly different OAM, they can travel along a shared axis without interfering with one
another.
The combined beam, which takes on a helical shape, travels through another aperture at the
receiver, after which it is split back into four beams by a demultiplexer. The four beams then
pass through another set of spiral phase plates. These plates are inverted versions of the first set,
which undo the initial twisting and prepare the waves to deliver their data payload.
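The following numerical sketch (a simplified Python illustration, not the researchers' actual signal chain) shows why beams with different integer twist rates l can share one frequency: their helical phase factors exp(i*l*phi) are orthogonal over the azimuthal angle, so applying the conjugate "phase plate" and averaging recovers each channel without cross-talk:

# Simplified numerical illustration of OAM multiplexing and demultiplexing.
# The helical phase exp(i*l*phi) plays the role of the spiral phase plates;
# this is a toy model, not the experiment's actual processing.
import numpy as np

phi = np.linspace(0, 2 * np.pi, 1000, endpoint=False)        # azimuthal angle samples

channels = {1: 0.7 + 0.2j, 2: -0.3 + 0.9j, 3: 0.5 - 0.5j}    # OAM mode l -> data symbol

# Multiplex: each symbol rides on its own twist rate l, all on one carrier frequency.
beam = sum(sym * np.exp(1j * l * phi) for l, sym in channels.items())

# Demultiplex: apply the inverse twist (conjugate phase plate) and average over phi.
for l, sent in channels.items():
    recovered = np.mean(beam * np.exp(-1j * l * phi))
    print(l, np.round(recovered, 3), "(sent", sent, ")")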
This isnt the first time radio waves have been used to demonstrate the potential of OAM. Italian
and Swedish researchers in 2012 used the same principles to send a pair of radio waves sharing a
single frequency between two islands in Venice. At the time, some communications engineers
criticized that work, suggesting it was not significantly different than existing multiple-input,
multiple-output (MIMO) techniques.
Willner says this latest study demonstrates that there are clear implementation differences
between conventional MIMO and OAM multiplexing. MIMO sends different streams of data
from different antennas broadcasting on the same frequency and decodes the inevitable cross talk
on the receiving end using digital signal processing. OAM multiplexing sends multiple channels
of information along a single beam without any interference between them. That means once the
phase plate at the receiver unwinds the helical beam into its component channels, they dont have
to undergo further cleanup.
One of OAM radio's early critics, Lund University radio-systems professor Ove Edfors, remains unconvinced, however. While he doesn't question Willner's results, Edfors remains
incredulous that radio-based OAM could be made practical for long-range communications.
Without the assistance of impractically large antennas to transmit and receive them over long
distances, Edfors says now, signals carried on OAM components rapidly become useless from
a communication point of view.

Fabrizio Tamburini, one of the scientists behind the 2012 Venice OAM experiment, sees a lot of
promise in the latest work, saying that very good ideas can come from it. Tamburini is now
working on ways to refine OAM for use in telecommunications and other industries.
If OAM pans out, the technology could be adopted in places where high-speed, line-of-sight
wireless connections are in demand, such as for wireless backhaul in cellular networks, suggests
Willner. "OAM could be a good fit for transmitting data among a dense network of small base stations without...the stringing of fiber to connect them to the core network," he says. He and
USC colleague Andy Molisch also see the potential for OAM in data centers. "With better equipment, [transmission rates] could go much higher," Willner says. "A radio backhaul like that could be a huge pipe for data centers or building-to-building connections."
OAM techniques might also be used in other fields, such as microscopy, Willner says. "There are potential applications outside of communications. We're going to continue learning how to tailor and manipulate the structure of waves in ways we've never thought of before."
Words:

Orthogonal - relating to an angle of 90 degrees, or forming an angle of 90 degrees.


Angular - having or relating to one or more angles
Multiplexer - a piece of electronic equipment that can send more than one electrical
signal using only one connection
Aperture - a small and often narrow opening, especially one that allows light into a
camera
Helix(helical) - a curve that goes around a central tube or cone shape in the form of a
spiral.
Implement - to start using a plan or system.
Incredulous - not wanting or not able to believe something, and usually showing this.
Backhaul - a return journey of a vehicle after it has transported and delivered goods.
