
# Control of Nonlinear Dynamic Systems:

## Theory and Applications

J . K. Hedrick and A. Girard

Control of Nonlinear Dynamic Systems: Theory and Applications
J. K. Hedrick and A. Girard 2010
1

1 Introduction

We consider systems that can be written in the following general form, where x is the
state of the system, u is the control input, w is a disturbance, and f is a nonlinear function:

ẋ = f(x, u, w, t)

We are considering dynamical systems that are modeled by a finite number of coupled,
first-order ordinary differential equations. The notation above is a vector notation, which
allows us to represent the system in a compact form.

Key points

Few physical systems are truly linear.
The most common method to analyze and design controllers for a system is to
linearize it about an operating point and then to use linear control techniques.
There are systems for which the nonlinearities are important and cannot be
ignored. For these systems, nonlinear analysis and design techniques exist and
can be used. These techniques are the focus of this textbook.

In many cases, the disturbance is not considered explicitly in the system analysis, that is,
we consider the system described by the equation ẋ = f(x, u, t). In some cases we will
look at the properties of the system when f does not depend explicitly on u, that is,
ẋ = f(x, t). This is called the unforced response of the system. This does not
necessarily mean that the input to the system is zero. It could be that the input has been
specified as a function of time, u = u(t), or as a given feedback function of the state, u =
u(x), or both.

When f does not explicitly depend on t, that is, if ẋ = f(x, u, w), the system is said to be
autonomous or time invariant. An autonomous system is invariant to shifts in the time
origin.

We call x the state variables of the system. The state variables represent the minimum
amount of information that needs to be retained at any time t in order to determine the
future behavior of a system. Although the number of state variables is unique (that is, it
has to be the minimum and necessary number of variables), for a given system, the
choice of state variables is not.

Linear Analysis of Physical Systems

The linear analysis approach starts with considering the general nonlinear form for a
dynamic system, and seeking to transform this system into a linear system for the
purposes of analysis and controller design. This transformation is called linearization
and is possible at a selected operating point of the system.

Equilibrium points are an important class of solutions of a differential equation. They
are defined as the points x_e such that:

f(x_e) = 0

A good place to start the study of a nonlinear system is by finding its equilibrium points.
This in itself might be a formidable task. The system may have more than one
equilibrium point. Linearization is often performed about the equilibrium points of the
system; it allows one to characterize the behavior of the solutions in the neighborhood
of an equilibrium point.

If we write x, u and w as a constant term, followed by a perturbation, in the following
form:

x = x_0 + δx,  u = u_0 + δu,  w = w_0 + δw

we first seek equilibrium points that satisfy the following property:

f(x_0, u_0, w_0) = 0


We then perform a multivariable Taylor series expansion about one of the equilibrium
points (x_0, u_0, w_0). Without loss of generality, assume the coordinates are transformed so
that x_0 = 0. HOT designates Higher Order Terms:

δẋ = (∂f/∂x)|_0 δx + (∂f/∂u)|_0 δu + (∂f/∂w)|_0 δw + HOT

We can set:

A = (∂f/∂x)|_0,  B = (∂f/∂u)|_0,  Γ = (∂f/∂w)|_0

The dimensions of A are n by n, B is n by m, and Γ is n by p.

We obtain a linear model for the system about the equilibrium point (x_0, u_0, w_0) by
neglecting the higher order terms:

δẋ = A δx + B δu + Γ δw
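As a sketch of this linearization step, the Jacobians A and B can be estimated numerically by finite differences. The pendulum-style dynamics, the parameter values, and the `jacobians` helper below are illustrative assumptions (and the disturbance w is omitted for brevity); they are not taken from the text.

```python
import numpy as np

# Hypothetical example dynamics: a damped pendulum with torque input u.
# The text's f(x, u, w) is generic; this system is only an illustration.
g, L, c = 9.81, 1.0, 0.5

def f(x, u):
    """x = [angle, angular rate], u = normalized torque."""
    return np.array([x[1], -(g / L) * np.sin(x[0]) - c * x[1] + u])

def jacobians(f, x0, u0, eps=1e-6):
    """Central-difference estimates of A = df/dx and B = df/du at (x0, u0)."""
    n = len(x0)
    A = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    B = (f(x0, u0 + eps) - f(x0, u0 - eps)) / (2 * eps)
    return A, B.reshape(n, 1)

A, B = jacobians(f, np.zeros(2), 0.0)
# Near the origin sin(x1) ≈ x1, so A ≈ [[0, 1], [-g/L, -c]] and B ≈ [[0], [1]].
```

In practice one would check this numerical estimate against the analytical Jacobian before using it for control design.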

Now many powerful techniques exist for controller design, such as optimal linear state
space control design techniques, H∞ control design techniques, etc. This produces a
feedback law of the form:

δu = −K δx

This yields:

δẋ = (A − BK) δx + Γ δw

Evaluation and simulation are performed in the following sequence.


Figure 1.1. Linearized system design framework

Suppose the simulation did not yield the expected results. Then the higher order terms
that were neglected must have been significant. Two types of problems may have arisen.

a. When is the existence of a Taylor series guaranteed?

The function (and the nonlinearities of the system) must be smooth and free of
discontinuities. Hard (non-smooth or discontinuous) nonlinearities may be caused by
friction, gears, etc.

Figure 1.2. Examples of hard nonlinearities.

b. Some systems have smooth nonlinearities but wide operating ranges.

Linearizations are only valid in a neighborhood of the equilibrium point.


Figure 1.3. Smooth nonlinearity over a wide operating range. Which slope should be
picked for the linearization?

The nonlinear design framework is summarized below.

Figure 1.4. Nonlinear system design framework


2 General Properties of
Linear and Nonlinear
Systems

Key points

Linear systems satisfy the properties of superposition and homogeneity. Any
system that does not satisfy these properties is nonlinear.
In general, linear systems have one equilibrium point, at the origin. Nonlinear
systems may have many equilibrium points.
Stability needs to be precisely defined for nonlinear systems.
The principle of superposition does not necessarily hold for the forced response
of nonlinear systems.

Aside: A Brief History of Dynamics (Strogatz)

The subject of dynamics began in the mid-1600s, when Newton invented differential
equations, discovered his laws of motion and universal gravitation, and combined them to
explain Kepler's laws of planetary motion. Specifically, Newton solved the two-body
problem: the problem of calculating the motion of the earth around the sun, given the
inverse-square law of gravitational attraction between them. Subsequent generations of
mathematicians and physicists tried to extend Newton's analytical methods to the three-body
problem (e.g. sun, earth and moon), but curiously the problem turned out to be
much more difficult to solve. After decades of effort, it was eventually realized that the
three-body problem was essentially impossible to solve, in the sense of obtaining explicit
formulas for the motions of the three bodies. At this point, the situation seemed hopeless.
The breakthrough came with the work of Poincaré in the late 1800s. He introduced a new
viewpoint that emphasized qualitative rather than quantitative questions. For example,
instead of asking for the exact positions of the planets at all times, he asked: is the solar
system stable forever, or will some planets eventually fly off to infinity? Poincaré
developed a powerful geometric approach to analyzing such questions. This approach has
flowered into the modern subject of dynamics, with applications reaching far beyond
celestial mechanics. Poincaré was also the first person to glimpse the possibility of chaos,
in which a deterministic system exhibits aperiodic behavior that depends sensitively on
initial conditions, thereby rendering long-term prediction impossible.

But chaos remained in the background for the first half of the twentieth century. Instead,
dynamics was largely concerned with nonlinear oscillators and their applications in physics and
engineering. Nonlinear oscillators played a vital role in the development of such
technologies as radio, radar, phase-locked loops, and lasers. On the theoretical side,
nonlinear oscillators also stimulated the invention of new mathematical techniques;
pioneers in this area include van der Pol, Andronov, Littlewood, Cartwright, Levinson,
and Smale. Meanwhile, in a separate development, Poincaré's geometric methods were
being extended to yield a much deeper understanding of classical mechanics, thanks to
the work of Birkhoff and later Kolmogorov, Arnold, and Moser.

The invention of the high-speed computer in the 1950s was a watershed in the history of
dynamics. The computer allowed one to experiment with equations in a way that was
impossible before, and therefore to develop some intuition about nonlinear systems. Such
experiments led to Lorenz's discovery in 1963 of chaotic motion on a strange attractor.
He studied a simplified model of convection rolls in the atmosphere to gain insight into
the notorious unpredictability of the weather. Lorenz found that the solutions to his
equations never settled down to an equilibrium or periodic state; instead, they continued
to oscillate in an irregular, aperiodic fashion. Moreover, if he started his simulations from
two slightly different initial conditions, the resulting behaviors would soon become
totally different. The implication was that the system was inherently unpredictable: tiny
errors in measuring the current state of the atmosphere (or any other chaotic system)
would be amplified rapidly, eventually leading to embarrassing forecasts. But Lorenz
also showed that there was structure in the chaos: when plotted in three dimensions, the
solutions to his equations fell onto a butterfly-shaped set of points. He argued that this set
had to be "an infinite complex of surfaces"; today, we would regard it as an example of
a fractal.

Lorenz's work had little impact until the 1970s, the boom years for chaos. Here are some
of the main developments of that glorious decade. In 1971 Ruelle and Takens proposed a
new theory for the onset of turbulence in fluids, based on abstract considerations about
strange attractors. A few years later, May found examples of chaos in iterated mappings
arising in population biology, and wrote an influential review article that stressed the
pedagogical importance of studying simple nonlinear systems, to counterbalance the
often misleading linear intuition fostered by traditional education. Next came the most
surprising discovery of all, due to the physicist Feigenbaum. He discovered that there are
certain laws governing the transition from regular to chaotic behavior. Roughly speaking,
completely different systems can go chaotic in the same way. His work established a link
between chaos and phase transitions, and enticed a generation of physicists to the study
of dynamics. Finally, experimentalists such as Gollub, Libchaber, Swinney, Linsay,
Moon, and Westervelt tested the new ideas about chaos in experiments on fluids,
chemical reactions, electronic circuits, mechanical oscillators, and semiconductors.

Although chaos stole the spotlight, there were two other major developments in dynamics
in the 1970s. Mandelbrot codified and popularized fractals, produced magnificent
computer graphics of them, and showed how they could be applied to a variety of
subjects. And in the emerging area of mathematical biology, Winfree applied the methods
of dynamics to biological oscillations, especially circadian (roughly 24 hour) rhythms and
heart rhythms.

By the 1980s, many people were working on dynamics, with contributions too numerous
to list.

Lorenz attractor

Dynamics: A Capsule History

1666 Newton Invention of calculus
Explanation of planetary motion

1700s Flowering of calculus and classical mechanics

1800s Analytical studies of planetary motion

1890s Poincare Geometric approach, nightmares of chaos

1920-1950 Nonlinear oscillators in physics and engineering

1920-1960 Birkhoff Complex behavior in Hamiltonian mechanics
Kolmogorov
Arnold
Moser

1963 Lorenz Strange attractor in a simple model of convection

1970s Ruelle/Takens Turbulence and chaos

May Chaos in logistic map

Feigenbaum Universality and renormalization
Connection between chaos and phase transitions

Experimental studies of chaos

Winfree Nonlinear oscillators in biology

Mandelbrot Fractals

1980s Widespread interest in chaos, fractals, oscillators
and their applications.

In this chapter, we consider general properties of linear and nonlinear systems. We
consider the existence and uniqueness of equilibrium points, stability considerations, and
the properties of the forced response. We also present a broad classification of
nonlinearities.

Where Do Nonlinearities Come From?

Before we start a discussion of general properties of linear and nonlinear systems, let's
briefly consider standard sources of nonlinearities.

Many physical quantities, such as a vehicle's velocity, or electrical signals, have an upper
bound. When that upper bound is reached, linearity is lost. The differential equations
governing some systems, such as some thermal, fluidic, or biological systems, are
nonlinear in nature. It is therefore advantageous to consider the nonlinearities directly
while analyzing and designing controllers for such systems. Mechanical systems may be
designed with backlash; this means that a very small signal will produce no output (for
example, in gearboxes). In addition, many mechanical systems are subject to nonlinear
friction. Relays, which are part of many practical control systems, are inherently
nonlinear. Finally, ferromagnetic cores in electrical machines and transformers are often
described with nonlinear magnetization curves and equations.

Formal Definition of Linear and Nonlinear Systems

Linear systems must verify two properties, superposition and homogeneity.

The principle of superposition states that for two different inputs, x and y, in the domain
of the function f,

f(x + y) = f(x) + f(y)

The property of homogeneity states that for a given input, x, in the domain of the function
f, and for any real number k,

f(kx) = k f(x)

Any function that does not satisfy superposition and homogeneity is nonlinear. It is worth
noting that there is no unifying characteristic of nonlinear systems, except for not
satisfying the two above-mentioned properties.
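The two properties are easy to check numerically. The two functions below are made-up examples for the demonstration, not systems from the text.

```python
import numpy as np

# Illustrative check of superposition and homogeneity on two maps.
def f_linear(x):
    return 3.0 * x          # satisfies both properties

def f_nonlinear(x):
    return x ** 2           # fails both

x, y, k = 2.0, 5.0, 4.0

superposition_ok = np.isclose(f_linear(x + y), f_linear(x) + f_linear(y))
homogeneity_ok = np.isclose(f_linear(k * x), k * f_linear(x))

# The square fails superposition: (x + y)^2 != x^2 + y^2 in general.
superposition_fails = not np.isclose(
    f_nonlinear(x + y), f_nonlinear(x) + f_nonlinear(y))
```

Of course, passing such a check at a few sample points does not prove linearity; the properties must hold for all inputs in the domain.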

Dynamical Systems

There are two main types of dynamical systems: differential equations and iterated maps
(also known as difference equations). Differential equations describe the evolution of
systems in continuous time, whereas iterated maps arise in problems where time is
discrete. Differential equations are used much more widely in science and engineering,
and we shall therefore concentrate on them.

Confining our attention to differential equations, the main distinction is between ordinary
and partial differential equations. Our concern here is purely with temporal behavior, and
so we will deal with ordinary differential equations exclusively.

A Brief Reminder on Properties of Linear Time Invariant Systems

Linear Time Invariant (LTI) systems are commonly described by the equation:

ẋ = Ax + Bu

In this equation, x is the vector of n state variables, u is the control input, A is a
matrix of size (n-by-n), and B is a matrix of appropriate dimensions. The equation
determines the dynamics of the response. It is sometimes called a state-space realization
of the system. We assume that the reader is familiar with basic concepts of system
analysis and controller design for LTI systems.

Equilibrium point

An important notion when considering system dynamics is that of equilibrium point.
Equilibrium points are considered for autonomous systems (no explicit control input).

Definition:

A point x_0 in the state space is an equilibrium point of the autonomous system if,
when the state x reaches x_0, it stays at x_0 for all future time.

That is, for an LTI system, the equilibrium points are the solutions of the equation:

Ax_0 = 0

If A has rank n, then x_0 = 0. Otherwise, the solution lies in the null space of A.

Stability

The system is stable if all the eigenvalues of A have negative real parts, that is,
Re(λ_i(A)) < 0 for all i.

A more formal statement would talk about the stability of the equilibrium point in the
sense of Lyapunov. There are many kinds of stability (for example, bounded input,
bounded output) and many kinds of tests.
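The eigenvalue test is a one-liner numerically. The two matrices below are made-up examples, chosen so that one satisfies the condition and the other does not.

```python
import numpy as np

# Checking LTI stability by inspecting the eigenvalues of A.
def is_stable(A):
    """True if every eigenvalue of A has negative real part."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

A_stable = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1, -2
A_unstable = np.array([[0.0, 1.0], [2.0, 1.0]])   # eigenvalues 2, -1
```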

Forced response

The analysis of forced response for linear systems is based on the principle of
superposition and the application of convolution.

For example, consider the sinusoidal response of LTIS.

The output sinusoid's amplitude differs from that of the input, and the signal also
exhibits a phase shift. The Bode plot is a graphical representation of these changes. For
LTIS, it is unique and single-valued.


Example of a Bode plot. The horizontal axis is frequency, ω. The vertical axis of the top
plot represents the magnitude of |y/u| (in dB, that is, 20 log₁₀ of the ratio), and the lower plot
represents the phase shift.

As another example, consider the Gaussian response of LTIS.

If the input into the system is a Gaussian, then the output is also a Gaussian. This is a
useful result.

Why are nonlinear problems so hard?

Why are nonlinear systems so much harder to analyze than linear ones? The essential
difference is that linear systems can be broken down into parts. Then each part can be
solved separately and finally recombined to get the answer. This idea allows fantastic
simplification of complex problems, and underlies such methods as normal modes,
Laplace transforms, superposition arguments, and Fourier analysis. In this sense, a linear
system is precisely equal to the sum of its parts.

But many things in nature don't act this way. Whenever parts of a system interfere, or
cooperate, or compete, there are nonlinear interactions going on. Most of everyday life is
nonlinear, and the principle of superposition fails spectacularly. If you listen to your two
favorite songs at the same time, you won't get double the pleasure! Within the realm of
physics, nonlinearity is vital to the operation of a laser, the formation of turbulence in a
fluid, and the superconductivity of Josephson junctions, for example.


Nonlinear System Properties

Equilibrium point

Reminder:

A point x_0 in the state space is an equilibrium point of the autonomous system
if, when the state x reaches x_0, it stays at x_0 for all future time.

That is, for a nonlinear system, the equilibrium points are the solutions of the equation:

f(x_0) = 0

One has to solve n nonlinear algebraic equations in n unknowns. There may be anywhere
from zero to infinitely many solutions.

Example: Pendulum

L is the length of the pendulum, g is the acceleration of gravity, m is the mass, k is a
viscous damping coefficient, and θ is the angle of the pendulum from the vertical. The
equation of motion is:

mL²θ̈ + kθ̇ + mgL sin θ = 0

The equivalent (nonlinear) system, with x₁ = θ and x₂ = θ̇, is:

ẋ₁ = x₂
ẋ₂ = −(k/(mL²)) x₂ − (g/L) sin x₁
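The root-finding step (n equations in n unknowns) can be sketched numerically for this pendulum. The parameter values and the use of `scipy.optimize.fsolve` are our illustrative choices, not the text's.

```python
import numpy as np
from scipy.optimize import fsolve

# Finding equilibria of the damped pendulum state equations numerically;
# the parameter values are arbitrary illustrations.
g, L, m, k = 9.81, 1.0, 1.0, 0.2

def f(x):
    """Right-hand side of the pendulum state equations."""
    return [x[1], -(k / (m * L**2)) * x[1] - (g / L) * np.sin(x[0])]

# Seeding the solver near 0 and near pi recovers two physically distinct
# equilibria: hanging down (x1 = 0) and inverted (x1 = pi).
eq_down = fsolve(f, [0.1, 0.0])
eq_up = fsolve(f, [3.0, 0.0])
```

Note that the solver only finds one solution per initial guess; since sin has infinitely many zeros, this system has infinitely many equilibria x₁ = nπ, x₂ = 0.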

Nonlinearity makes the pendulum equation very difficult to solve analytically. The usual
way around this is to fudge, by invoking the small angle approximation sin x ≈ x
for x << 1. This converts the problem to a linear one, which can then be solved easily. But
by restricting the problem to small x, we're throwing out some of the physics, like
motions where the pendulum whirls over the top. Is it really necessary to make such
drastic approximations?

It turns out that the pendulum equation can be solved analytically, in terms of elliptic
functions. But there ought to be an easier way. After all, the motion of the pendulum is
simple: at low energy, it swings back and forth, and at high energy it whirls over the top.
There should be a way of extracting this motion from the system directly. This is the sort
of problem we will learn how to solve, using geometric methods.

Here's the rough idea. Suppose we happen to know a solution to the pendulum system,
for a particular initial condition. This solution would be a pair of functions x₁(t) and x₂(t)
representing the angular position and velocity of the pendulum. If we construct an
abstract space with coordinates (x₁, x₂), then the solution (x₁(t), x₂(t)) corresponds to a
point moving along a curve in this space.

This curve is called a trajectory, and the space is called the phase space of the system.
The phase space is completely filled with trajectories, since each point can serve as an
initial condition.

Our goal is to run this construction in reverse: given the system, we want to draw the
system trajectories, and thereby extract information about the solutions. In many cases,
geometric reasoning allows us to draw the trajectories without actually solving the
system!

If we plot the angular position against the angular speed for the pendulum, we obtain a
phase-plane plot.
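A phase-plane trajectory can be generated by numerical integration, for instance with the classic 4th-order Runge-Kutta method. The parameter values and the initial condition below are arbitrary illustrations.

```python
import numpy as np

# Generating a phase-plane trajectory for the damped pendulum by
# numerical integration (classic RK4).
g, L, m, k = 9.81, 1.0, 1.0, 0.2

def f(x):
    return np.array([x[1], -(k / (m * L**2)) * x[1] - (g / L) * np.sin(x[0])])

def rk4_trajectory(x0, dt=0.01, steps=2000):
    """Return the sequence of states; plotting column 0 against
    column 1 gives the phase-plane plot."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        x = xs[-1]
        k1 = f(x)
        k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2)
        k4 = f(x + dt * k3)
        xs.append(x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(xs)

traj = rk4_trajectory([1.0, 0.0])
# With damping, the trajectory spirals in toward the equilibrium (0, 0).
```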


Example: Mass with Coulomb friction

Stability

One must take special care to define what is meant by stability.
For nonlinear systems, stability is considered about an equilibrium point, in the
sense of Lyapunov or in an input-output sense.
Initial conditions can affect stability (this is different from the linear case), and so can
external inputs.
Finally, it is possible to have limit cycles.

Example:

A limit cycle is a unique, self-excited oscillation. It is also a closed trajectory in the state-
space.
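As one concrete illustration (the Van der Pol oscillator, a standard limit-cycle system; it may not be the example the authors drew, and the parameter is ours), trajectories started inside and outside the cycle converge to the same closed orbit:

```python
import numpy as np

# Van der Pol oscillator: xdot1 = x2, xdot2 = mu*(1 - x1^2)*x2 - x1.
mu = 1.0

def f(x):
    return np.array([x[1], mu * (1 - x[0]**2) * x[1] - x[0]])

def simulate(x0, dt=0.01, steps=5000):
    """Integrate with classic RK4 and return the final state."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        k1 = f(x)
        k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2)
        k4 = f(x + dt * k3)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

# Starting inside and outside the cycle, both trajectories end up on a
# closed orbit of amplitude roughly 2, independent of the initial condition.
inside = simulate([0.1, 0.0])
outside = simulate([4.0, 0.0])
```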


In general, a limit cycle is an unwanted feature in a mechanical system, as it causes
fatigue.

Beware: a limit cycle is different from a linear oscillation.

Note that in other application domains, for example in communications, a limit cycle
might be a desirable feature.

In summary, be on the lookout for this kind of behavior in nonlinear systems. Remember
that in nonlinear systems, stability about an equilibrium point:
Is dependent on initial conditions
Local vs. global stability is important
Possibility of limit cycles


Forced response

The principle of superposition does not hold in general. For example, for initial conditions
x₀ the system may be stable, but for initial conditions 2x₀ the system could be unstable.
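A scalar illustration of this (ours, not from the text): the system dx/dt = −x + x³ has a stable equilibrium at 0, but only for |x(0)| < 1. Doubling a "safe" initial condition can move it outside the basin of attraction, so the response does not scale with the initial condition.

```python
import numpy as np

# dx/dt = -x + x^3: stable at 0 for |x0| < 1, divergent for |x0| > 1.
def simulate(x0, dt=1e-3, t_end=5.0, blowup=10.0):
    """Forward-Euler integration; returns inf if the state diverges."""
    x = float(x0)
    for _ in range(int(t_end / dt)):
        x += dt * (-x + x**3)
        if abs(x) > blowup:
            return np.inf   # treat as diverged
    return x

stable_end = simulate(0.6)     # decays toward 0
unstable_end = simulate(1.2)   # 2 * 0.6 starts outside the basin
```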


Classification of Nonlinearities

Single-valued, time invariant

Memory or hysteresis

Example:


Single-input vs. multiple input nonlinearities


SUMMARY: General Properties of Linear and Nonlinear Systems

An equilibrium point is a point where the system can stay forever without moving.

| | LINEAR SYSTEMS | NONLINEAR SYSTEMS |
| --- | --- | --- |
| EQUILIBRIUM POINTS | Unique: if A has rank n, then x_e = 0; otherwise the solution lies in the null space of A. | Multiple: f(x_e) = 0 gives n nonlinear equations in n unknowns, with anywhere from 0 to +∞ solutions. |
| ESCAPE TIME | x → +∞ only as t → +∞. | The state can go to infinity in finite time. |
| STABILITY | The equilibrium point is stable if all eigenvalues of A have negative real parts, regardless of initial conditions. | Dependent on initial conditions; local vs. global stability is important; possibility of limit cycles. |
| LIMIT CYCLES | — | A unique, self-excited oscillation; a closed trajectory in the state space; independent of initial conditions. |
| FORCED RESPONSE | The principle of superposition holds; I/O stability ⇔ bounded input, bounded output; sinusoidal input → sinusoidal output of the same frequency. | The principle of superposition does not hold in general; the I/O ratio is not unique in general, and may also not be single-valued. |
| CHAOS | — | May exhibit randomness despite the deterministic nature of the system. |
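The finite escape time entry can be made concrete with a classic scalar example (ours, not the text's): for dx/dt = x² with x(0) = x₀ > 0, the exact solution is x(t) = x₀/(1 − x₀t), which reaches infinity at the finite time t = 1/x₀.

```python
# Finite escape time, a purely nonlinear phenomenon: dx/dt = x^2.
def x_exact(t, x0):
    """Exact solution x(t) = x0 / (1 - x0*t), valid for t < 1/x0."""
    return x0 / (1.0 - x0 * t)

x0 = 2.0
escape_time = 1.0 / x0   # the solution blows up as t -> 0.5
# Shortly before the escape time the state is already enormous:
near_blowup = x_exact(0.499999, x0)
```

No linear system ẋ = Ax can do this: its solutions grow at most exponentially, so they reach infinity only as t → +∞.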

A Dynamical View of the World (Strogatz)

One axis tells us the number of variables needed to characterize the state of the
system. Equivalently, this number is called the dimension of the phase space. The
other axis tells us whether the system is linear or nonlinear.

Admittedly, some aspects of the picture are debatable. You may think that some
topics should be added, or placed differently, or even that more axes are needed. The
point is to think about classifying systems on the basis of their dynamics.

There are some striking patterns in the above figure. All the simplest systems occur in
the upper left hand corner. These are the small linear systems that we learn about in
the first few years of college. Roughly speaking, these linear systems exhibit growth,
decay or equilibrium when n = 1, or oscillations when n = 2. For example, an RC
circuit has n = 1 and cannot oscillate, whereas an RLC circuit has n = 2 and can
oscillate.

The next most familiar part of the picture is the upper right hand corner. This is the
domain of classical applied mechanics and mathematical physics where the linear
partial differential equations live. Here we find Maxwell's equations of electricity and
magnetism, the heat equation and so on. These partial differential equations involve
an infinite continuum of variables because each point in space contributes
additional degrees of freedom. Even though such systems are large, they are tractable,
thanks to such linear techniques as Fourier analysis and transform methods.

In contrast, the lower half of the figure (the nonlinear half) is often ignored or
deferred to other courses. No more! In this class, we will start at the lower left hand
corner and move to the right. As we increase the phase space dimension from n = 1 to
n = 3, we encounter new phenomena at every step, from fixed points and bifurcations
when n = 1, to nonlinear oscillations when n = 2, to chaos and fractals when n = 3. In
all cases, a geometric approach proves to be powerful and gives us most of the
information we want, even though we usually can't solve the equations in closed form.
You'll notice that the figure also contains a region forbiddingly marked "The
frontier." It's like in those old maps of the world, where the mapmakers wrote "Here
there be dragons" on the unexplored parts of the globe. These topics are not
completely unexplored, but it is fair to say that they lie at the limits of current
understanding. These problems are very hard, because they are both large and
nonlinear. The resulting behavior is typically complicated in both space and time, as
in the motion of a turbulent fluid or the patterns of electrical activity in a fibrillating
heart. Towards the end of the course, time permitting, we will touch on some of these
problems.


3 Phase-Plane Analysis

Phase plane analysis is a technique for the analysis of the qualitative behavior of second-
order systems. It provides physical insights.

Reference: Graham and McRuer, Analysis of Nonlinear Control Systems, Dover Press,
1971.

Consider the second-order system described by the following equations:

ẋ₁ = p(x₁, x₂)
ẋ₂ = q(x₁, x₂)

x₁ and x₂ are states of the system
p and q are nonlinear functions of the states

Key points

Phase plane analysis is limited to second-order systems.
For second order systems, solution trajectories can be represented by curves in
the plane, which allows for visualization of the qualitative behavior of the
system.
In particular, it is interesting to consider the behavior of systems around
equilibrium points. Under certain conditions, stability information can be
inferred from this.
phase plane = plane having x₁ and x₂ as coordinates
→ get rid of time

We look for equilibrium points of the system (also called singular points), i.e. points at
which:

p(x₁, x₂) = q(x₁, x₂) = 0

Example:

Find the equilibrium point(s) of the system described by the following equation:

Start by putting the system in the standard form by setting :

We have the following equilibrium point:

Looking at the slope of the phase plane trajectory:

dx₂/dx₁ = ẋ₂/ẋ₁ = q(x₁, x₂) / p(x₁, x₂)

Investigate the linear behaviour about a singular point:


Set the perturbation about the singular point as the new state, and linearize.

Then

ẋ = Ax with A = [a b; c d]

This is the general form of a second-order linear system.

Such a system is linear in the sense that if x₁ and x₂ are solutions, then so is any linear
combination c₁x₁ + c₂x₂. Notice that ẋ = 0 when x = 0, so the origin is always an
equilibrium point for any choice of A. The solutions of ẋ = Ax can be visualized as
trajectories moving on the (x₁, x₂) plane, in this context called the phase plane.

Phase Plane Example: Simple Harmonic Oscillator

As discussed in elementary physics courses, the vibrations of a mass hanging from a
linear spring are governed by the linear differential equation mẍ + kx = 0, where m is the
mass, k is the spring constant, and x is the displacement of the mass from equilibrium.

As you'll probably recall, it is easy to solve the equation in terms of sines and cosines.
This is what makes linear systems so special. For the nonlinear equations of ultimate
interest to us, it's usually impossible to find an analytic solution. We want to develop
methods to deduce the behaviour of ODEs without actually solving them.

A vector field that comes from the original differential equation determines the motion in
the phase plane. To find this vector field, we note that the state of the system is
characterized by its current position x and velocity v. If we know the values of both x and
v, then the equation above uniquely determines the future states of the system. We can
rewrite the ODE in terms of the state variables, as follows:

ẋ = v
v̇ = −(k/m) x = −ω² x


This system assigns a vector (ẋ, v̇) to each point (x, v) and therefore represents a vector
field on the phase plane.

For example, let's see what the vector field looks like when we're on the x-axis. Then v
= 0, so (ẋ, v̇) = (0, −ω²x). The vectors point vertically downward for positive x and
vertically upward for negative x. As x gets larger in magnitude, the vectors get longer.
Similarly, on the v-axis, the vector field is (ẋ, v̇) = (v, 0), which points to the right when
v > 0 and to the left when v < 0.

The flow above swirls about the origin. The origin is special, like the eye of a hurricane.
A phase point placed there would remain motionless, because (ẋ, v̇) = (0, 0) when
(x, v) = (0, 0). The origin is a fixed point (or an equilibrium point). But a phase point
starting anywhere else would circulate around the origin and eventually return to its
starting point. Such trajectories form closed orbits. The figure below is called the phase
portrait of the system. It shows the overall picture of trajectories in the phase space.

What do fixed points and closed orbits have to do with the problem of a mass on a
spring? The answers are beautifully simple. The fixed point (x, v) = (0, 0) corresponds to a
static equilibrium of the system: the mass is at rest at its equilibrium position and will
remain there forever, since the spring is relaxed. The closed orbits have a more
interesting interpretation: they correspond to periodic motion, that is, oscillations of the
mass. To see this, we can look at some points on a closed orbit. When the displacement x
is most negative, the velocity v is zero. This corresponds to one extreme of the
oscillation, when the spring is most compressed. In the next instant, as the phase point
flows along the orbit, it is carried to points where x has increased and v is now positive;
the mass is being pushed back towards its equilibrium position. But by the time the mass
has reached x = 0, it has a large positive velocity, and it overshoots x = 0. The mass
eventually comes to rest at the other end of the swing, where x is most positive and v is
zero again. Then the mass gets pulled up and completes the cycle.

The shape of the closed orbits also has an interesting physical interpretation. The orbits
are actually ellipses given by the equation

ω² x² + v² = C,   where C is a positive constant.

One can show that this geometric result is equivalent to conservation of energy.
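A quick numerical check of this claim (the values of ω and the initial condition below are arbitrary choices of ours, not from the text): integrating the oscillator and evaluating ω²x² + v² at both ends shows the trajectory stays on its ellipse.

```python
def simulate_oscillator(omega, x0, v0, dt=1e-4, steps=100_000):
    """Integrate xdot = v, vdot = -omega**2 * x with semi-implicit Euler,
    which tracks the orbit's constant omega^2 x^2 + v^2 well."""
    x, v = x0, v0
    for _ in range(steps):
        v -= omega**2 * x * dt   # update velocity first
        x += v * dt              # then position (symplectic Euler)
    return x, v

omega, x0, v0 = 2.0, 1.0, 0.5
x, v = simulate_oscillator(omega, x0, v0)
C0 = omega**2 * x0**2 + v0**2    # the constant C of the elliptical orbit
C1 = omega**2 * x**2 + v**2
print(C0, C1)                    # nearly equal: the phase point stays on its ellipse
```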

Back to the phase plane method:
Next, we obtain the characteristic equation:

det [ a − λ     b
        c     d − λ ] = 0

which yields

(λ − a)(λ − d) − bc = 0


λ₁,₂ = (a + d)/2 ± √[ (a + d)²/4 − (ad − bc) ]

This yields the following possible cases:

λ₁, λ₂ real and negative                →  Stable node
λ₁, λ₂ real and positive                →  Unstable node
λ₁, λ₂ real and of opposite signs       →  Saddle point
λ₁, λ₂ complex with negative real parts →  Stable focus
λ₁, λ₂ complex with positive real parts →  Unstable focus
λ₁, λ₂ complex with zero real parts     →  Center



In Class Problem:

Graph the phase portraits for the linear system ẋ = Ax, where

A = [ a   0
      0  −1 ]

Solution: the system can be written as:

ẋ = a x
ẏ = −y

The equations are uncoupled. In this simple case, each equation may be solved
separately. The solution is:

x(t) = x₀ e^{at}
y(t) = y₀ e^{−t}

The phase portraits for different values of a are shown below. In each case, y decays
exponentially. Name the different cases.
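The closed-form solutions above can be checked numerically; the value a = −0.5 and the initial conditions below are arbitrary choices of ours for illustration.

```python
import numpy as np

# Verify x(t) = x0*exp(a*t), y(t) = y0*exp(-t) against a forward-Euler
# integration of the uncoupled system xdot = a*x, ydot = -y up to t = 1.
a, x0, y0 = -0.5, 2.0, 3.0
dt, T = 1e-4, 1.0
x, y = x0, y0
for _ in range(int(T / dt)):
    x += a * x * dt
    y += -y * dt
print(x, x0 * np.exp(a * T))   # numerical vs. exact solution
print(y, y0 * np.exp(-T))
```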

A complete phase space analysis: Lotka-Volterra Model

We consider here the classic Lotka-Volterra model of competition between two species,
here imagined to be rabbits and sheep. Suppose both species are competing for the same
food supply (grass) and the amount available is limited. Also, let's ignore all other
complications, like predators, seasonal effects, and other sources of food. There are two
main effects that we wish to consider:

1. Either species would grow to its carrying capacity in the absence of the other.
This can be modeled by assuming logistic growth for each species. Rabbits have a
legendary ability to reproduce, so perhaps we should assign them a higher
intrinsic growth rate.
2. When rabbits and sheep encounter each other, the trouble starts. Sometimes the
rabbit gets to eat, but more usually the sheep nudges the rabbit aside and starts
nibbling (on the grass). We'll assume that these conflicts occur at a rate
proportional to the size of each population. (If there are twice as many sheep, the
odds of a rabbit encountering a sheep are twice as great). Also, assume that the
conflicts reduce the growth rate for each species, but the effect is more severe for
the rabbits.

A specific model that incorporates these assumptions is:
ẋ = x(3 − x − 2y)
ẏ = y(2 − x − y)

where x(t) is the population of rabbits and y(t) is the population of sheep. Of course, x
and y are positive. The coefficients have been chosen to reflect the described scenario,
but are otherwise arbitrary.

There are four fixed points for this system: (0,0), (0,2), (3,0) and (1,1). To classify them,
we start by computing the Jacobian:

A = [ 3 − 2x − 2y      −2x
          −y       2 − x − 2y ]

To do the analysis, we have to consider the four points in turn.

(0,0): Then

A = [ 3  0
      0  2 ]
The eigenvalues are both positive, at 3 and 2, so this is an unstable node. Trajectories
leave the origin parallel to the eigenvector for λ = 2, that is, tangential to v = (0,1), which
spans the y-axis. (General rule: at a node, the trajectories are tangential to the slow
eigendirection, which is the eigendirection with the smallest |λ|.)

(0,2): Then

A = [ −1   0
      −2  −2 ]

The matrix has eigenvalues −1, −2. The point is a stable node. Trajectories approach along
the eigendirection associated with −1. You can check that this direction is spanned by (1, −2).

(3,0): Then

A = [ −3  −6
       0  −1 ]

The matrix has eigenvalues -1, -3. The point is a stable node. Trajectories approach along
the slow eigendirection. You can check that this direction is spanned by (3, -1).

(1,1): Then

A = [ −1  −2
      −1  −1 ]


The matrix has eigenvalues λ = −1 ± √2. This is a saddle point. The phase portrait is as shown
below:

Assembling the figures, we get:

Also, the x and y axes remain straight-line trajectories, since ẋ = 0 when x = 0 and,
similarly, ẏ = 0 when y = 0.

We can assemble the entire phase portrait:

This phase portrait has an interesting biological interpretation. It shows that one species
generally drives the other to extinction. Trajectories starting below the stable manifold
lead to the eventual extinction of the sheep, while those starting above lead to the
eventual extinction of the rabbits. This dichotomy occurs in other models of competition
and has led biologists to formulate the principle of competitive exclusion, which states that
two species competing for the same limited resource cannot typically coexist.
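The classification of the four fixed points can be verified numerically from the Jacobian above:

```python
import numpy as np

# Jacobian of the rabbit-sheep model xdot = x(3 - x - 2y), ydot = y(2 - x - y)
def jacobian(x, y):
    return np.array([[3 - 2*x - 2*y, -2*x],
                     [-y,            2 - x - 2*y]])

# Eigenvalues at each fixed point reproduce the classification in the text
for point in [(0, 0), (0, 2), (3, 0), (1, 1)]:
    eig = np.linalg.eigvals(jacobian(*point))
    print(point, np.sort(eig.real))
```

The printed eigenvalues are (3, 2) at the origin (unstable node), (−1, −2) and (−3, −1) at the two stable nodes, and −1 ± √2 at (1, 1) (saddle).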


Stability (Lyapunov's First Method)

Consider the system described by the equation:

ẋ = f(x)

Write x as:

x = xₑ + δx

Then

δẋ = A δx + h(δx),   where A = ∂f/∂x evaluated at xₑ

Lyapunov proved that the eigenvalues of A indicate local stability of the nonlinear
system about the equilibrium point if:

a) lim_{‖δx‖→0} ‖h(δx)‖ / ‖δx‖ = 0   (the linear terms dominate)

b) There are no eigenvalues with zero real part.

Example:

Consider the equation:

If x is small enough, then

Thought question: What if a = 0?
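As a concrete illustration of the first method (the scalar system ẋ = −x + x³ below is our own choice, not an example from the notes): the linearization at the equilibrium xₑ = 0 has the single eigenvalue −1, so a small perturbation should decay.

```python
# Lyapunov's first method on a scalar example of our own choosing:
#     xdot = f(x) = -x + x**3,  equilibrium x_e = 0.
f = lambda x: -x + x**3

# Numerical Jacobian A = df/dx at x_e = 0 (central difference)
eps = 1e-6
A = (f(eps) - f(-eps)) / (2 * eps)
print(A)   # close to -1: negative eigenvalue, so x_e = 0 is locally stable

# A small perturbation decays, as the linearization predicts
x, dt = 0.1, 1e-3
for _ in range(5000):   # integrate to t = 5
    x += f(x) * dt
print(abs(x))           # far smaller than the initial 0.1
```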

Example: Simplified satellite control problem

Built in the 1960s.

After about one month, the satellite would run out of gas.

How was the controller designed?

Let's pick .

It's cold in space: the valves would freeze open. If and are small, there is not enough
torque to break the ice, so the valves get frozen open and all the gas escapes. One
solution is relay control and/or bang-bang control. (These methods are inelegant.)

Pick , and .

Case 1: Pick u = 0. The satellite just floats.



In the thick black line interval, all trajectories point towards the switching line.


On the line, . (a>0).

On average:

On the average, the trajectory goes to the origin.

Introduction to Sliding Mode Control (also called Variable Structure Control)

Consider the system governed by the equation:

Inspired by the previous example, we select a control law of the form:

where . How should we pick the function s?


Case 1:

This does not yield the performance we want.

Case 2:

This does not yield the performance we want.

Case 3:


When is ?

Let s>0. Then

That is, if s>0, iff

Example

Consider the system governed by the equation:

where d(t) is an unknown disturbance. The disturbance d is bounded, that is,

The goal of the controller is to guarantee the type of response shown below.

1) Is it possible to design a controller that guarantees this response assuming no
bounds on u?
2) If your answer on question (1) is yes, design the controller.
The desired behavior is a first-order response. Define

If s=0, we have the desired system response. Hence our goal is to drive s to zero.

If u appears in the equation for ṡ, set ṡ = 0 and solve for u. Unfortunately, this is not the
case here. Keep differentiating the equation for s until u appears.

Look for the condition for .

We therefore select u to be:

The first term dictates that one always approaches zero. The second term is called the
switching term. The parameter λ is a tuning parameter that governs how fast one goes to
zero.

Once the trajectory crosses the s = 0 line, the goals are met, and the system "slides"
along the line. Hence the name "sliding mode control."

Does the switching surface s have to be a line?

No, but it keeps the problem analyzable.
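A minimal simulation sketch of such a switching controller. The plant equation in the notes did not survive extraction, so the scalar stand-in ẋ = u + d(t) with |d| ≤ d_max is our own choice; with s = x and u = −λx − η·sgn(s), η > d_max, we get s·ṡ ≤ −λs² − (η − d_max)|s| < 0, so the trajectory is driven to s = 0.

```python
import numpy as np

# Sliding-mode control of the stand-in plant xdot = u + d(t), |d| <= d_max.
d_max, lam, eta = 1.0, 2.0, 1.5      # eta > d_max guarantees s*sdot < 0
dt, x = 1e-4, 1.0
for k in range(50_000):              # integrate to t = 5
    t = k * dt
    d = d_max * np.sin(3 * t)        # some bounded, unknown-to-u disturbance
    u = -lam * x - eta * np.sign(x)  # switching control law, s = x
    x += (u + d) * dt
print(abs(x))  # the state chatters in a tiny neighborhood of s = 0
```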

Example of a nonlinear switching surface

Consider the system governed by the equation:

For a mechanical system, an analogy would be making a cart reach a given position at
zero velocity in minimal time.

The request for a minimal time solution suggests a bang-bang type of approach.

This can be obtained, for example, with the following expression for s:

The shape of the sliding surface is as shown below.

This corresponds to the following block diagram:

Logic is missing for the case when s is exactly equal to zero. In practice, for a
continuous system such as that shown above, this case is never reached.
Classical Phase-Plane Analysis Examples

Reference: GM Chapter 7

Example: Position control servo (rotational)

Case 1: Effect of dry friction

The governing equation is as follows:

For simplicity and without loss of generality, assume that I = 1. Then:

That yields:


The friction function is given by:

There are an infinite number of singular points, as shown below:

When , we have , that is, we have an undamped linear oscillation
(a center). Similarly, when , we have (another center).

From a controls perspective, dry friction results in an offset, that is, a loss of static
accuracy.

To get the accuracy back, it is possible to introduce dither into the system. Dither is a
high-frequency, low-amplitude disturbance (an analogy would be tapping an offset scale).

On average, the effect of dither "pulls you in." Dither is a linearizing agent that
transforms Coulomb friction into viscous friction.

Example: Servo with saturation

There are three different zones created by the saturation function:


The effects of saturation do not look destabilizing. However, saturation affects the
performance by slowing the system down.

Note that we are assuming here that the system was stable to start with before we applied
saturation.

Problems appear if one is not operating in the linear region, which indicates that the gain
should be reduced in the saturated region.

If you increase the gain of a linear system, it often eventually becomes unstable, unless
the root locus looks like:

Root locus for a conditionally stable system (for example an inverted pendulum).

So there are systems for which saturation will make you unstable.

SUMMARY: Second-Order Systems and Phase-Plane Analysis

Graphical Study of Second-Order Autonomous Systems

x₁ and x₂ are states of the system
p and q are nonlinear functions of the states

phase plane = plane having x₁ and x₂ as coordinates
→ get rid of time

As t goes from 0 to +∞, and given some initial conditions, the solution x(t) can be
represented geometrically as a curve (a trajectory) in the phase plane. The family of
phase-plane trajectories corresponding to all possible initial conditions is called the phase
portrait.

Due to Henri Poincaré

French mathematician (1854-1912).

Main contributions:
! Algebraic topology
! Differential Equations
! Theory of complex variables
! Orbits and Gravitation
! http://www-history.mcs.st-andrews.ac.uk/history/Mathematicians/Poincare.html

Poincaré conjecture
In 1904 Poincaré conjectured that any closed 3-dimensional manifold which is homotopy
equivalent to the 3-sphere must be the 3-sphere. Higher-dimensional analogues of this
conjecture were proved first; the original conjecture was finally proved by Grigori
Perelman in the early 2000s.
Equilibrium (singular point)

Singular point = equilibrium point in the phase plane


Slope of the phase trajectory

At an equilibrium point, the value of the slope is indeterminate (0/0) → singular point.

Investigate the linear behaviour about a singular point

Set

Then

Which is the general form of a second-order linear system.

Obtain the characteristic equation

λ₁,₂ = (a + d)/2 ± √[ (a + d)²/4 − (ad − bc) ]

Possible cases

Pictures are from H. Khalil, Nonlinear Systems, Second Edition.


λ₁ and λ₂ are real and negative → STABLE NODE

λ₁ and λ₂ are real and positive → UNSTABLE NODE

λ₁ and λ₂ are real and of opposite sign → SADDLE POINT

λ₁ and λ₂ are complex with negative real parts → STABLE FOCUS

λ₁ and λ₂ are complex with positive real parts → UNSTABLE FOCUS

λ₁ and λ₂ are complex with zero real parts → CENTER

Which direction do circles and spirals spin, and what does this mean?

Consider the system:

Let and .

With half a page of straightforward algebra, one can show that: (see homework 1 for details)

and

The r equation says that in a Jordan block, the diagonal element, σ, determines whether
the equilibrium is stable. Since r is always non-negative, σ greater than zero gives a
growing radius (unstable), while σ less than zero gives a shrinking radius. ω gives the rate
and direction of rotation, but has no effect on stability. For a given physical system,
simply re-assigning the states can get either positive or negative ω.

In summary:

If σ > 0, the phase plot spirals outwards.
If σ < 0, the phase plot spirals inwards.

If ω > 0, the arrows on the phase plot are clockwise.
If ω < 0, the arrows on the phase plot are counter-clockwise.
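A numerical check of this summary. The canonical planar system below, with eigenvalues σ ± jω, is our own illustration; with this particular state assignment, ω > 0 spins the trajectory clockwise.

```python
import numpy as np

# Canonical system with eigenvalues sigma +/- j*omega:
#     xdot = sigma*x + omega*y,  ydot = -omega*x + sigma*y,
# for which rdot = sigma*r and thetadot = -omega (clockwise for omega > 0).
sigma, omega = -0.3, 2.0
dt, x, y = 1e-4, 1.0, 0.0
r0, theta0 = np.hypot(x, y), np.arctan2(y, x)
for _ in range(10_000):   # integrate to t = 1 with forward Euler
    x, y = x + (sigma*x + omega*y)*dt, y + (-omega*x + sigma*y)*dt
r1, theta1 = np.hypot(x, y), np.arctan2(y, x)
print(r1, r0 * np.exp(sigma * 1.0))   # radius shrinks like exp(sigma*t)
print(theta1 - theta0)                # about -omega*t = -2 rad: clockwise
```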

Stability

x = xₑ + δx

Lyapunov proved that the eigenvalues of A indicate local stability if:

(a) the linear terms dominate, that is: lim_{‖δx‖→0} ‖h(δx)‖ / ‖δx‖ = 0

(b) there are no eigenvalues with zero real part.


4 `Equilibrium Finding

We consider systems that can be written in the following general form, where x is the
state of the system, u is the control input, and f is a nonlinear function.

Let u = uₑ = constant.

At an equilibrium point, ẋ = 0, that is, f(xₑ, uₑ) = 0.

Key points

Nonlinear systems may have any number of equilibrium points (from zero to
infinity). These are obtained from the solution of n algebraic equations in n
unknowns.
The global implicit function theorem states conditions for uniqueness of an
equilibrium point.
Numerical solutions for the equilibrium points can be obtained using several
methods, including (but not limited to) the method of Newton-Raphson and
steepest descent techniques.

To obtain the equilibrium points, one has to solve n algebraic equations in n unknowns.

How can we find out if an equilibrium point is unique? See next section.

Global Implicit Function Theorem

Define the Jacobian of f:

J(x) = ∂f/∂x

The solution xₑ of f(x, uₑ) = 0 for a fixed uₑ is unique provided:

1. det[J(x)] ≠ 0 for all x
2. lim_{‖x‖→∞} ‖f(x)‖ = ∞

Note: in general these two conditions are hard to evaluate (particularly condition 1).

For peace of mind, check this with linear system theory. Suppose we had a linear system
ẋ = Ax. Is xₑ unique? J = A, whose determinant is nonzero for all x (provided A is
nonsingular), and f = Ax, so ‖f(x)‖ → ∞ as ‖x‖ → ∞ and the limit condition is true as
well (good!).

How does one generate numerical solutions to f(x, uₑ) = 0 (for a fixed uₑ)?

There are many methods to find numerical solutions to this equation, including, but not
limited to:
- Random search methods
- Methods that require analytical gradients (best)
- Methods that compute numerical gradients (easiest)

Two popular gradient-based iterative methods include:
- The method of Newton-Raphson
- The steepest descent method
Usually both methods are combined.

The method of Newton-Raphson

We want to find solutions to the equation f(x) = 0. We have a value, xᵢ, at the i-th
iteration and an error, eᵢ, such that eᵢ = f(xᵢ).
We want an iteration algorithm so that e_{i+1} → 0.

Expand in a first-order Taylor series expansion. We have:

e_{i+1} = f(x_{i+1}) ≈ f(xᵢ) + J(xᵢ)(x_{i+1} − xᵢ)

Then, setting e_{i+1} = 0, we get an expression for the Newton-Raphson iteration:

x_{i+1} = xᵢ − J(xᵢ)⁻¹ f(xᵢ)

Note: One needs to evaluate (OK) and invert (not so good) the Jacobian.
Note: This leads to good convergence properties close to xₑ, but it is sensitive to large
starting errors.
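A minimal sketch of the Newton-Raphson iteration on a toy two-state system of our own choosing (not an example from the notes):

```python
import numpy as np

# Find a root of f(x) = 0 for the toy system below via x <- x - J(x)^-1 f(x).
def f(x):
    return np.array([x[0]**2 + x[1] - 3.0,
                     x[0] - x[1]])

def J(x):  # analytical Jacobian of f
    return np.array([[2*x[0], 1.0],
                     [1.0,   -1.0]])

x = np.array([2.0, 0.5])                   # starting guess
for _ in range(20):
    x = x - np.linalg.solve(J(x), f(x))    # solve a linear system rather than invert J
print(x, f(x))                             # f(x) is driven to zero
```

Note the use of `np.linalg.solve` instead of forming J⁻¹ explicitly, which is both cheaper and numerically safer.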

Steepest Descent Technique (Hill Climbing)

Define a scalar function of the error, then choose Δxᵢ to guarantee a reduction in this
scalar at each step.

Define: L(i) = ½ eᵢᵀ eᵢ, which is a guaranteed positive scalar. We attempt to minimize L.

We expand L in a first-order Taylor series expansion:

L(i+1) ≈ L(i) + (∂L/∂x)ᵀ Δxᵢ,   where Δxᵢ = x_{i+1} − xᵢ and ∂L/∂x = J(xᵢ)ᵀ f(xᵢ)

We want to impose the condition: L(i+1) < L(i).

This implies:

Δxᵢ = −η (∂L/∂x)

where η is a positive scalar.

This yields:

and

L(i+1) − L(i) ≈ −η ‖∂L/∂x‖² < 0

That is, the steepest descent iteration is given by:

x_{i+1} = xᵢ − η J(xᵢ)ᵀ f(xᵢ)

Note: One needs to evaluate J but not invert it (good).
Note: This has good starting properties but poor convergence properties.

Note: Usually, the method of Newton-Raphson and the steepest descent method are
combined:

x_{i+1} = xᵢ − [ k₁ J(xᵢ)⁻¹ + k₂ J(xᵢ)ᵀ ] f(xᵢ)

where k₁ and k₂ are variable weights.
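A matching sketch of the steepest descent iteration, on the same kind of toy system (again our own choice of f, J, and step size):

```python
import numpy as np

# Minimize L = (1/2) f^T f via x <- x - eta * J(x)^T f(x); no inverse of J needed.
def f(x):
    return np.array([x[0]**2 + x[1] - 3.0, x[0] - x[1]])

def J(x):
    return np.array([[2*x[0], 1.0], [1.0, -1.0]])

x, eta = np.array([2.0, 0.5]), 0.05
for _ in range(2000):                # many more iterations than Newton-Raphson
    x = x - eta * J(x).T @ f(x)
print(x, 0.5 * f(x) @ f(x))          # L is driven toward zero
```

The contrast with the previous sketch is the point: each step here is cheap (no Jacobian inverse), but convergence is only linear, which is why the two methods are usually combined.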


6 `Controllability and
Observability of
Nonlinear Systems

Controllability for Nonlinear Systems

The Use of Lie Brackets: Definition

We shall call a vector function f: ℝⁿ → ℝⁿ a vector field in ℝⁿ, to be consistent with
terminology used in differential geometry. The intuitive reason for this term is that to
every vector function f corresponds a field of vectors in an n-dimensional space (one can
think of a vector f(x) emanating from every point x). In the following we shall only be
interested in smooth vector fields. By smoothness of a vector field, we mean that the
function f(x) has continuous partial derivatives of any required order.

Key points

Nonlinear observability is intimately tied to the Lie derivative. The Lie
derivative is the derivative of a scalar function along a vector field.
Nonlinear controllability is intimately tied to the Lie bracket. The Lie bracket
can be thought of as the derivative of a vector field with respect to another.
References
o Slotine and Li, section 6.2 (easiest)
o Sastry, chapter 11 pages 510-516, section 3.9 and chapter 8
o Isidori, chapter 1 and appendix A (hard)


Consider two vector fields f(x) and g(x) on ℝⁿ. Then the Lie bracket operation generates
a new vector field defined by:

[f, g] = (∂g/∂x) f − (∂f/∂x) g

The Lie bracket [f, g] is commonly written ad_f g.
Also, higher-order Lie brackets can be defined recursively:

ad_f⁰ g ≜ g
ad_f¹ g ≜ [f, g]
ad_f² g ≜ [f, [f, g]]
...
ad_f^k g ≜ [f, ad_f^{k−1} g]   for k = 1, 2, 3, ...

Recap Controllability for Linear Systems

C = [ B | AB | ... | A^{n−1} B ]

Local conditions (linear systems)

Let u = constant (otherwise no problem, but you get u̇, ü, etc.)

For linear systems, you get nothing new after the n-th derivative because of the Cayley-
Hamilton theorem.

Re-writing controllability conditions for linear systems using this notation:

ẋ = Ax + Σ_{i=1}^{m} Bᵢ uᵢ,   f(x) = Ax

ẍ = Aẋ = A²x + Σ_{i=1}^{m} A Bᵢ uᵢ = A²x − Σ_{i=1}^{m} [f, Bᵢ] uᵢ


So for example:

If we keep going:

x⃛ = A³x + Σ_{i=1}^{m} A² Bᵢ uᵢ = A³x + Σ_{i=1}^{m} ad_f² Bᵢ uᵢ

Notice how this time the minus signs cancel out.

x^{(n)} = dⁿx/dtⁿ = Aⁿ x + Σ_{i=1}^{m} A^{n−1} Bᵢ uᵢ = Aⁿ x + (−1)^{n−1} Σ_{i=1}^{m} ad_f^{n−1} Bᵢ uᵢ

Re-writing the controllability condition:

C = [ B₁, ..., B_m, ad_f B₁, ..., ad_f B_m, ..., ad_f^{n−1} B₁, ..., ad_f^{n−1} B_m ]

The condition has not changed, just the notation.
The terms B₁ through B_m correspond to the B term in the original matrix, the terms with
ad_f correspond to the AB terms, and the terms with ad_f^{n−1} correspond to the A^{n−1}B terms.

Nonlinear Systems

Assume we have an affine system:

ẋ = f(x) + Σ_{i=1}^{m} gᵢ(x) uᵢ

The general case is much more involved and is given in Hermann and Krener.
If we don't have an affine system, we can sometimes use a ruse:

Let u̇ = v.
Select a new state z = (x, u), and v is my control → the system is affine in (z, v).

Theorem

The system defined by:

ẋ = f(x) + Σ_{i=1}^{m} gᵢ(x) uᵢ

is locally accessible about x₀ if the accessibility distribution C spans n space, where n is
the dimension of x and C is defined by:

C = [ g₁, g₂, ..., g_m, [gᵢ, g_j], ..., ad_{gᵢ}^k g_j, ..., [f, gᵢ], ..., ad_f^k gᵢ, ... ]

The gᵢ terms are analogous to the B terms, the [gᵢ, g_j] terms are new from having a
nonlinear system, the [f, gᵢ] terms correspond to the AB terms, etc.

Note: if f(x) = 0 and, in this case, C has rank n, then the system is controllable.

Example: Kinematics of an Axle

Basically, the remaining state is the yaw angle of the vehicle, and x₁ and x₂ are the
Cartesian locations of the wheels. u₁ is the velocity of the front wheels, in the direction
that they are pointing, and u₂ is the steering velocity.

We define our state vector to be:

Our dynamics are:

The system is of the form:

f(x) = 0, and

Note:

If I linearize a nonlinear system about x₀ and the linearization is controllable, then the
nonlinear system is accessible at x₀. (This is not true the other way: if the linearization is
uncontrollable, the nonlinear system may still be locally accessible.)

Back to the example:

where and in our case,


So

C has rank 3 everywhere, so the system is locally accessible everywhere, and f(x)=0 (free
dynamics system) so the system is controllable!
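This computation can be reproduced symbolically. The notes' exact equations did not survive extraction, so the sketch below assumes the standard kinematic (unicycle) model usually used for this classic example, with state (x₁, x₂, θ):

```python
import sympy as sp

# Assumed standard kinematics (our stand-in for the notes' axle model):
#   x1dot = cos(theta) u1,  x2dot = sin(theta) u1,  thetadot = u2
x1, x2, th = sp.symbols('x1 x2 theta')
X = sp.Matrix([x1, x2, th])
g1 = sp.Matrix([sp.cos(th), sp.sin(th), 0])   # "drive" vector field
g2 = sp.Matrix([0, 0, 1])                      # "steer" vector field

def lie_bracket(f, g):
    """[f, g] = (dg/dx) f - (df/dx) g."""
    return g.jacobian(X) * f - f.jacobian(X) * g

g3 = sp.simplify(lie_bracket(g1, g2))          # a genuinely new direction
C = sp.Matrix.hstack(g1, g2, g3)
print(g3.T)                                    # (sin(theta), -cos(theta), 0)
print(sp.simplify(C.det()))                    # nonzero for all theta -> rank 3
```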

Example 2:

Note: if I had the linear system:

, , "

and the linear system is controllable.

Back to the example 2:

Is the nonlinear system controllable? The answer is NO, because x₁ can only increase.

But let's show it.

In standard form:

,

So

Accessible everywhere except where x₂ = 0.

If we tried [f, [f, g]], would we pick up new directions? It turns out they will also be
dependent on x₂, and the rank will drop at x₂ = 0.

Example 3:

where

The system is of the form:

where

, and

We have:

If C has rank 4, then the system is locally accessible. Have fun

Observability for Nonlinear Systems

Intuition for observability:
From observing the sensor(s) for a finite period of time, can I find the state at previous
times?

Review of Linear Systems

ẋ = Ax + Bu
z = Mx, where M is p×n and p < n

(linear system)

Given z(t), 0 ≤ t ≤ T, can I determine x₀?

z(t) = M e^{At} x₀, where M is p×n, e^{At} is n×n, and x₀ is n×1, so z(t) is p×1

Using the Cayley-Hamilton theorem:

Note: the Cayley-Hamilton theorem applies to time-varying matrices as well.

So, I have:

z(t) = { α₀(t) M + α₁(t) M A + ... + α_{n−1}(t) M A^{n−1} } x₀

So I can solve for x₀ iff the matrix O spans n space, where:


This does not carry over to nonlinear systems, so we take a local approach.

Local Approach to Observability (Linear Systems)

v(t) is the measurement noise; it can cause problems.

z^{(n−1)} = M A^{n−1} x

→ O must have rank n

Lie Derivatives:

The gradient of a smooth scalar function h(x) of the state x is denoted by:

∇h = ∂h/∂x

The gradient is represented by a row vector of elements: (∇h)_j = ∂h/∂x_j.

Similarly, given the vector field f(x), the Jacobian of f is:

∇f = ∂f/∂x

It is represented by an n×n matrix of elements: (∇f)_{ij} = ∂fᵢ/∂x_j.

Definition

Let f: ℝⁿ → ℝⁿ be a vector field in ℝⁿ.
Let h: ℝⁿ → ℝ be a smooth scalar function.

Then the Lie derivative of h with respect to f is a new scalar defined by:

L_f h = ∇h · f = (∂h/∂x) f

Dimensions

f looks like: a column of n functions of x
h looks like: h(x) with x ∈ ℝⁿ → associates a scalar to each point in ℝⁿ

The Lie derivative looks like: a 1×n row vector (∂h/∂x) times an n×1 column vector f

→ L_f h is a scalar.

Conventions:

By definition, L_f⁰ h = h.

We can also define higher-order Lie derivatives:

L_f² h = L_f(L_f h), etc.

One can easily see the relevance of Lie derivatives to dynamic systems by considering
the following single-output system:

ẋ = f(x)
y = h(x)

Then

ẏ = (∂h/∂x) ẋ = L_f h

And

ÿ = L_f(L_f h) = L_f² h

Etc., so y^{(k)} = L_f^k h.
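A small symbolic illustration of the Lie derivative (the fields f and h below are our own example, chosen so the answer is easy to interpret):

```python
import sympy as sp

# Lie derivative L_f h = (dh/dx) f for the rotation field f = (x2, -x1)
# and the squared distance to the origin h = x1**2 + x2**2.
x1, x2 = sp.symbols('x1 x2')
X = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -x1])

def lie_derivative(h, f):
    return (sp.Matrix([h]).jacobian(X) * f)[0]

h = x1**2 + x2**2
L1 = sp.simplify(lie_derivative(h, f))
print(L1)   # 0: h is constant along the rotation's flow (closed circular orbits)
```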

Use of Lie Derivative Notation for Linear Systems

ẋ = Ax, so f(x) = Ax

zᵢ = hᵢ(x) = Mᵢ x, where Mᵢ is 1×n

Define

G = [ L_f⁰(h₁)      ...  L_f⁰(h_p)           [ M₁ x          ...  M_p x
      ...           ...  ...             =     ...           ...  ...
      L_f^{n−1}(h₁) ...  L_f^{n−1}(h_p) ]      M₁ A^{n−1} x  ...  M_p A^{n−1} x ]

O must have rank n for the system to be observable.

Nonlinear Systems

Theorem:

Let G denote the set of all finite linear combinations of the Lie derivatives of h₁, ..., h_p
with respect to f, for various values of u = constant. Let dG denote the set of all their
gradients. If we can find n linearly independent vectors within dG, then the system is
locally observable.

The system is locally observable, that is, distinguishable at a point x₀, if there exists a
neighborhood of x₀ such that in this neighborhood,

x ≠ x₀ ⇒ z(x) ≠ z(x₀)

Case of a single measurement:

Look at the derivatives of z:

Let:

Expand in a first-order series about x
0
for u = u
0

Then must have rank n

Example:


The question we are trying to answer is: from observation, does z contain enough
information on x₁ and x₂?
Since x₁ = z, (by substitution in the first line)
If z = x₂, then the system would not be distinguishable, since

But if you take one more derivative you can get a single-valued expression, since it has
only one solution.

ż = 2x₁ ẋ₁ = 2x₁ [ x₁²/2 + exp(x₂) + x₂ ]
Rank test for z = h = x₁:

has rank 2 everywhere

→ the system is locally observable everywhere.
SUMMARY: Controllability and Observability for Nonlinear Systems

Controllability

The system is locally accessible about a point x₀ if and only if

C = [ g₁, ..., g_m, [gᵢ, g_j], ..., ad_{gᵢ}^k g_j, ..., [f, gᵢ], ..., ad_f^k gᵢ, ... ]

has rank n, where n is the dimension of x. C is the accessibility distribution.

If the system has the form ẋ = Σᵢ gᵢ(x) uᵢ, that is, f(x) = 0, and C has rank n, then the
system is controllable.

Observability

z=h(x)

Two states x₀ and x₁ are distinguishable if there exists an input function u* such that:

z(x₀) ≠ z(x₁)

The system is locally observable at x₀ if there exists a neighbourhood of x₀ such that
every x in that neighbourhood other than x₀ is distinguishable from x₀.

A test for local observability is that:

must have rank n, where n is the dimension of x, and

For a p×1 vector, z = [h₁, ..., h_p]ᵀ,



LINEAR SYSTEMS vs. NONLINEAR SYSTEMS

CONTROLLABILITY AND ACCESSIBILITY

Intuition: the system is controllable → you can get anywhere you want in a finite
amount of time.

LINEAR TIME-INVARIANT SYSTEMS — CONTROLLABILITY

The system is controllable if:
C = [ B  AB  ...  A^{n−1}B ]
has rank n, where n is the dimension of x.

AFFINE SYSTEMS — ACCESSIBILITY

The system is locally accessible about x₀ if and only if
C = [ g₁, ..., g_m, [gᵢ, g_j], ..., ad_{gᵢ}^k g_j, ..., [f, gᵢ], ..., ad_f^k gᵢ, ... ]
has rank n, where n is the dimension of x. C is the accessibility distribution.

CONTROLLABILITY

If f(x) = 0 and C has rank n, then the system is controllable.

Control of Nonlinear Dynamic Systems: Theory and Applications
J. K. Hedrick and A. Girard 2010

94

OBSERVABILITY AND DISTINGUISHABILITY

Intuition: the system is observable → from observing the sensor measurements for a
finite period of time, I can obtain the state at previous times.

z = Mx, where x is n×1, z is p×1, and p < n

OBSERVABILITY

The system is observable if:
O = [ M;  MA;  ...;  MA^{n−1} ]
has rank n, where n is the dimension of x.

z = h(x)

DISTINGUISHABILITY

Two states x₀ and x₁ are distinguishable if there exists an input function u* such that:
z(x₀) ≠ z(x₁)

LOCAL OBSERVABILITY

The system is locally observable at x₀ if there exists a neighbourhood of x₀ such that
every x in that neighbourhood other than x₀ is distinguishable from x₀.

A test for local observability is that:

must have rank n, where n is the dimension of x, and for a p×1 vector,
z = [h₁, ..., h_p]ᵀ.

Remarks

In general the conditions for nonlinear systems are weaker than those for linear
systems. Properties for nonlinear systems tend to be local.

What to do for nonlinear controllability if the system is not in affine form?

Let u̇ = v

→ z = [ x ; u ], and v is my control → the system is now affine in (z, v).

Marius Sophus Lie

Born: 17 Dec 1842 in Nordfjordeide, Norway
Died: 18 Feb 1899 in Kristiania (now Oslo), Norway

The Lie Derivative and Observability

Definition

Let f: ℝⁿ → ℝⁿ be a vector field in ℝⁿ.
Let h: ℝⁿ → ℝ be a smooth scalar function.

Then the Lie derivative of h with respect to f is:

L_f h = (∂h/∂x) f

Dimensions

f looks like: a column of n functions of x
h looks like: h(x) with x ∈ ℝⁿ → associates a scalar to each point in ℝⁿ

The Lie derivative looks like: a 1×n row vector (∂h/∂x) times an n×1 column vector f

→ L_f h is a scalar.

Physically (time for pictures!)

Picture of f
f associates an n-dimensional vector to each point in ℝⁿ

In ℝ²:

For example, let

f(x) = [ −1   0 ] [ x₁ ]
       [  0  −2 ] [ x₂ ]

φ_f^t(x₀) = flow along the vector field for time t, starting at x₀

→ tangent to the phase plane plot at every single point

Picture of h

For example, in R^2, pick h to be the distance to the origin.

Lie derivative picture

Using this example:

L_f h = [ ∂h/∂x1   ∂h/∂x2 ] [ -1   0 ] [ x1 ]
                            [  0  -2 ] [ x2 ]

So, the Lie derivative gives the rate of change in a scalar function h as one flows
along the vector field f.
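This rate-of-change interpretation can be checked numerically. The following is a minimal Python sketch (not from the text) that approximates L_f h by central finite differences, using the linear vector field of the example above and, as an assumed scalar function, h(x) = x1^2 + x2^2 (the squared distance to the origin, chosen to keep the gradient smooth):

```python
def lie_derivative(h, f, x, eps=1e-6):
    """Approximate L_f h (x) = grad(h)(x) . f(x) with central differences."""
    fx = f(x)
    total = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        total += (h(xp) - h(xm)) / (2 * eps) * fx[i]
    return total

# f(x) = [[-1, 0], [0, -2]] x, as in the example above
f = lambda x: [-x[0], -2.0 * x[1]]
# assumed scalar function: squared distance to the origin
h = lambda x: x[0] ** 2 + x[1] ** 2

# by hand: L_f h = 2*x1*(-x1) + 2*x2*(-2*x2); at (1, 1) this is -6
print(lie_derivative(h, f, [1.0, 1.0]))
```

Because the flow contracts toward the origin, L_f h is negative everywhere except the origin: the distance function decreases along every trajectory, which is exactly the "rate of change along the flow" reading of the Lie derivative.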

In a control systems context:

x ∈ R^n,  f: R^n → R^n
y = h(x),  y ∈ R,  h: R^n → R

y' = L_f h  along the flow of f

How does this tie into observability?

Imagine:

x' = Ax
y = Cx

and we can only see y, a scalar, and we wish to find x ∈ R^n.


y = Cx
y' = Cx' = CAx
...
y^(n-1) = CA^(n-1) x

and solve for x (n equations)

→ if [C; CA; ...; CA^(n-1)] has rank n, we have n independent equations in n
variables → OK

Using the Lie derivative

f(x) = Ax, h(x) = Cx

L_f h = ∇(Cx) · Ax = CAx, and in general L_f^k h = CA^k x

and by convention, L_f^0 h = h.
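In the linear case, the stacked gradients of h, L_f h, ..., L_f^(n-1) h are exactly the rows C, CA, ..., CA^(n-1), so the rank test can be sketched in a few lines of Python. The double-integrator plant below is an assumed toy example, not from the text:

```python
def matvec_row(row, A):
    """Row vector times matrix: returns row . A."""
    n = len(A)
    return [sum(row[k] * A[k][j] for k in range(n)) for j in range(n)]

def rank(M, tol=1e-9):
    """Rank of a small matrix by Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        if r == len(M):
            break
        piv = max(range(r, len(M)), key=lambda i: abs(M[i][c]))
        if abs(M[piv][c]) < tol:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r:
                fac = M[i][c] / M[r][c]
                M[i] = [a - fac * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def observability_matrix(A, C):
    """Stack the gradients of h, L_f h, ...: the rows C, CA, ..., CA^(n-1)."""
    rows, row = [], C[:]
    for _ in range(len(A)):
        rows.append(row)
        row = matvec_row(row, A)
    return rows

# double integrator (unforced part): x1' = x2, x2' = 0
A = [[0.0, 1.0], [0.0, 0.0]]
print(rank(observability_matrix(A, [1.0, 0.0])))  # position sensor: rank 2, observable
print(rank(observability_matrix(A, [0.0, 1.0])))  # velocity sensor only: rank 1
```

Measuring only velocity loses the constant of integration in position, which is exactly what the rank deficiency detects.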

The Lie Bracket and Controllability

Definition

Let f: R^n → R^n be a smooth vector field in R^n.
Let g: R^n → R^n be a smooth vector field in R^n.

Then the Lie bracket of f and g is a third vector field, given by:

[f, g] = (∂g/∂x) f - (∂f/∂x) g

Dimensions

f and g both look like nx1 vectors of functions of x, with nxn Jacobians ∂f/∂x and ∂g/∂x.


So [f,g] is a vector field.
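The bracket formula can be evaluated numerically with finite-difference Jacobians. A minimal Python sketch on assumed example fields (not the ones from the text) follows; the hand computation for this pair gives [f, g] = (-x2, x1):

```python
def jacobian(F, x, eps=1e-6):
    """n x n Jacobian dF/dx by central differences."""
    n = len(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += eps
        xm[j] -= eps
        Fp, Fm = F(xp), F(xm)
        for i in range(n):
            J[i][j] = (Fp[i] - Fm[i]) / (2 * eps)
    return J

def lie_bracket(f, g, x):
    """[f, g](x) = (dg/dx) f(x) - (df/dx) g(x)."""
    n = len(x)
    Dg, Df = jacobian(g, x), jacobian(f, x)
    fx, gx = f(x), g(x)
    return [sum(Dg[i][j] * fx[j] - Df[i][j] * gx[j] for j in range(n))
            for i in range(n)]

# assumed example fields: f(x) = (-x1, -2*x2), g(x) = (x2, x1)
f = lambda x: [-x[0], -2.0 * x[1]]
g = lambda x: [x[1], x[0]]

# by hand: [f, g] = (-x2, x1); at (1, 2) this is (-2, 1)
print(lie_bracket(f, g, [1.0, 2.0]))
```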

How does this tie into controllability?

Consider:

x' = g1(x)u1 + g2(x)u2

where u1 and u2 are scalar inputs and x ∈ R^3.

What directions can we steer x in if we start at some point x0?

Clearly, we can move anywhere in the span of {g1(x0), g2(x0)}.

Let's say that g1 and g2 span only two directions at x0.

Can we move in the x3 direction?

The additional directions that we are allowed to move in by infinitesimally small changes
are given by [g1, g2].


8 `Feedback Linearization

Key points
Feedback linearization = ways of transforming original system models into
equivalent models of a simpler form.
Completely different from conventional (Jacobian) linearization, because
feedback linearization is achieved by exact state transformation and feedback,
rather than by linear approximations of the dynamics.
Input-Output, Input-State
Internal dynamics, zero dynamics, linearized zero dynamics
Jacobi's identity, the theorem of Frobenius
MIMO feedback linearization is also possible.


Feedback linearization is an approach to nonlinear control design that has attracted lots of
research in recent years. The central idea is to algebraically transform nonlinear systems
dynamics into (fully or partly) linear ones, so that linear control techniques can be
applied.

This differs entirely from conventional (Jacobian) linearization, because feedback
linearization is achieved by exact state transformation and feedback, rather than by linear
approximations of the dynamics.

The basic idea of simplifying the form of a system by choosing a different state
representation is not completely unfamiliar; rather it is similar to the choice of reference
frames or coordinate systems in mechanics.

Feedback linearization = ways of transforming original system models into
equivalent models of a simpler form.

Applications: helicopters, high-performance aircraft, industrial robots, biomedical
devices, vehicle control.
Warning: there are a number of shortcomings and limitations associated with the
feedback linearization approach. These problems are very much topics of current
research.

References: Sastry, Slotine and Li, Isidori, Nijmeijer and van der Schaft

Terminology

Feedback Linearization

A catch-all term which refers to control techniques where the input is used to linearize
all or part of the system's differential equations.

Input/Output Linearization

A control technique where the output y of the dynamic system is differentiated until the
physical input u appears in the r-th derivative of y. Then u is chosen to yield a transfer
function from the synthetic input, v, to the output y, which is:

y(s)/v(s) = 1/s^r

If r, the relative degree, is less than n, the order of the system, then there will be internal
dynamics. If r = n, then I/O and I/S linearizations are the same.

Input/State Linearization

A control technique where some new output y_new = h_new(x) is chosen so that, with respect
to y_new, the relative degree of the system is n. Then the design procedure using this new
output y_new is the same as for I/O linearization.

SISO Systems

Consider a SISO nonlinear system:

x' = f(x) + g(x)u
y = h(x)

Here, u and y are scalars.

y' = (∂h/∂x) x' = L_f h + (L_g h)u = L_f h  if L_g h = 0

If L_g h = 0, we keep taking derivatives of y until the input u appears. If u never
appears, then u does not affect the output! (Big difficulties ahead.)

If u has not yet appeared, we keep going.

We end up with the following set of equalities:


y' = L_f h                           with L_g h = 0
y'' = L_f^2 h                        with L_g L_f h = 0
...
y^(r) = L_f^r h + (L_g L_f^(r-1) h)u   with L_g L_f^(r-1) h ≠ 0

The letter r designates the relative degree of y = h(x) iff:

L_g L_f^(k-1) h = 0 for k < r, and L_g L_f^(r-1) h ≠ 0

That is, r is the smallest integer for which the coefficient of u is non-zero over the space
where we want to control the system.

Let's set:

u = (v - L_f^r h) / (L_g L_f^(r-1) h)

Then y^(r) = v.

v(x) is called the synthetic input or synthetic control.

We have an r-integrator linear system, of the form y^(r) = v.

We can now design a controller for this system, using any linear controller design
method. The controller that is implemented is obtained by substituting the linear design
for v back into the expression for u above.

Any linear method can be used to design v. For example, pole placement:
v = -k1 y - k2 y' - ... - kr y^(r-1).
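As a concrete sketch of the whole procedure, the Python fragment below feedback-linearizes an assumed toy plant (not one of the book's examples): x1' = x2, x2' = -x1^3 + u, y = x1, for which r = 2 = n (so there are no internal dynamics), and then applies a pole-placement design for v:

```python
# Assumed example plant (not from the text): x1' = x2, x2' = -x1^3 + u, y = x1.
# y must be differentiated twice before u appears, so r = 2 (= n here).
dt = 1e-3
x1, x2 = 1.0, 0.0
for _ in range(int(10.0 / dt)):        # simulate 10 seconds with Euler steps
    v = -2.0 * x2 - x1                 # linear design for y'' = v (poles at s = -1, -1)
    u = x1 ** 3 + v                    # feedback linearization: cancel the -x1^3 term
    x1, x2 = x1 + dt * x2, x2 + dt * (-(x1 ** 3) + u)
print(x1, x2)                          # both close to 0 after 10 s
```

Note that the cancellation u = x1^3 + v relies on a perfect model of the nonlinearity, which is exactly the robustness concern raised below.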


Problems with this approach:

1. Requires a perfect model, with perfect derivatives (one can anticipate robustness
problems).
2. If the goal is y → y_des, we only control the output, not the full state.
If n = 20 and r = 2, there are 18 states for which we don't know what is
happening. That is, if r < n, we have internal dynamics.

Note: There is an ad-hoc approach to the robustness problem, by setting:

Here the first term in the expression is the standard feedback linearization term, and the
second term is tuned online for robustness.

Internal Dynamics

Assume r < n → there are some internal dynamics

where

So we can write:

where A and B are in controllable canonical form, that is:


where and

We define:

where z is rx1 and η is (n-r)x1.

The normal forms theorem tells us that there exists an η such that:

Note that the internal dynamics are not a function of u.

So we have:

The " equation represents internal dynamics; these are not observable because z does
not depend on " at all ! internal, and hard to analyze!

The full system is difficult to analyze. Oftentimes, to
make our lives easier, we analyze the so-called zero dynamics:

and in most cases we even look at the linearized zero dynamics.

and we look at the eigenvalues of J.

If these are well behaved, perhaps the nonlinear dynamics might be well-behaved. If
these are not well behaved, the control may not be acceptable!
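This check can be sketched numerically: linearize the zero dynamics at the origin and inspect the eigenvalues of the Jacobian J. The internal dynamics below are an assumed toy example, not from the text:

```python
import math

def jacobian(w, eta, eps=1e-6):
    """Jacobian of the zero dynamics eta' = w(eta) by central differences."""
    n = len(eta)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        ep, em = list(eta), list(eta)
        ep[j] += eps
        em[j] -= eps
        wp, wm = w(ep), w(em)
        for i in range(n):
            J[i][j] = (wp[i] - wm[i]) / (2 * eps)
    return J

def eig2(J):
    """Eigenvalues of a 2x2 matrix via trace and determinant."""
    tr = J[0][0] + J[1][1]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    disc = tr * tr - 4.0 * det
    if disc >= 0.0:
        r = math.sqrt(disc)
        return [(tr - r) / 2.0, (tr + r) / 2.0]
    return [complex(tr / 2.0, math.sqrt(-disc) / 2.0),
            complex(tr / 2.0, -math.sqrt(-disc) / 2.0)]

# assumed zero dynamics: eta1' = eta2, eta2' = -2*eta1 - 3*eta2 + eta1^2
w = lambda e: [e[1], -2.0 * e[0] - 3.0 * e[1] + e[0] ** 2]
print(eig2(jacobian(w, [0.0, 0.0])))  # eigenvalues of [[0,1],[-2,-3]]: -2 and -1
```

Both eigenvalues sit in the left-half plane, so this (assumed) system would be minimum phase at the origin; a right-half-plane eigenvalue would flag trouble, as the text warns.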

For linear systems:

We have:

The eigenvalues of the zero dynamics are the zeroes of H(s). Therefore if the zeroes of
H(s) are non-minimum phase (in the right-half plane) then the zero dynamics are
unstable.
#

By analogy, for nonlinear systems: if the zero dynamics are unstable, then the system
is called a non-minimum phase nonlinear system.

Input/Output Linearization

o Procedure

a) Differentiate y until u appears in one of the equations for the derivatives of y

after r steps, u appears

b) Choose u to give y^(r) = v, where v is the synthetic input

c) Then the system has the form:

Design a linear control law for this r-integrator linear system.

d) Check internal dynamics.

o Example

Oral exam question

Design an I/O linearizing controller so that y → 0 for the plant:

a) u appears → r = 1

b) Choose u so that

!

In our case, and .

c) Choose a control law for the r-integrator system, for example proportional control

Goal: to send y to zero exponentially

→ since y_des = 0

d) Check internal dynamics:

Closed loop system:

If x1 → 0 as desired, x2 is governed by
→ unstable internal dynamics!

There are two possible approaches when faced with this problem:

- Try and redefine the output: y = h(x1, x2)
- Try to linearize the entire system/space → Input/State Linearization

#


Input/State Linearization (SISO Systems)

Question: does there exist a transformation T(x) such that the transformed system is
linear?

Define the transformed states:

I want to find z(x) such that z' = Az + Bv, with:
- v = v(x, u) the synthetic control
- the system in Brunovsky (controllable) form

A is nxn and B is nx1.

We want a 1 to 1 correspondence between z and x such that:

Question: does there exist an output y = z1(x) such that y has relative degree n?

with

Let

Then: . And the form I need is:

" does there exist a scalar z
1
(x) such that:

for k = 1,,n-2
And ?

z ≜ [ z1 ]   [ L_f^0 (z1)     ]
    [ z2 ] = [ L_f^1 (z1)     ]
    [ ...]   [ ...            ]
    [ zn ]   [ L_f^(n-1) (z1) ]
" is there a test?

so the test should depend on f and g.

Jacobi's identity

Carl Gustav Jacob Jacobi

Born: 10 Dec 1804 in Potsdam, Prussia (now Germany)
Died: 18 Feb 1851 in Berlin, Germany

Famous for his work on:
- Orbits and gravitation
- General relativity
- Matrices and determinants

Jacobi's Identity

A convenient relationship (see Slotine and Li) is called Jacobi's identity.

Remember:

L_f^0 h = h,   L_f^i h = L_f (L_f^(i-1) h) = ∇(L_f^(i-1) h) · f

ad_f^0 g = g,   ad_f g = [f, g] = ∇g · f - ∇f · g,   ad_f^i g = [f, ad_f^(i-1) g]

This identity allows us to keep the conditions first-order in z1.

→ Trudge through messy algebra:

- For k = 0: (first order)

- For k = 1:

- For k = 2 and up:

Things get messy, but by repeated use of Jacobi's identity, we have:

(*)

The two conditions above are equivalent. Evaluating the second half:

This leads to conditions of the type:

The Theorem of Frobenius

Ferdinand Georg Frobenius:

Born: 26 Oct 1849 in Berlin-Charlottenburg, Prussia (now Germany)
Died: 3 Aug 1917 in Berlin, Germany

Famous for his work on:
- Group theory
- Fundamental theorem of algebra
- Matrices and determinants

Theorem of Frobenius:

A solution to the set of partial differential equations for z1(x) exists
if and only if:

a) [g, ad_f g, ..., ad_f^(n-1) g] has rank n

b) [g, ad_f g, ..., ad_f^(n-2) g] is involutive
#

Definition of involutive:

A linearly independent set of vector fields (f1, ..., fm) is involutive if:

[f_i, f_j] = Σ_k α_ijk(x) f_k   for all i, j

i.e., when you take Lie brackets, you don't generate new vectors.

Note: this is VERY hard to do.
Reference: George Myers at NASA Ames, in the context of helicopter control.

Example: (same as above)

Question: does there exist a scalar z1(x1, x2) such that the relative degree is 2?

This will be true if:
a) (g, [f, g]) has rank 2
b) g is involutive (any Lie bracket of g with itself is zero → OK)
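These two conditions can be checked numerically for a given system. Below is a hedged Python sketch on an assumed toy system, f(x) = (x2, 0) and g = (0, 1) (not the system from the text): condition (a) asks whether {g, [f, g]} spans R^2, and condition (b) is automatic for a single vector field:

```python
def jacobian(F, x, eps=1e-6):
    """n x n Jacobian dF/dx by central differences."""
    n = len(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += eps
        xm[j] -= eps
        Fp, Fm = F(xp), F(xm)
        for i in range(n):
            J[i][j] = (Fp[i] - Fm[i]) / (2 * eps)
    return J

def lie_bracket(f, g, x):
    """[f, g](x) = (dg/dx) f - (df/dx) g."""
    n = len(x)
    Dg, Df = jacobian(g, x), jacobian(f, x)
    fx, gx = f(x), g(x)
    return [sum(Dg[i][j] * fx[j] - Df[i][j] * gx[j] for j in range(n))
            for i in range(n)]

# assumed toy system: f(x) = (x2, 0), g(x) = (0, 1)
f = lambda x: [x[1], 0.0]
g = lambda x: [0.0, 1.0]

x0 = [0.3, -0.7]
adfg = lie_bracket(f, g, x0)                   # here [f, g] = (-1, 0)
det = g(x0)[0] * adfg[1] - g(x0)[1] * adfg[0]  # det of [g | ad_f g]
print(det)  # nonzero: condition (a) holds; a single field g is involutive (b)
```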

Setting stuff up to look at (a):

Note:

looks dangerous

Question: how do we find z1?

We get a list of conditions:

- (simplest)

- (always true for constant, independent vectors). In our case, OK if x1 ≠ 3.

So let's trudge through and check:

(good that u doesn't appear, or we'd only have r = 1!)

(u appears! (good))

Define ,

Hope the problem is far away from

Let


→ z1 → z1d

Question: How do we pick z1d?

We want: for


Feedback Linearization for MIMO Nonlinear Systems

Consider a square system (where the number of inputs is equal to the number of
outputs = m):

Let r_k be defined as the relative degree of each output y_k, i.e., the number of times
y_k must be differentiated before some input u_i appears.

Let J(x) be an mxm matrix such that:

J(x) is called the invertibility or decoupling matrix.

We will assume that J(x) is non-singular.

Let:

where y_r is an mx1 vector

Then we have:

where v is the synthetic input (v is mx1).

We obtain a decoupled set of equations:

so

Design v any way you want to using linear techniques

Problems:

- Need confidence in the model
- Internal dynamics

Internal Dynamics

The linear subspace has dimension (total relative degree) r_T = r_1 + ... + r_m for the
whole system

→ we have internal dynamics of order n - r_T.


The superscript notation denotes which output we are considering. We have:

where z_T is r_T x1 and η_T is (n-r_T)x1.

The representation for x may not be unique!

Can we get a " who isnt directly a function of the controls (like for the SISO case)? NO!

and

Internal dynamics → what is u?
→ design v, then solve for u using the decoupling relation above.

The zero dynamics are defined by z = 0.

→ the output is held identically at zero by the appropriate choice of control (at all times).

Thus the zero dynamics are given by:


Dynamic Extension - Example

References: Slotine and Li
Hauser, PhD Dissertation, UCB, 1989 from which this example is taken

Basically, θ is the yaw angle of the vehicle, and x1 and x2 are the Cartesian locations of
the wheels. u1 is the velocity of the front wheels, in the direction that they are pointing,
and u2 is the steering velocity.

We define our state vector to be:

Our dynamics are:

x1' = (cos θ) u1
x2' = (sin θ) u1
θ'  = u2

We determined in a previous lecture that the system is controllable (f = 0).

y1 = x1 and y2 = x2 are defined as outputs.

[ y1' ]   [ cos θ   0 ] [ u1 ]
[ y2' ] = [ sin θ   0 ] [ u2 ]

J(x) = [ cos θ   0 ]
       [ sin θ   0 ]

is clearly singular (has rank 1).

Let u1' = u3, where u3 is the acceleration of the axle

→ the state has been extended.

x1' = (cos θ) x3
x2' = (sin θ) x3
x3' = u1' = u3
θ'  = u2

where x3 = u1 in the extended state space.

Take and .

and the new J(x) matrix is non-singular for u1 ≠ 0 (as long as
the axle is moving).
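The singularity of the original decoupling matrix, and its disappearance after the dynamic extension, can be verified directly. In the sketch below, the entries of the extended matrix come from differentiating the outputs twice (y1'' = cos(θ)u3 - sin(θ)x3 u2, y2'' = sin(θ)u3 + cos(θ)x3 u2), a computation consistent with the extended dynamics above but worked out here as an assumption:

```python
import math

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def J_original(theta):
    """Decoupling matrix of the unextended system: always rank 1."""
    return [[math.cos(theta), 0.0], [math.sin(theta), 0.0]]

def J_extended(theta, x3):
    """After adding the integrator u1' = u3 (x3 = u1), differentiate each y twice."""
    return [[math.cos(theta), -x3 * math.sin(theta)],
            [math.sin(theta),  x3 * math.cos(theta)]]

print(det2(J_original(0.7)))        # 0: singular for every heading angle
print(det2(J_extended(0.7, 2.0)))   # equals x3 = 2.0: non-singular while the axle moves
```

The determinant of the extended matrix is exactly x3 (= u1), which is why the text requires u1 ≠ 0.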

How does one go about designing a controller for this example?


Given y_1d(t), y_2d(t):

Let:

!

To obtain the control, u:

and → we have a dynamic feedback controller (the controller has dynamics, not
just gains, in it).
#


Pictures for SISO cases:

o Picture of I/O system (r = 1)

o In general terms

n-th order system
r = relative degree < n

a) Differentiate:

and if r>1

!

and =0 if r<2

!

where

b) Choose u in terms of v

Let

For now, to simplify the pictures, let

c) Choose control law

d) Check internal dynamics
d) Check internal dynamics
Feedback Linearization and State Transformation

We have an n-th order system where y is the natural output, with relative degree r.

Previously, we skimmed over the state transformation interpretation of feedback
linearization.

Why do we transform the states?

The differential equations governing the new states have some convenient properties.

Example:

Consider a linear system

The points in 2-space are usually expressed in the natural basis: e1 = (1, 0), e2 = (0, 1).
So when we write x, we mean a point in 2-space that is reached from the origin by doing:

where (x1, x2) are the coordinates of a point in the natural basis.

To diagonalize the system, we do a change of coordinates, so we express points like:

[ 1  0 ] x = [ t1  t2 ] x'
[ 0  1 ]

where t1 and t2 are the eigenvectors of A and x' represents the coordinates in the new
basis.

So we get a nice equation in the new coordinates:

where Λ is diagonal.

For I/O linearization, we do the same kind of thing:

We seek some nonlinear transformation so that the new state, x', is governed by
differential equations such that the first r-1 states are a string of integrators (derivatives of
each other), and the differential equation for the r-th state has the form:

(x'_r)' = nonlinear function(x') + u

and n-r internal dynamics states will be decoupled from u (this is a matter of
convenience).

So we have: x' = T(x), where T is nonlinear.

Let's enforce the above properties:

We know how to choose T1(x) through Tr(x). They are just y, y', y'', etc.

How do we choose Tr+1(x) through Tn(x)?

These transformations need to be chosen so that:

1. The transformation T(x) is a diffeomorphism:

o One to one transformation

o T(x) is continuous
o T^(-1)(x) is continuous
o Also, ∂T/∂x and ∂T^(-1)/∂x must exist and be continuous

2. The " states should have no direct dependence on the input u.

Example (from HW)

We know that T1(x1, x2) = y = x2.

What about T2(x1, x2)?

Choose T2(x1, x2) to satisfy the above conditions. Let's start with condition 2: u does not
appear in the equation for the derivative of T2.

We are only concerned with the second term. To eliminate the dependence on u, we must
have:

∂T2/∂x2 = 0 (T2 should not depend on x2).

An obvious answer is: T2(x1, x2) = x1. Then, we would have:


Is this a diffeomorphism? Obviously yes.

Note that ... works also.

What about ...? (No: it violates the one-to-one transformation part of the conditions
for a proper diffeomorphism.)


9 `Sliding Mode Control of
Nonlinear Systems

Historically:
- Other terms have been used, most predominantly Variable Structure Systems
(VSS)
- Began in the 1960s in the USSR (Filippov, Utkin)
- Used in Japan in the 1970s for power systems
- Adopted in the US in the late 1970s and the 1980s, principally for robotics.
Brought over by Utkin, a professor at Ohio State.

Key points

Applicable to nonlinear dynamic systems.
Directly considers robustness issues as part of the design process.
Reference: Slotine and Li chapter 7
Attributes:
- Applicable to nonlinear dynamic systems
- Directly considers robustness issues as part of the design process.

Second-Order Example

Consider the system governed by the following equation:

where f is a nonlinear function, d represents a time-varying disturbance, and u is the
control input.

We can separate f and d into known parts (denoted by an m subscript, for model) and
unknown parts (denoted by Δ).

We write the state of the system as
Our goal is to design a controller such that x tracks x_des perfectly (asymptotically), with
the error going to zero exponentially. (Note: this is an ambitious goal.)

Define the error as:

Then we can define a sliding surface, S, as:

S = e' + λe

If S = 0, then the error goes to zero exponentially with a time constant 1/λ. This is
consistent with the controller goals. (Also, if we can make S = 0 in finite time t1, then we
can write out the equation for the error as a function of time.)

Computing the first derivative of S:

The u term appears in the equation for the first derivative of S. We say that S has
relative degree 1. (This will always be a desirable property for S).

We select u to cancel out some of the known terms:

Here the last term will deal with the uncertainties.

This results in:

Question: How can I make S#0 in finite time?

I need bounds on the uncertainties, Δf and Δd. For example:

The next thing to do is to select a Lyapunov function candidate. For example, we may
select V = S^2/2, which leads to V' = S·S'. That is, to make V' negative definite, we
need to pick u so that S·S' < 0.

As before, let

Then:

We now consider the worst-case uncertainty:

and we need this quantity to be < 0,

where η is a tuning parameter.

This choice of control input guarantees that:

The full expression for the control input, u, is:

Suppose we encounter the worst-case scenario. Then:

Let S(0) > 0. Then,

In general,

So,

This last equation is referred to as the sliding condition.

It indicates that S(t) will reach zero in a finite time t1 ≤ |S(0)|/η.

After t
1
, we enter the sliding mode, and the system chatters with zero amplitude and
infinite frequency on the average.

When in the sliding mode,


Notes:
(a) We get perfect disturbance rejection and robustness to model error
(b) This is a Lyapunov-based controller design. Equilibrium point is guaranteed to be
stable.
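The design can be exercised numerically. The plant below is an assumed example, not from the text: x'' = f + d + u with f = -x + 0.5 sin(t), where the 0.5 sin(t) part is unmodeled (Δf), d = 0.3 cos(2t) is a disturbance, and the model is f_m = -x, so |Δf + d| ≤ 0.8 and a switching gain K = 1.0 satisfies the sliding condition:

```python
import math

# assumed plant (not from the text): x'' = f + d + u
# f = -x + 0.5*sin(t)  (the 0.5*sin(t) term is unmodeled: Delta_f)
# d = 0.3*cos(2*t); model f_m = -x, so |Delta_f + d| <= 0.8; pick K = 1.0
lam, K, dt = 1.0, 1.0, 1e-3
x, xdot, t = 1.0, 0.0, 0.0
for _ in range(int(15.0 / dt)):
    S = xdot + lam * x                       # sliding surface (x_des = 0)
    sgn = 1.0 if S > 0 else (-1.0 if S < 0 else 0.0)
    u = x - lam * xdot - K * sgn             # cancel f_m, add the switching term
    xddot = (-x + 0.5 * math.sin(t)) + 0.3 * math.cos(2.0 * t) + u
    x, xdot, t = x + dt * xdot, xdot + dt * xddot, t + dt
print(abs(x))  # small: error driven to a neighbourhood of zero despite the mismatch
```

With this choice, S' = Δf + d - K sgn(S), so S·S' < 0 whenever S ≠ 0, exactly as the Lyapunov argument above requires; the Euler discretization leaves a small chattering band around S = 0 instead of the ideal zero-amplitude sliding.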

Basic idea of sliding mode:

First-order systems are much easier to control than higher-order ones.

Simple example (single integrator, hard to screw up!)

Goal is: y → 0

For example this is applicable to velocity control via force.

Control law:

We can use any gain we please for k without instability.
Specifically, we can use u = -k sgn(x), which looks like infinite gain close to x = 0.

Root locus: poles of the following system as a function of the gain k:


In our case:

Note that at infinity gain (sgn(x) case), the closed loop system is stable.

Double integrator example

This corresponds to, for example, position control via force.

Goal is: y → 0

Control law: we must be more careful while choosing u:

For example, say we pick . The root locus will look like:


has extremely oscillatory poles

→ Simplistic control law does not work.

Triple integrator example

Goal is: y → 0

Again, let's try .

Relation to sliding mode control

Moral of the story:

First-order systems are trivial to control: if the output is too high, push down; if the
output is too low, push up.

Idea of sliding mode control:

Reduce every system to a first order system that we are trying to force to zero. Then we
can use the intuitive control law described above, and furthermore, we can use ANY gain
we want, including infinity (perfect tracking, perfect disturbance rejection).

How do we do the reduction?

We define a new output, s, which looks like a first-order system.

What properties should s have?

(a) The relative degree of the output s should be 1, that is, u should appear
explicitly in the expression for s'.

(b) s → 0 is the control goal, that is, s needs to be designed so that good things happen
to the physical output when s → 0.

Sliding mode control for the single integrator

Goal is: y → 0

Trivial: s = x and check the two above-mentioned conditions:

(i) Relative degree: OK
(ii) Is s → 0 a good thing? Yes!

Sliding mode control for the double integrator

Goal is: y → 0


Finding s is a little harder. Let's use the conditions!

(i) s(x1, x2) → s must have x2 in it
(ii) s → 0 must be a good thing

Is s = x2 alone a good thing? NO!

Let's try s = x2 + λx1, where λ > 0. This is good!

s = 0 → x1' = -λx1 → x1 → 0 exponentially (λ > 0).

So we use s = x2 + λx1, with λ > 0.

Look at the first order control problem:

Choose:

The chosen term cancels out the corresponding term in the expression for s', and v is
similar to a synthetic input.

We are now faced once again with a familiar first-order control problem.
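The reach-then-slide behaviour can be seen in a short simulation. The following Python sketch (an assumed worked example, consistent with but not taken from the text) uses s = x2 + λx1 and u = -λx2 - η sgn(s), so that s' = -η sgn(s): the trajectory hits the line s = 0 in |s(0)|/η seconds and then decays along it:

```python
# double integrator x1' = x2, x2' = u, surface s = x2 + lam*x1
# u = -lam*x2 - eta*sgn(s) gives s' = -eta*sgn(s): finite-time reaching,
# then exponential decay of x1 along the line x2 = -lam*x1.
lam, eta, dt = 1.0, 1.0, 1e-3
x1, x2, t = 2.0, 0.0, 0.0     # s(0) = 2 -> reaching time about 2 s
reach_time = None
for _ in range(int(10.0 / dt)):
    s = x2 + lam * x1
    if reach_time is None and abs(s) < 1e-2:
        reach_time = t        # first time the trajectory hits the surface
    u = -lam * x2 - eta * (1.0 if s > 0 else (-1.0 if s < 0 else 0.0))
    x1, x2, t = x1 + dt * x2, x2 + dt * u, t + dt
print(reach_time, abs(x1))    # reach_time near 2 s; x1 close to 0 at t = 10 s
```

Plotted in the (x1, x2) plane, the trajectory first moves toward the surface at a constant rate in s, then "slides" down the line x2 = -λx1 to the origin, which is the picture described below.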

We let: %

Picture (for the case of λ = 1):

s = 0 is a line of slope -1 in the (x1, x2) plane (x2 = -x1)

General trick: if the system is of order n, in general the sliding surface has degree n-1.

In our case, so we go to the line at one level set per step


Sliding mode control for the triple integrator

Goal is: y → 0

Once again, we develop an expression for s using the two conditions mentioned above.

(i) s(x1, x2, x3) → s must have x3 in it
(ii) s → 0 must be a good thing

s(x1, x2, x3) = 0

We need to involve x1 and x2 or we won't go anywhere.
A good thing to do is to design s so that s = 0 → x1 → 0 exponentially.

For example, lets consider:

Shorthand notation:

S ≜ (s + λ)^2 x1

where s is a Laplace differentiating operator.

This is identical to: with n = 2.

Dynamics of the first-order system in s:

is the first order system to control

For example, we use:

In this expression, the CE(x) term cancels the x terms that represent system dynamics.

For example in our case, one can pick:

These results can also be obtained using Lyapunov functions and arguments:

We want to include a sgn(S) term to make S·S' negative →

For More General Systems

Appendix A: Mathematical Background