
# Control Systems/State-Space Stability

From the Wikibook Control Systems and Control Engineering.

## Contents

1. State-Space Stability
   1. Stability Definitions
   2. Marginal Stability
2. Eigenvalues and Poles
3. Impulse Response Matrix
4. Positive Definiteness
5. Lyapunov Stability
   1. Lyapunov's Equation

## State-Space Stability

If a system is represented in the state-space domain, it doesn't make sense to convert that system to a transfer function representation (or even a transfer matrix representation) in an attempt to use any of the previous stability methods. Luckily, there are other analysis methods that can be used with the state-space representation to determine if a system is stable or not. First, let us introduce the notion of instability:

**Unstable**: A system is said to be unstable if the system response approaches infinity as time approaches infinity. If our system is G(t), then, we can say a system is unstable if:

lim_{t→∞} ||G(t)|| = ∞

Also, a key concept when we are talking about stability of systems is the concept of an equilibrium point:

**Equilibrium Point**: Given a system f such that:

x'(t) = f(x(t))

A particular state x_e is called an equilibrium point if

f(x_e) = 0

for all time t in the interval [t_0, ∞), where t_0 is the starting time of the system.

An equilibrium point is also known as a "stationary point", a "critical point", a "singular point", or a "rest state" in other books or literature.
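As a concrete sketch of the definition (the pendulum dynamics and the initial guess below are assumptions chosen for illustration, not part of the text), an equilibrium point of a nonlinear system x'(t) = f(x(t)) can be located numerically by solving f(x_e) = 0:

```python
import numpy as np
from scipy.optimize import fsolve

# Example nonlinear system: a damped pendulum.
# x[0] = angle, x[1] = angular velocity (assumed example, not from the text)
def f(x):
    return np.array([x[1], -np.sin(x[0]) - 0.5 * x[1]])

# An equilibrium point x_e satisfies f(x_e) = 0.
# Searching near the inverted position finds the equilibrium at [pi, 0].
x_e = fsolve(f, np.array([3.0, 0.0]))
print(x_e)
```

The same call with an initial guess near the origin would find the rest state at [0, 0], the other equilibrium of this system.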

The definitions below typically require that the equilibrium point be zero. If we have an equilibrium point x_e = a, then we can use the following change of variables to make the equilibrium point zero:

x̄ = x − a

We will also see below that a system's stability is defined in terms of an equilibrium point. Related to the concept of an equilibrium point is the notion of a zero point:

**Zero State**: A state x_z is a zero state if x_z = 0. A zero state may or may not be an equilibrium point.

### Stability Definitions

Here, we will discuss some basic theorems about stability, and define a few key concepts.

The equilibrium x = 0 of the system is stable if and only if the solutions of the zero-input state equation are bounded. Equivalently, x = 0 is a stable equilibrium if and only if for every initial time t_0, there exists an associated finite constant k(t_0) such that:

sup_{t ≥ t_0} ||φ(t, t_0)|| = k(t_0) < ∞

where φ(t, t_0) is the state-transition matrix.

Where sup is the supremum, or least upper bound, of the expression. This value must never exceed the arbitrary finite constant k(t_0) (and therefore it may not be infinite at any point).

**Uniform Stability**: The system is defined to be uniformly stable if it is stable for all initial values of t_0:

sup_{t_0 ≥ 0} k(t_0) = k_0 < ∞

Uniform stability is a more general, and more powerful, form of stability than the one previously provided.

**Asymptotic Stability**: A system is defined to be asymptotically stable if:

lim_{t→∞} ||φ(t, t_0)|| = 0

A time-invariant system is asymptotically stable if all the eigenvalues of the system matrix A have negative real parts. If a system is asymptotically stable, it is also BIBO stable. However, the converse is not true: a system that is BIBO stable might not be asymptotically stable.

**Uniform Asymptotic Stability**: A system is defined to be uniformly asymptotically stable if the system is asymptotically stable for all values of t_0.

**Exponential Stability**: A system is defined to be exponentially stable if the system response decays exponentially towards zero as time approaches infinity.

For linear systems, uniform asymptotic stability is the same as exponential stability. This is not the case with non-linear systems.

### Marginal Stability

Here we will discuss some rules concerning systems that are marginally stable. Because we are discussing eigenvalues and eigenvectors, these theorems only apply to time-invariant systems.

1. A time-invariant system is marginally stable if and only if all the eigenvalues of the system matrix A have zero or negative real parts, and those with zero real parts are simple roots of the minimal polynomial of A.
2. The equilibrium x = 0 of the state equation is uniformly stable if all eigenvalues of A have non-positive real parts, and there is a complete set of distinct eigenvectors associated with the eigenvalues with zero real parts.
3. The equilibrium x = 0 of the state equation is exponentially stable if and only if all eigenvalues of the system matrix A have negative real parts.
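The eigenvalue conditions above are easy to check numerically. The sketch below classifies a time-invariant system from the real parts of the eigenvalues of A; the example matrices are assumptions, and the minimal-polynomial condition in rule 1 is only flagged, not verified:

```python
import numpy as np

def classify(A, tol=1e-9):
    """Rough stability classification from the eigenvalues of A.

    Eigenvalues exactly on the imaginary axis additionally require the
    minimal-polynomial check from rule 1, which this sketch does not perform.
    """
    re = np.linalg.eigvals(A).real
    if np.any(re > tol):
        return "unstable"
    if np.all(re < -tol):
        return "asymptotically stable"
    return "marginally stable (verify simple roots of the minimal polynomial)"

A_stable = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1, -2
A_marginal = np.array([[0.0, 1.0], [-1.0, 0.0]])  # eigenvalues +j, -j
print(classify(A_stable))
print(classify(A_marginal))
```

For A_stable both eigenvalues lie strictly in the left-half plane, so it is reported asymptotically stable; A_marginal has a pair of purely imaginary eigenvalues, so it falls into the marginal case.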

## Eigenvalues and Poles

An LTI system is stable (asymptotically stable, see above) if all the eigenvalues of A have negative real parts. Consider the following state equation:

x'(t) = Ax(t) + Bu(t)

We can take the Laplace Transform of both sides of this equation, using initial conditions of x_0 = 0:

sX(s) = AX(s) + BU(s)

Subtract AX(s) from both sides:

sX(s) − AX(s) = BU(s)

(sI − A)X(s) = BU(s)

Assuming (sI - A) is nonsingular, we can multiply both sides by the inverse:

X(s) = (sI − A)⁻¹BU(s)

Now, if we remember our formula for finding the matrix inverse from the adjoint matrix (for any nonsingular matrix Q):

Q⁻¹ = adj(Q) / |Q|

We can use that definition here:

X(s) = adj(sI − A)BU(s) / |sI − A|

Let's look at the denominator (which we will now call D(s)) more closely. To be stable, the following condition must be true:

D(s) = |sI − A| = 0

And if we substitute λ for s, we see that this is actually the characteristic equation of matrix A! This means that the values for s that satisfy the equation (the poles of our transfer function) are precisely the eigenvalues of matrix A. In the S domain, it is required that all the poles of the system be located in the left-half plane, and therefore all the eigenvalues of A must have negative real parts.
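This correspondence can be verified numerically: converting a state-space model to a transfer function and factoring the denominator recovers exactly the eigenvalues of A. The system below is an assumed second-order example:

```python
import numpy as np
from scipy.signal import ss2tf

# Assumed example system for illustration
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# den is the characteristic polynomial |sI - A| = s^2 + 3s + 2
num, den = ss2tf(A, B, C, D)

poles = np.sort(np.roots(den))          # poles of the transfer function
eigs = np.sort(np.linalg.eigvals(A))    # eigenvalues of A
print(poles)  # [-2. -1.]
print(eigs)   # [-2. -1.]
```

Both computations yield −1 and −2: the poles of the transfer function are precisely the eigenvalues of the system matrix.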

## Impulse Response Matrix

We can define the impulse response matrix, G(t, τ), in order to define further tests for stability:

[Impulse Response Matrix]

G(t, τ) = C(t)φ(t, τ)B(τ) for t ≥ τ, and G(t, τ) = 0 for t < τ

The system is uniformly stable if and only if there exists a finite positive constant L such that for all time t and all initial times t_0 with t ≥ t_0, the following integral is satisfied:

∫_{t_0}^{t} ||G(t, τ)|| dτ ≤ L

In other words, the above integral must have a finite value, or the system is not uniformly stable.

In the time-invariant case, the impulse response matrix reduces to:

G(t) = Ce^{At}B for t ≥ 0, and G(t) = 0 for t < 0

In a time-invariant system, we can use the impulse response matrix to determine if the system is uniformly BIBO stable by taking a similar integral:

∫_{0}^{∞} ||G(t)|| dt ≤ L

Where L is a finite constant.
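A numerical sketch of this test: approximate the integral of ||G(t)|| = ||Ce^{At}B|| over a long horizon and check that it stays bounded. The diagonal system below is an assumption chosen so that ||G(t)|| = e^{-t} and the exact value of the integral is 1:

```python
import numpy as np
from scipy.linalg import expm

# Assumed stable time-invariant system, with C = B = I so G(t) = expm(A t)
A = np.diag([-1.0, -2.0])
B = np.eye(2)
C = np.eye(2)

ts = np.linspace(0.0, 30.0, 3001)  # dt = 0.01; e^{-30} tail is negligible
norms = np.array([np.linalg.norm(C @ expm(A * t) @ B, 2) for t in ts])

# Trapezoidal approximation of the integral of ||G(t)|| dt
dt = ts[1] - ts[0]
integral = float(np.sum((norms[:-1] + norms[1:]) * dt / 2.0))
print(integral)  # approx 1.0: finite, so the system is uniformly BIBO stable
```

For an unstable A the norms would grow without bound and no finite L could satisfy the inequality.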

## Positive Definiteness

These terms are important, and will be used in further discussions on this topic.

- f(x) is positive definite if f(x) > 0 for all x.
- f(x) is positive semi-definite if f(x) ≥ 0 for all x, and f(x) = 0 only if x = 0.
- f(x) is negative definite if f(x) < 0 for all x.
- f(x) is negative semi-definite if f(x) ≤ 0 for all x, and f(x) = 0 only if x = 0.

A matrix X is positive definite if all its principal minors are positive. Also, a matrix X is positive definite if all its eigenvalues have positive real parts. These two methods may be used interchangeably.
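For a symmetric matrix these tests are simple to carry out numerically; the success of a Cholesky factorization is an equivalent check that avoids computing eigenvalues explicitly. The example matrices are assumptions:

```python
import numpy as np

def is_positive_definite(X):
    """A symmetric matrix X is positive definite iff all its eigenvalues are
    positive, or equivalently iff a Cholesky factorization X = L L^T exists."""
    try:
        np.linalg.cholesky(X)
        return True
    except np.linalg.LinAlgError:
        return False

X1 = np.array([[2.0, 1.0], [1.0, 2.0]])  # eigenvalues 1 and 3: positive definite
X2 = np.array([[1.0, 2.0], [2.0, 1.0]])  # eigenvalues 3 and -1: not positive definite
print(is_positive_definite(X1), is_positive_definite(X2))  # True False
```

The second matrix fails because its second principal minor, the determinant 1·1 − 2·2 = −3, is negative, consistent with the principal-minor test above.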

Positive definiteness is a very important concept; so much so that the Lyapunov stability test depends on it. The other categorizations are not as important, but are included here for completeness.

## Lyapunov Stability

### Lyapunov's Equation

For linear systems, we can use the Lyapunov Equation, below, to determine if a system is stable. We will state the Lyapunov Equation first, and then state the Lyapunov Stability Theorem.

[Lyapunov Equation]

MA + AᵀM = −N

Where A is the system matrix, and M and N are p × p square matrices.

**Lyapunov Stability Theorem**: An LTI system x' = Ax is stable if there exists a matrix M that satisfies the Lyapunov Equation where N is an arbitrary positive definite matrix, and M is a unique positive definite matrix.

Notice that for the Lyapunov Equation to be satisfied, the matrices must be compatible sizes. In fact, matrices A, M, and N must all be square matrices of equal size. Alternatively, we can write:

**Lyapunov Stability Theorem (alternate)**: If all the eigenvalues of the system matrix A have negative real parts, then the Lyapunov Equation has a unique solution M for every positive definite matrix N, and the solution can be calculated by:

M = ∫_{0}^{∞} e^{Aᵀt} N e^{At} dt

If the matrix M can be calculated in this manner, the system is asymptotically stable.
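SciPy can solve the Lyapunov Equation directly. Note that `scipy.linalg.solve_continuous_lyapunov(a, q)` solves a·x + x·aᵀ = q, so passing a = Aᵀ and q = −N returns the M of the equation above. The example A and N are assumptions:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # eigenvalues -1, -2: asymptotically stable
N = np.eye(2)                             # arbitrary positive definite N

# solve_continuous_lyapunov(a, q) solves a x + x a^T = q, so with
# a = A^T and q = -N it solves  A^T M + M A = -N,  i.e.  M A + A^T M = -N.
M = solve_continuous_lyapunov(A.T, -N)

print(M)                       # [[1.25 0.25]
                               #  [0.25 0.25]]
print(np.linalg.eigvalsh(M))   # both positive: M is positive definite
```

Since M is positive definite for this positive definite N, the Lyapunov Stability Theorem confirms the eigenvalue-based conclusion that the system is stable.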

This page was last modified on 25 April 2011, at 23:09. Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply. See Terms of Use for details.