Stability
Contents
1 State-Space Stability
1.1 Stability Definitions
1.2 Marginal Stability
2 Eigenvalues and Poles
3 Impulse Response Matrix
4 Positive Definiteness
5 Lyapunov Stability
State-Space Stability
A key concept when we are talking about the stability of systems is the concept of an equilibrium point:

Equilibrium Point
Given a system f such that:

x'(t) = f(x(t))
A particular state xe is called an equilibrium point if f(xe) = 0 for all time t in the interval [t0, ∞), where t0 is the starting time of the system.
An equilibrium point is also known as a "stationary point", a "critical point", a "singular point", or a "rest state" in other books or literature. The definitions below typically require that the equilibrium point be zero. If we have an equilibrium point xe = a, then we can use the following change of variables to make the equilibrium point zero:

x̄ = x − a
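For example (a simple illustration, not taken from the original text): the scalar system x'(t) = −x(t) + 5 has the equilibrium point xe = 5, since f(5) = −5 + 5 = 0. Defining the new variable x̄ = x − 5 gives x̄'(t) = −x̄(t), whose equilibrium point is x̄ = 0.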
We will also see below that a system's stability is defined in terms of an equilibrium point. Related to the concept of an equilibrium point is the notion of a zero point:

Zero State
A state xz is a zero state if xz = 0. A zero state may or may not be an equilibrium point.
Stability Definitions
The equilibrium x = 0 of the system is stable if, for every initial time t0, there exists an associated finite constant k(t0) such that:

sup_{t ≥ t0} ||φ(t, t0)|| = k(t0) < ∞

Where sup is the supremum, or "maximum," value of the expression, and φ(t, t0) is the state transition matrix. The value of this expression must never exceed the arbitrary finite constant k (and therefore it may not be infinite at any point).

Uniform Stability
The system is defined to be uniformly stable if it is stable for all initial values of t0, with a single constant k0 that does not depend on t0:

sup_{t ≥ t0 ≥ 0} ||φ(t, t0)|| = k0 < ∞

Uniform stability is a more general, and more powerful, form of stability than was previously provided.

Asymptotic Stability
A time-invariant system is asymptotically stable if all the eigenvalues of the system matrix A have negative real parts. If a system is asymptotically stable, it is also BIBO stable. However, the converse is not true: a system that is BIBO stable might not be asymptotically stable.

Uniform Asymptotic Stability
A system is defined to be uniformly asymptotically stable if the system is asymptotically stable for all values of t0.

Exponential Stability
A system is defined to be exponentially stable if the system response decays exponentially towards zero as time approaches infinity. For linear systems, uniform asymptotic stability is the same as exponential stability. This is not the case with non-linear systems.
Eigenvalues and Poles
Consider the state equation x'(t) = Ax(t) + Bu(t). Taking the Laplace transform of both sides, with zero initial conditions, gives:

sX(s) = AX(s) + BU(s)

Subtract AX(s) from both sides:

sX(s) − AX(s) = BU(s)
(sI − A)X(s) = BU(s)

Assuming (sI − A) is nonsingular, we can multiply both sides by the inverse:

X(s) = (sI − A)^(-1) BU(s)

Now, if we remember our formula for finding the matrix inverse from the adjoint (adjugate) matrix,

(sI − A)^(-1) = adj(sI − A) / |sI − A|

we can write:

X(s) = [adj(sI − A) / |sI − A|] BU(s)
Let's look at the denominator (which we will now call D(s)) more closely. The poles of the transfer function are the values of s that satisfy:

D(s) = |sI − A| = 0

And this is precisely the characteristic equation of matrix A! This means that the values of s that satisfy the equation (the poles of our transfer function) are exactly the eigenvalues of matrix A. In the s-domain, it is required that all the poles of the system be located in the left-half plane, and therefore all the eigenvalues of A must have negative real parts.
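As a quick numerical illustration (a sketch assuming Python with NumPy; the matrix A below is a made-up example, not one from the text), we can compute the eigenvalues of A and the roots of D(s) = |sI − A|, confirm that they coincide, and check that every real part is negative:

```python
import numpy as np

# Made-up example system matrix (illustration only).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# Eigenvalues of A.
eigenvalues = np.linalg.eigvals(A)

# Coefficients of the characteristic polynomial D(s) = |sI - A|,
# and its roots (the poles of the transfer function).
char_poly = np.poly(A)
poles = np.roots(char_poly)

print("eigenvalues of A:", np.sort_complex(eigenvalues))
print("roots of D(s):   ", np.sort_complex(poles))

# Asymptotic stability: every eigenvalue must lie in the open left-half plane.
print("asymptotically stable:", bool(np.all(eigenvalues.real < 0)))
```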
Impulse Response Matrix
We can also determine stability from the impulse response matrix of the system, G(t, τ) = C(t)φ(t, τ)B(t) for t ≥ τ (and zero otherwise). The system is uniformly stable if and only if there exists a finite positive constant L such that for all time t and all initial times t0 with t ≥ t0 the following integral is satisfied:

∫ from t0 to t of ||G(t, τ)|| dτ ≤ L
In other words, the above integral must have a finite value, or the system is not uniformly stable. In the time-invariant case, the impulse response matrix reduces to:

G(t) = C e^(At) B, for t ≥ 0
In a time-invariant system, we can use the impulse response matrix to determine if the system is uniformly BIBO stable by taking a similar integral:

∫ from 0 to ∞ of ||G(t)|| dt ≤ L
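As an illustration only (a sketch assuming Python with NumPy and SciPy; the A, B, C matrices below are made up), we can approximate this integral over a long but finite horizon. If the running total settles at a finite value as the horizon grows, that is consistent with uniform BIBO stability:

```python
import numpy as np
from scipy.linalg import expm

# Made-up state-space matrices (illustration only).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# Riemann-sum approximation of the integral of ||G(t)|| = ||C e^(At) B||.
dt = 0.01
total = 0.0
for t in np.arange(0.0, 30.0, dt):
    G_t = C @ expm(A * t) @ B
    total += np.linalg.norm(G_t) * dt

print("approximate integral of ||G(t)||:", total)
```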
Positive Definiteness
A scalar function f(x) can be categorized as follows:

f(x) is positive definite if f(x) > 0 for all x ≠ 0 and f(0) = 0.
f(x) is positive semi-definite if f(x) ≥ 0 for all x; f(x) may equal zero for some x ≠ 0.
f(x) is negative definite if f(x) < 0 for all x ≠ 0 and f(0) = 0.
f(x) is negative semi-definite if f(x) ≤ 0 for all x; f(x) may equal zero for some x ≠ 0.
A matrix X is positive definite if all its leading principal minors are positive. Also, a matrix X is positive definite if all its eigenvalues have positive real parts. These two tests may be used interchangeably. Positive definiteness is a very important concept: the Lyapunov stability test described below depends on it. The other categorizations are not as important, but are included here for completeness.
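A minimal sketch (assuming Python with NumPy; the symmetric matrix X below is a made-up example) applying both tests from the paragraph above:

```python
import numpy as np

# Made-up symmetric test matrix (illustration only).
X = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

# Test 1: all leading principal minors are positive.
minors = [np.linalg.det(X[:k, :k]) for k in range(1, X.shape[0] + 1)]
print("leading principal minors:", minors)
print("positive definite (minor test):", all(m > 0 for m in minors))

# Test 2: all eigenvalues are positive (real, since X is symmetric).
eigenvalues = np.linalg.eigvalsh(X)
print("eigenvalues:", eigenvalues)
print("positive definite (eigenvalue test):", bool(np.all(eigenvalues > 0)))
```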
Lyapunov Stability
For linear systems, we can use the Lyapunov Equation, below, to determine if a system is stable. We will state the Lyapunov Equation first, and then state the Lyapunov Stability Theorem.
[Lyapunov Equation]
MA + A^T M = −N

Where A is the system matrix, and M and N are p × p square matrices.

Lyapunov Stability Theorem
An LTI system x' = Ax is stable if, for an arbitrary positive definite matrix N, there exists a unique positive definite matrix M that satisfies the Lyapunov Equation. Notice that for the Lyapunov Equation to be satisfied, the matrices must be of compatible sizes; in fact, A, M, and N must all be square matrices of equal size. Alternatively, we can write:

Lyapunov Stability Theorem (alternate)
If all the eigenvalues of the system matrix A have negative real parts, then the Lyapunov Equation has a unique solution M for every positive definite matrix N, and the solution can be calculated by:

M = ∫ from 0 to ∞ of e^(A^T t) N e^(At) dt
If the matrix M can be calculated in this manner, the system is asymptotically stable.
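As a sketch (assuming Python with NumPy and SciPy; the matrix A below is a made-up example), we can solve the Lyapunov Equation for N = I and then test whether M is positive definite. SciPy's solve_continuous_lyapunov solves an equation of the form aX + Xa^H = q, so we pass A^T and −N to obtain the form used above:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Made-up system matrix (illustration only).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# Pick an arbitrary positive definite N; the identity matrix is a common choice.
N = np.eye(2)

# solve_continuous_lyapunov(a, q) solves a @ X + X @ a.conj().T = q, so
# passing a = A.T and q = -N solves  A^T M + M A = -N  (i.e. MA + A^T M = -N).
M = solve_continuous_lyapunov(A.T, -N)

# The system is asymptotically stable if the solution M is positive definite.
eigs_M = np.linalg.eigvalsh(M)
print("M =\n", M)
print("eigenvalues of M:", eigs_M)
print("asymptotically stable:", bool(np.all(eigs_M > 0)))
```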