
Advanced Linear Control Systems


Dr. Zeeshan Khan
Department of Electrical Engineering Center for Emerging Sciences, Engineering & Technology (CESET), Islamabad

Course Objectives
The course addresses dynamic systems, that is, systems that evolve with time, and in particular systems that can be modeled by Ordinary Differential Equations (ODEs) and that satisfy certain linearity and time-invariance conditions. Special consideration is given to MIMO systems. We will analyze the response of these systems to inputs and initial conditions; for example, stability and performance issues will be addressed. It is of particular interest to analyze systems obtained as interconnections (e.g., feedback) of two or more other systems. We will learn how to design (control) systems that ensure desirable properties (e.g., stability, performance) of the interconnection with a given dynamic system.


Course Outline
The course will be structured in several major sections:
A review of linear algebra and of least squares problems
Representation, structure, and behavior of multi-input, multi-output (MIMO) linear time-invariant (LTI) systems
Robust stability and performance
Approaches to optimal and robust control design
Hopefully, the material learned in this course will form a valuable foundation for further work in systems, control, estimation, identification, signal processing, and communications.

Assignments, Quizzes and Exams


At least 3 assignments will be given during the course
At least 6 quizzes will be taken
There will be 2 exams:
o 1 Mid-term
o 1 Final exam


Grading Policy
Assignments: 10%
Quizzes: 10%
Mid-term: 30%
Final: 50%

Notes and Texts


Course Book: Stanislaw H. Zak, Systems and Control, Oxford University Press, 2003.
For beginners in Control: K. Ogata, Modern Control Engineering, Prentice Hall.
Linear Algebra, Schaum's Outline Series.
More references:
D. G. Luenberger, Introduction to Dynamic Systems, Wiley, 1979.
T. Kailath, Linear Systems, Prentice-Hall, 1980.
J. C. Doyle, B. A. Francis, and A. R. Tannenbaum, Feedback Control Theory, Macmillan, 1992.

Tentative Schedule
Lecture 1, 02-03-2013: Introduction to dynamic systems and control, Matrix algebra (Appendix)
Lecture 2, 03-03-2013: Projection theorem, Least squares estimation (Appendix)
Lecture 3, 09-03-2013: Dynamical Systems and Modeling (Chapter 1)
Lecture 4, 10-03-2013: Mathematical Modeling and Examples (Chapter 1)
Lecture 5, 16-03-2013: Analysis of Modeling Equations (Chapter 2)
Lecture 6, 17-03-2013: Linearization of differential equations (Chapter 2)
Lecture 7, 23-03-2013: Describing Functions for Nonlinear Systems (Chapter 2)
Lecture 8, 24-03-2013: Reachability, Controllability, Observability, etc. (Chapter 3)
Lecture 9, 30-03-2013: Companion forms and linear state feedback (Chapter 3)
Lecture 10, 31-03-2013: State Estimator and combined Controller-Estimator (Chapter 3)
Lecture 11, 06-04-2013: Stability and methods to determine stability (Chapter 4)
Lecture 12, 07-04-2013: Stability of nonlinear systems and Lyapunov theorems (Chapter 4)

What Is a System?
A system is characterized by two properties, which are as follows:
1. The interrelations between the components that are contained within the system
2. The system boundaries that separate the components within the system from the components outside


What is a dynamic system?


A dynamical system consists of a set of possible states, together with a rule that determines the present state in terms of past states. The system quantities whose behavior can be measured or observed are referred to as the system outputs.
(Block diagram: input → system → output.)

Describing a Control Problem


The essential elements of the control problem, as described by Owens, are as follows:
1. A specified objective for the system
2. A model of a dynamical system to be controlled
3. A set of admissible controllers
4. A means of measuring the performance of any given control strategy to evaluate its effectiveness

Modeling a Dynamic System


A common model of a dynamical system is a finite set of ordinary differential equations of the form
ẋ(t) = f(t, x(t), u(t)), x(t0) = x0,
y(t) = h(t, x(t), u(t)),
where the state x ∈ ℝⁿ, the input u ∈ ℝᵐ, the output y ∈ ℝᵖ, and f and h are vector-valued functions with f : ℝ × ℝⁿ × ℝᵐ → ℝⁿ and h : ℝ × ℝⁿ × ℝᵐ → ℝᵖ.
Another common model of a dynamical system is a finite set of difference equations,
x(k + 1) = f(k, x(k), u(k)), x(k0) = x0,
y(k) = h(k, x(k), u(k)),
where x(k) = x(kh), u(k) = u(kh), h is the sampling interval, and k ≥ 0 is an integer.
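As a concrete illustration of the difference-equation form, the sketch below simulates a two-state, single-input, single-output linear model in Python. It is only a minimal sketch: the matrices A, B, C and the unit-step input are assumed values chosen for illustration, not material from the course text.

```python
import numpy as np

# Discrete-time state-space model x(k+1) = f(k, x(k), u(k)), y(k) = h(k, x(k), u(k)).
# The matrices below are assumed purely for illustration (n = 2, m = 1, p = 1).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

def f(k, x, u):
    """State-update map (linear time-invariant example)."""
    return A @ x + B @ u

def h(k, x, u):
    """Output map: measure the first state."""
    return C @ x

x = np.array([0.0, 0.0])      # x(k0) = x0
u = np.array([1.0])           # unit-step input
for k in range(50):
    x = f(k, x, u)
y = h(50, x, u)

print(f"output after 50 steps ≈ {y[0]:.3f}")   # approaches the steady-state value of 5
```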


Open-Loop Versus Closed-Loop


We distinguish between two types of control systems. They are: Open-loop control systems Closed-loop control systems


Open-Loop System


An open-loop control system usually contains the following:
1. A process to be controlled, labeled the plant
2. The controlling variable of the plant, called the plant input, or just input for short
3. The controlled variable of the plant, called the plant output, or just output for short
4. A reference input, which dictates the desired value of the output
5. A controller that acts upon the reference input in order to form the system input, forcing the output behavior in accordance with the reference signal


Connecting Feedback and the Summing Point


6. The feedback loop, where the output signal is measured with a sensor and the measured signal is then fed back to the summing junction
7. The summing junction, where the measured output signal is subtracted from the reference (command) input signal in order to generate an error signal, also labeled the actuating signal


Closed-Loop System


In a closed-loop system the error signal causes an appropriate action of the controller, which in turn instructs the plant to behave in a certain way in order to approach the desired output, as specified by the reference input signal. Thus, in the closed-loop system, the plant output information is fed back to the controller, and the controller then appropriately modifies the plant output behavior. A controller, also called a compensator, can be placed either in the forward loop, as in Figure, or in the feedback loop.
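The advantage of feeding the measured output back can be seen in a small numerical experiment. The sketch below is only an illustration (not from the slides): the first-order plant, the modelling error, and the proportional gain Kp are all assumed values. The open-loop input is precomputed from a nominal model, while the closed-loop input is formed from the error signal produced at the summing junction.

```python
# Plant: y[k+1] = a_true*y[k] + b*u[k].  The controller only knows the nominal
# value a, so the open-loop input misses the reference, while the feedback
# controller compensates for the model mismatch.
a, b   = 0.9, 0.5      # nominal model used for design (assumed)
a_true = 0.95          # actual plant (assumed model mismatch)
r  = 1.0               # reference input
Kp = 1.8               # proportional gain (assumed; keeps the loop stable)

y_ol, y_cl = 0.0, 0.0
for k in range(100):
    u_ol = (1.0 - a) / b * r       # open loop: fixed input from the nominal model
    u_cl = Kp * (r - y_cl)         # closed loop: error signal drives the controller
    y_ol = a_true * y_ol + b * u_ol
    y_cl = a_true * y_cl + b * u_cl

print(f"open-loop output:   {y_ol:.3f}")   # about 2.0, far from r = 1
print(f"closed-loop output: {y_cl:.3f}")   # about 0.95, close to r = 1
```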

Closed-Loop System (Contd.)


Axiomatic Definition of a Dynamical System


Following Kalman, we can define a dynamical system formally using the following axioms:
1. There is given a state space X; there is also T, an interval in the real line representing time.
2. There is given a space U of functions on T that represent the inputs to the system.
3. For any initial time t0 in T, any initial state x0 in X, and any input u in U defined for t ≥ t0, the future states of the system are determined by the transition mapping φ : T × X × U → X, written as φ(t1; t0, x(t0), u(t)) = x(t1).

Axiomatic Definition of a Dynamical System (Contd.)


4. The identity property of the transition mapping, that is, φ(t0; t0, x(t0), u(t0)) = x(t0).
5. The semigroup property of the transition mapping, that is, φ(t2; t0, x(t0), u(t)) = φ(t2; t1, φ(t1; t0, x(t0), u(t)), u(t)).
The semigroup property axiom states that it is irrelevant whether the system arrives at the state at time t2 by a direct transition from the state at time t0, or by first going to an intermediate state at time t1 and then, having been restarted from the state at time t1, moving to the state at time t2. In either case the system, satisfying the semigroup axiom, will arrive at the same state at time t2. This property is illustrated in the figure.
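For a system whose transition mapping is known in closed form, the semigroup property can be checked numerically. The sketch below is an illustration only (not from the slides): it uses the zero-input LTI system ẋ = Ax, whose transition mapping is φ(t1; t0, x0) = exp(A(t1 − t0)) x0, with an assumed matrix A, initial state, and time instants.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])      # assumed system matrix
x0 = np.array([1.0, 0.0])         # assumed initial state
t0, t1, t2 = 0.0, 0.7, 1.5

def phi(tb, ta, x):
    """Transition mapping of xdot = A x (zero input) from time ta to tb."""
    return expm(A * (tb - ta)) @ x

direct   = phi(t2, t0, x0)                 # one transition from t0 to t2
indirect = phi(t2, t1, phi(t1, t0, x0))    # stop at t1, restart, continue to t2

print(np.allclose(direct, indirect))       # True: both paths reach the same state
```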


Axiomatic Definition of a Dynamical System (Contd.)


6. The causality property, that is, φ(t; t0, x(t0), u1(t)) = φ(t; t0, x(t0), u2(t)) for t0, t ∈ T if and only if u1(t) = u2(t) for all t ∈ T.
7. Every output of the system is a function of the form h : T × X × U → Y, where Y is the output space.
8. The transition mapping φ and the output mapping h are continuous functions.
9. If, in addition, the system satisfies the following axiom, then it is said to be time-invariant: φ(t1; t0, x(t0), u(t)) = φ(t1 + τ; t0 + τ, x(t0), u(t − τ)), where t0, t1 ∈ T.
Thus, a dynamical system can be defined formally as a quintuple {T, X, U, Y, φ} satisfying the above axioms.

Axiomatic Definition of a Dynamical System (Contd.)


Our attention will be focused on dynamical systems modeled by a set of ordinary differential equations
ẋi = fi(t, x1, x2, . . . , xn, u1, u2, . . . , um), xi(t0) = xi0, i = 1, 2, . . . , n,
together with p functions
yj = hj(t, x1, x2, . . . , xn, u1, u2, . . . , um), j = 1, 2, . . . , p.
The system model state is x = [x1 x2 · · · xn]ᵀ ∈ ℝⁿ, the system input is u = [u1 u2 · · · um]ᵀ ∈ ℝᵐ, and the system output is y = [y1 y2 · · · yp]ᵀ ∈ ℝᵖ. In vector notation the above system model has the form
ẋ = f(t, x, u), x(t0) = x0,
y = h(t, x, u),
where f : ℝ × ℝⁿ × ℝᵐ → ℝⁿ and h : ℝ × ℝⁿ × ℝᵐ → ℝᵖ are vector-valued functions. In what follows, we regard a dynamical system under consideration and its model, represented by the above equations, as equivalent. We now illustrate the dynamical system axioms with a simple example.
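For the continuous-time form ẋ = f(t, x, u), y = h(t, x, u), a pendulum is a convenient concrete example. The sketch below is an illustrative assumption, not material from the text: the pendulum parameters, the zero input, and the use of SciPy's solve_ivp are all choices made here for demonstration.

```python
import numpy as np
from scipy.integrate import solve_ivp

g, l, m = 9.81, 1.0, 1.0            # assumed pendulum parameters

def f(t, x, u):
    """State equation: x = [angle, angular velocity], u(t) = applied torque."""
    theta, omega = x
    return [omega, -(g / l) * np.sin(theta) + u(t) / (m * l**2)]

def h(t, x, u):
    """Output equation: measure only the angle."""
    return x[0]

u = lambda t: 0.0                   # zero input (free swing)
sol = solve_ivp(lambda t, x: f(t, x, u), (0.0, 10.0), [0.5, 0.0], max_step=0.01)
y = [h(tk, xk, u) for tk, xk in zip(sol.t, sol.y.T)]
print(f"angle after 10 s: {y[-1]:.3f} rad")
```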


Mathematical Modeling Process


Review of Work and Energy Concepts


Suppose we are given a particle of constant mass m subjected to a force F. Then, by Newton's second law, we have
F = d(mv)/dt = m dv/dt.
Here the force F and the velocity v are vectors. Therefore, they can be represented as
F = [F1 F2 F3]ᵀ and v = [v1 v2 v3]ᵀ.
Using the above notation, we can write Newton's second law componentwise as Fi = m dvi/dt, i = 1, 2, 3.

Review (contd.)
Suppose now that a force F is acting on a particle located at a point A and the particle moves to a point B. The work W done by F along an infinitesimally small displacement ds is
dW = Fᵀ ds,

Review (contd.)
where s = [x1 x2 x3]ᵀ. The work WAB done on the path from A to B is obtained by integrating the above equation:
WAB = ∫ (from A to B) Fᵀ ds.
We now would like to establish a relation between work and kinetic energy. For this, observe that
ds = v dt and (dv/dt)ᵀ v = ½ d(vᵀv)/dt.


Review (contd.)
Using the above relation and Newton's second law, we can express WAB in a different way as
WAB = ∫ (from A to B) Fᵀ ds = ∫ (from tA to tB) m (dv/dt)ᵀ v dt = ½ m vBᵀvB − ½ m vAᵀvA.


Review (contd.)
The above relation can be used to define the kinetic energy of a particle, K = ½ m vᵀv, as the work required to change its velocity from some value vA to a final value vB. With KA = ½ m vAᵀvA and KB = ½ m vBᵀvB, the relation WAB = KB − KA = ΔK is also known as the work-energy theorem for a particle.
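The work-energy theorem can be checked numerically: integrate Newton's law for an assumed one-dimensional force, accumulate the work along the trajectory, and compare it with the change in kinetic energy. The force law, mass, initial data, and step size in the sketch below are all assumptions made only for this illustration.

```python
m, k = 2.0, 3.0                     # assumed mass and spring constant
F = lambda x: -k * x                # assumed force law

dt, x, v = 1e-4, 1.0, 0.5           # step size and initial state (assumed)
K_A = 0.5 * m * v**2
W = 0.0
for _ in range(20_000):             # integrate for 2 seconds
    a = F(x) / m                    # Newton's second law
    ds = v * dt
    W += F(x) * ds                  # accumulate work  dW = F ds
    x += ds
    v += a * dt
K_B = 0.5 * m * v**2

print(f"W_AB      = {W:.4f}")
print(f"K_B - K_A = {K_B - K_A:.4f}")   # matches W_AB up to the integration error
```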


Potential and Kinetic Energy


A force is conservative if the work done by the force on a particle that moves through any round trip is zero. In other words, a force is conservative if the work done by it on a particle that moves between two points depends only on these points and not on the path followed.

We now review the notion of potential energy, or the energy of configuration. Recall that if the kinetic energy K of a particle changes by ΔK, then the potential energy U must change by an equal but opposite amount, so that the sum of the two changes is zero; that is,
ΔK + ΔU = 0.
This is equivalent to saying that any change in the kinetic energy of the particle is compensated for by an equal but opposite change in the potential energy U of the particle, so that their sum remains constant; that is,
K + U = constant.

The potential energy of a particle represents a form of stored energy that can be recovered and converted into kinetic energy. If we now use the work-energy theorem, then we obtain
W = ΔK = −ΔU.
The work done by a conservative force depends only on the starting and end points of the motion and not on the path followed between them. Therefore, for motion in one dimension we obtain
ΔU = U(x) − U(x0) = −W = −∫ (from x0 to x) F(s) ds.

Thus, we can write
F(x) = −dU(x)/dx.
Generalizing to motion in three dimensions yields
F = −[∂U/∂x1  ∂U/∂x2  ∂U/∂x3]ᵀ = −∇U(x).

Thus, a conservative force is the negative gradient of a potential energy function. A conservative vector field has the property that the work done by it on a particle that moves between two points depends only on these points and not on the path followed.
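The relation F = −dU/dx can likewise be verified numerically. The sketch below assumes a spring potential U(x) = ½ k x² (an illustrative choice, not from the text) and compares the analytic force with a central-difference approximation of −dU/dx.

```python
import numpy as np

k = 3.0                                    # assumed spring constant
U = lambda x: 0.5 * k * x**2               # assumed potential energy
F = lambda x: -k * x                       # corresponding conservative force

xs = np.linspace(-2.0, 2.0, 9)
dx = 1e-6
F_num = -(U(xs + dx) - U(xs - dx)) / (2 * dx)   # central difference of -dU/dx

print(np.max(np.abs(F_num - F(xs))))       # round-off level: the two forces agree
```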

We now represent Newton's equation in an equivalent format that establishes a connection with the Lagrange equations of motion, which are discussed in the following section. To proceed, write vi = ẋi and note that the kinetic energy K = ½ m (ẋ1² + ẋ2² + ẋ3²) depends only on the velocities, so that
∂K/∂ẋi = m ẋi, i = 1, 2, 3.
Hence, using Newton's second law and F = −∇U,
d/dt (∂K/∂ẋi) = m ẍi = Fi = −∂U/∂xi,
or
d/dt (∂K/∂ẋi) + ∂U/∂xi = 0, i = 1, 2, 3.


Let L = K − U. The function L defined above is called the Lagrangian function, or just the Lagrangian. Note that
∂L/∂ẋi = ∂K/∂ẋi and ∂L/∂xi = −∂U/∂xi,
since K does not depend on the positions xi and U does not depend on the velocities ẋi.

Hence the preceding equation can be written as
d/dt (∂L/∂ẋi) − ∂L/∂xi = 0, i = 1, 2, 3.

The equations above are called the Lagrange equations of motion, in Cartesian coordinates, for a single particle. They are just an equivalent representation of Newton's equations, as described above.
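As a symbolic check of this equivalence, the sketch below uses SymPy to form L = K − U for a single particle in an assumed spring potential and evaluates d/dt(∂L/∂ẋ) − ∂L/∂x; the result is Newton's equation m ẍ = −k x. The choice of potential and the use of SymPy are assumptions made for illustration only.

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
x = sp.Function('x')(t)

K = sp.Rational(1, 2) * m * sp.diff(x, t)**2   # kinetic energy
U = sp.Rational(1, 2) * k * x**2               # assumed spring potential
L = K - U                                      # Lagrangian L = K - U

# Lagrange equation: d/dt(dL/d(xdot)) - dL/dx = 0
lagrange_lhs = sp.diff(sp.diff(L, sp.diff(x, t)), t) - sp.diff(L, x)
print(sp.simplify(lagrange_lhs))               # k*x(t) + m*x''(t), i.e. m*xddot = -k*x
```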