
Repetitive control systems: old and new ideas

George WEISS
Center for Systems and Control Engineering School of Engineering, University of Exeter Exeter EX4 4QF, United Kingdom e-mail: G.Weiss@exeter.ac.uk

Abstract: We discuss the concept of a repetitive control system, and recall the main stability theorem for such systems. We give new, improved estimates on the error signal in a repetitive control system. As an extension of repetitive control, we introduce a new control system which is capable of tracking (or rejecting) a superposition of several periodic signals with different periods.

Keywords: periodic signal; internal model; delay line; robust stabilization; infinite-dimensional system; regular linear system; exponential stability; steady-state response.

1. What is a repetitive control system


Many signals in engineering are periodic, or at least they can be well approximated by a periodic signal over a large time interval. This is true, for example, for most signals associated with engines, electrical motors and generators, converters, or machines performing a task over and over again. Thus, it is a natural control problem to try to track a periodic signal with the output of a plant, or (what is almost the same), to try to reject a periodic disturbance acting on a control system. We examine this problem in Sections 1 and 3 of this paper (Section 2 is for background). In Section 4 we shall indicate a way of generalizing these ideas to cope with superpositions of periodic signals of arbitrary periods. We assume that the plant to be controlled is linear, time-invariant and finite-dimensional. Then one possible approach to the tracking and/or disturbance rejection problem described above is to use the internal model principle of Francis and Wonham [1]. This leads to the control system shown in Figure 1, where P is the transfer function of the plant, C is the transfer function of the compensator and M is the transfer function of the internal model. In the same diagram, r is the reference signal, d is the disturbance, and e is the error signal which should be made small.

Figure 1: Repetitive control system. M is the internal model, C is the stabilizing compensator and P is the plant. The reference r and the disturbance d are periodic with period τ, and the error e should be kept small.

The internal model and the compensator are also linear and time-invariant systems. The internal model is infinite-dimensional and it is capable of generating signals which are similar to r and d, that is, periodic signals of a given period τ. The compensator is finite-dimensional. None of the three subsystems is stable, in general, but the compensator must be designed such that the whole feedback system shown in Figure 1 should be exponentially stable. The internal model and the compensator are the two components of the controller. Such a feedback system is called a repetitive control system. It has been studied by Inoue, Nakano and Iwai [6], by Inoue, Nakano, Kubo, Matsumoto and Baba [7] and by others. Probably the main references about this subject are the papers by Hara, Yamamoto, Omata and Nakano [4] and by Yamamoto [17]. A closely related and active area of control theory is iterative learning control; see for example the books by Rogers and Owens [12] and by Moore [10].

For the sake of simplicity, in this paper we shall consider that all signals are scalar, so that the three subsystems in Figure 1 are SISO. Our arguments will be in the frequency domain and (again to simplify the exposition) the connections to the state space theory will be mentioned briefly and without proof. We shall not strive to state the results in their greatest generality. We assume that the internal model has the following transfer function:

   M(s) = 1 / (1 − e^{−sτ} W(s)) ,        (1.1)

where W is a real-rational stable transfer function with

   ‖W‖_∞ ≤ 1 .        (1.2)

In (1.2), ‖W‖_∞ denotes the H^∞ norm (the supremum of |W(s)| over all s ∈ ℂ with Re s > 0). Such a transfer function M can be obtained by connecting a delay line into a feedback loop, as shown in Figure 2.

Figure 2: The structure of the internal model M. Usually, W is a low-pass filter, needed both for technological and for robustness reasons.

We explain very briefly how such a control system works, and what the role of W is (the rigorous details will be given in Section 3). First we consider the case W = 1. Then M has infinitely many poles on the imaginary axis, namely at ikω, where k ∈ ℤ and ω = 2π/τ. Thus, the transfer function G from r to e has zeros at the same points, because

   G = (1 + PCM)^{−1} ,        (1.3)

and there are no unstable pole-zero cancellations in the product PCM (since the whole system is stable). The same is true for the transfer function from d to e, which is GP. These zeros of G imply that if r, d ∈ L²_loc[0, ∞) are periodic with period τ, then ê (the Laplace transform of e) will not have poles at the points ikω. This, combined with the stability of the system, implies that for some ε > 0 we have

   ∫_0^∞ e^{εt} |e(t)|² dt < ∞ ,        (1.4)

which in particular is much stronger than e ∈ L²[0, ∞).
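To make this notch behaviour concrete, here is a minimal numerical sketch (not taken from the paper; the plant, the compensator gain and all numbers are assumptions made only for illustration). It evaluates the sensitivity G = (1 + PCM)^{−1} with the ideal internal model (W = 1) on the imaginary axis and prints its magnitude near the first few multiples of ω, where it essentially vanishes.

# Illustrative sketch (not from the paper): with W = 1 the internal model
# M(s) = 1/(1 - e^{-s*tau}) has poles at i*k*omega, omega = 2*pi/tau, so the
# sensitivity G = 1/(1 + P*C*M) has zeros at exactly those points.  The plant
# P, the gain C and all numbers below are made-up assumptions.
import numpy as np

tau = 1.0                          # period of the reference/disturbance
omega = 2 * np.pi / tau            # fundamental frequency

def P(s):                          # assumed toy plant: first-order lag
    return 1.0 / (s + 1.0)

def C(s):                          # assumed toy compensator: constant gain
    return 20.0

def M(s):                          # ideal internal model (W = 1), eq. (1.1)
    return 1.0 / (1.0 - np.exp(-s * tau))

def G(s):                          # sensitivity from r to e, eq. (1.3)
    return 1.0 / (1.0 + P(s) * C(s) * M(s))

w = np.linspace(0.1, 5 * omega, 50000)
gain = np.abs(G(1j * w))
for k in range(1, 5):              # |G| collapses near each multiple of omega
    print(f"|G(i*{k}*omega)| ~ {gain[np.argmin(np.abs(w - k * omega))]:.2e}")

The printed values are tiny (limited only by how closely the frequency grid lands on the exact multiples of ω), which is the frequency-domain counterpart of (1.4).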

In practice, we cannot choose W = 1, for three reasons:

(1) The first reason is technological: we cannot realize a delay line with infinite bandwidth, and thus any delay line that we can build has to be modelled as an ideal delay line in series with a low-pass filter.

(2) The conditions that C must satisfy in order to stabilize the whole system are the following: it must stabilize P (as if M would not exist, meaning M = 1), and moreover

   ‖W(1 + PC)^{−1}‖_∞ < 1 .        (1.5)

If W = 1, then (1.5) can be satisfied if and only if P^{−1} ∈ H^∞, which is almost never the case. But if W is a genuine filter, then the chances of finding a C satisfying (1.5) become much better.

(3) If W = 1, then the resulting feedback system, even if it is stable, is not robustly stable with respect to delays. This means that arbitrarily small delays at any point in the feedback loop shown in Figure 1 will destabilize the system. We can overcome this problem by imposing

   |W(∞)| = 0   and   |P(∞)C(∞)| < 1 .        (1.6)

(These conditions can be relaxed, see Section 3 for details, but in any case we need that |W(∞)| < 1.)

The above considerations show that W must be a nonconstant filter. How should we choose W and what tracking and disturbance rejection performance can we hope for? The answer, roughly, goes like this: in every practical situation, r and d are confined to a certain frequency band [ω_l, ω_u] (the lower bound ω_l might be zero, but the upper bound ω_u cannot be infinity). This means that r can be written as a finite Fourier series (recall that ω = 2π/τ):

   r(t) = Σ_{|kω| ∈ [ω_l, ω_u]} r_k e^{ikωt} .

A similar formula holds for d, of course. Choose W such that W(iν) is very close to 1 for ν ∈ [ω_l, ω_u]. Then M will have poles very close to ikω, for those k ∈ ℤ for which |kω| ∈ [ω_l, ω_u]. Now G will have zeros at the poles of M, and hence it will be close to zero (but not quite) at the points ikω, for the relevant values of k. The error signal can be decomposed into two components:

   e = e_ss + e_tr ,        (1.7)

where the steady-state error e_ss ∈ L²_loc[0, ∞) is periodic and small, and the transient error e_tr behaves as in (1.4). The size of e_ss is measured by its L²-norm over one period (of length τ). The closer W(iν) is to 1 in the relevant frequency band, the smaller is e_ss. Precise statements and formulas will be given in Section 3.
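As a concrete (and entirely hypothetical) illustration of this choice, the sketch below takes W to be a first-order low-pass filter with a corner frequency far above an assumed band [ω_l, ω_u]; it checks numerically that ‖W‖_∞ ≤ 1, that W rolls off at high frequencies, and that |1 − W(iν)| stays small over the band, which is the quantity that governs the size of e_ss in Section 3.

# A minimal sketch with made-up numbers; the band and the corner frequency are
# assumptions, and a first-order lag is only one of many possible choices of W.
import numpy as np

omega_l, omega_u = 0.0, 50.0              # assumed frequency band of r and d
omega_c = 1000.0                          # assumed corner frequency of the filter

def W(s):                                 # first-order low-pass, W(0) = 1
    return 1.0 / (1.0 + s / omega_c)

band = np.linspace(omega_l, omega_u, 2000)
wide = np.logspace(-2, 6, 2000)           # crude sweep to estimate ||W||_inf
print("estimated ||W||_inf          :", np.abs(W(1j * wide)).max())      # <= 1
print("max |1 - W(i nu)| on the band:", np.abs(1.0 - W(1j * band)).max())
print("|W| three decades above band :", abs(W(1j * 1000 * omega_u)))     # rolls off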

2. Some background on regular transfer functions and the feedback connection


In this section we recall some known facts about well-posed and regular transfer functions, stability, stabilizing compensators, and robustness with respect to small delays. As already mentioned, we consider only SISO systems and transfer functions, without repeating this assumption. We make the following convention: if a meromorphic function is defined on some right half-plane and can be extended meromorphically to a greater right half-plane, we will not make any distinction between the initial function and its extension. This will not lead to confusion, since the extension (if it exists) is unique.
For each α ∈ ℝ, H^∞_α denotes the space of bounded and analytic functions on the right half-plane

   ℂ_α = { s ∈ ℂ | Re s > α } .

It is well-known that H^∞_α is a Banach space with the supremum norm. With the convention of the previous paragraph, we have that H^∞_α ⊂ H^∞_β if α ≤ β. For α = 0 we also use the notation H^∞ instead of H^∞_0. A well-posed transfer function is an element of one of the spaces H^∞_α. The well-posed transfer functions form an algebra (i.e., we can add and multiply them). Any well-posed transfer function G defines a shift-invariant and continuous operator F on L²_loc[0, ∞). If u ∈ L²_loc[0, ∞) has a Laplace transform û, then y = Fu is given via its Laplace transform ŷ, as follows:

   ŷ = G û .        (2.1)

Let G be a well-posed transfer function. We say that G is exponentially stable if G ∈ H^∞_α for some α < 0. For example, e^{−sτ} (with τ > 0) is exponentially stable, and so is any proper rational function whose poles all have negative real parts. For each α ∈ ℝ, we denote by L²_α[0, ∞) the space of functions of the form e^{αt} v(t), where v ∈ L²[0, ∞). If G ∈ H^∞_α and u ∈ L²_β[0, ∞) for some β ≥ α, then the function y defined by (2.1) is also in L²_β[0, ∞). For this reason, in view of the particular case α = β = 0, we say that G is L²-stable if G ∈ H^∞. A well-posed transfer function G is called regular if the limit

   lim_{λ → +∞, λ ∈ ℝ} G(λ) = D

exists. In this case, the number D is called the feedthrough value of G. For example, any well-posed transfer function obtainable from rational functions and delays by finitely many algebraic operations is regular (this includes all the transfer functions which arise in feedback systems of the type encountered in the previous sections). For a detailed discussion of well-posed and of regular transfer functions we refer to Weiss [14] and [15].

We shall not explain here what a well-posed linear system is: the reader may consult, for example, [14]. The transfer function of any well-posed linear system is well-posed. Conversely, for any well-posed transfer function G we can find many well-posed linear systems whose transfer function is G, as follows from results in Salamon [13]. A well-posed linear system is called regular if its transfer function is regular. Such systems have a simple description via their generating operators A, B, C, D, which are the analogues of the matrices appearing in the usual representation ẋ = Ax + Bu, y = Cx + Du of finite-dimensional linear systems. The operators A, B and C are unbounded in general. For details we refer again to [14]. If a well-posed linear system is exponentially stable, then its transfer function is also exponentially stable. The converse is false (also in finite dimensions), but the following result has been proved by Rebarber [11]: if a regular linear system is stabilizable, detectable and its transfer function is L²-stable, then the system is exponentially stable. For the precise definition of stabilizability and detectability (at the level of generality needed here) we refer to [11] and to Weiss and Curtain [16].

Let P and C be well-posed transfer functions. We say that C stabilizes P if the matrix

   L = [1, P; −C, 1]^{−1}

is L²-stable, i.e., each of its four entries is in H^∞. This means that if we connect P and C in a feedback loop with two external inputs, as in Figure 3, then the four transfer functions from r and d to e and u are L²-stable. If both P and C are L²-stable, then C stabilizes P if and only if (1 + PC)^{−1} is L²-stable. For details and for further references on this subject we refer to the survey paper of Logemann [8] and to Georgiou and Smith [3].
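For rational SISO transfer functions this stability test reduces to a polynomial root check. The sketch below (a toy illustration, not the paper's procedure; the plant and compensator are made up, and the factorizations are assumed coprime) verifies that all four entries of L are in H^∞ by checking that the closed-loop characteristic polynomial has all its roots in the open left half-plane.

# Toy check of "C stabilizes P" for rational SISO P = nP/dP, C = nC/dC given by
# coprime polynomial factorizations: internal stability is equivalent to all
# roots of dP*dC + nP*nC lying in the open left half-plane.
import numpy as np

def stabilizes(nP, dP, nC, dC):
    """Coefficient lists, highest power first (numpy's convention)."""
    char_poly = np.polyadd(np.polymul(dP, dC), np.polymul(nP, nC))
    roots = np.roots(char_poly)
    return bool(np.all(roots.real < 0.0)), roots

# made-up example: P(s) = 1/(s - 1) is unstable, C(s) = 10*(s + 2)/(s + 5)
ok, poles = stabilizes(nP=[1.0], dP=[1.0, -1.0], nC=[10.0, 20.0], dC=[1.0, 5.0])
print("C stabilizes P:", ok)
print("closed-loop poles:", np.round(poles, 3))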

Figure 3: The standard feedback connection of two systems. We may think of P as the plant, C as the compensator, r as the reference, d as the disturbance and e as the tracking error.

Using the result of Rebarber mentioned earlier, the following proposition was proved in Section 4 of [16] (here we give the simplified version corresponding to SISO systems).

Proposition 2.1. Let Σ_p and Σ_c be stabilizable and detectable SISO regular linear systems, with transfer functions P and C, respectively. Then the feedback connection shown in Figure 3 defines an exponentially stable regular linear system if and only if C stabilizes P.

Let P and C be well-posed transfer functions. For each ε ≥ 0, we define the transfer function C_ε by C_ε(s) = e^{−εs} C(s). We say that C stabilizes P robustly with respect to delays if there exists an ε_0 > 0 such that for each ε ∈ [0, ε_0], C_ε stabilizes P. The intuitive meaning of this is that the introduction of sufficiently small delays into the feedback loop shown in Figure 3 does not destroy its L²-stability. This concept has been introduced in Logemann, Rebarber and Weiss [9] and it is closely related (in the SISO case, almost equivalent) to the concept of w-stability, introduced by Georgiou and Smith in [2] and further studied by them in [3]. The following theorem is taken from Section 8 of [9].

Theorem 2.2. Let P and C be regular transfer functions and suppose that C stabilizes P. Let γ be defined by

   γ = lim sup_{|s| → ∞, Re s ≥ 0} |P(s)C(s)| .        (2.2)

(a) If γ < 1, then C stabilizes P robustly with respect to delays.

(b) If γ > 1, then C does not stabilize P robustly with respect to delays.

In the simple particular case when P and C are rational, the condition γ < 1 becomes |P(∞)C(∞)| < 1, which we have encountered in Section 1. Let P and C be regular transfer functions which are meromorphic on ℂ_α for some α < 0. It follows from the last theorem that if C stabilizes P and P has infinitely many poles in the closed right half-plane (where Re s ≥ 0), then C does not stabilize P robustly with respect to delays. (This statement is the SISO version of Theorem 1.2 in [9].)

Suppose that the systems Σ_p and Σ_c are as in Proposition 2.1, and their feedback connection (shown in Figure 3) is exponentially stable. This feedback system is called robustly stable with respect to delays if there exists an ε_0 > 0 such that for each ε ∈ [0, ε_0], any feedback system obtained from the previous one by introducing a delay of ε into the feedback loop is still exponentially stable. From Proposition 2.1 it follows that this is the case if and only if C stabilizes P robustly with respect to delays, so that we can use Theorem 2.2 to verify if the condition is satisfied.
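In the rational case the number γ of Theorem 2.2 is simply |P(∞)C(∞)|, so it can be read off from the relative degree of the loop PC. The fragment below is a hypothetical helper (toy data, not from the paper) that computes this value and applies the test.

# gamma = limsup |P(s)C(s)| as |s| -> infinity; for rational P, C this is
# |(PC)(infinity)|: 0 if PC is strictly proper, the ratio of leading
# coefficients if it is bi-proper, and +infinity if it is improper.
import numpy as np

def value_at_infinity(num, den):
    """|H(inf)| for a rational H = num/den (coefficients highest power first)."""
    num = np.trim_zeros(np.atleast_1d(num), 'f')
    den = np.trim_zeros(np.atleast_1d(den), 'f')
    if len(num) < len(den):
        return 0.0
    if len(num) == len(den):
        return abs(num[0] / den[0])
    return np.inf

# assumed toy data: P(s) = 1/(s + 1), C(s) = 5*(s + 2)/(s + 10)
nP, dP = [1.0], [1.0, 1.0]
nC, dC = [5.0, 10.0], [1.0, 10.0]
gamma = value_at_infinity(np.polymul(nP, nC), np.polymul(dP, dC))
print("gamma =", gamma, "->", "robust w.r.t. delays" if gamma < 1 else "possibly not robust")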

3. The main theorems for a single period


In this section we state our results concerning tracking and disturbance rejection of periodic signals. Thus, we are dealing with a single period τ, the repetitive control system is as shown in Figure 1 and the internal model is as in Figure 2. We give only a short outline of the proofs. As in Section 1, we assume that the plant is a linear time-invariant finite-dimensional SISO system. Moreover, the plant is assumed to be stabilizable and detectable (for example, this is the case if the plant is minimal). Thus, its transfer function, denoted by P, is a rational proper scalar function. Exactly the same assumptions are made about the filter in cascade with the delay line, whose transfer function is denoted by W, and about the stabilizing compensator, whose transfer function is denoted by C. Moreover, as in Section 1, we assume that W ∈ H^∞ and ‖W‖_∞ ≤ 1. The delay line with transfer function e^{−sτ} is realized as in [14], so that its state space is L²[−τ, 0], and it is exponentially stable (its growth bound is −∞).

The main stability and robustness result for the repetitive control system shown in Figures 1 and 2 is the following.

Theorem 3.1. Assume that the compensator transfer function C satisfies the following three conditions:

(1) (1 + PC)^{−1} ∈ H^∞,

(2) there are no unstable pole-zero cancellations in the product PC,

(3) ‖W(1 + PC)^{−1}‖_∞ < 1.

Then the feedback system shown in Figures 1 and 2 is exponentially stable. If, moreover, the conditions

   |W(∞)| < 1   and   |P(∞)C(∞)| < 1 − |W(∞)|        (3.1)

hold, then this feedback system is robustly stable with respect to delays.

The first part of this theorem (the exponential stability in state space) is due to Hara, Yamamoto, Omata and Nakano [4] (see their Corollary 1, with C2 = 0). The robustness with respect to delays, claimed in the second part of the theorem, also refers to exponential stability in the state space, as introduced at the end of Section 2. Note that the above theorem does not make any mention of the reference or disturbance signals appearing in Figure 1, or of any periodic signals: these will make their entry only in the next theorem. We sketch our proof for both parts because our reasoning is different from the one in [4], and it is instructive for the multi-periodic case which we want to handle later.
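Before turning to the proof, here is a rough numerical sanity check of the three conditions (a sketch only: P, C, W and τ are made-up, and a finite frequency sweep merely estimates an H^∞ norm from below, so a real design would rely on proper robust-control software).

import numpy as np

tau = 1.0
def P(s): return 1.0 / (s + 1.0)               # assumed toy plant
def C(s): return 20.0                          # assumed toy compensator
def W(s): return 1.0 / (1.0 + s / 1000.0)      # assumed low-pass filter

# condition (1): for these choices 1 + P*C = (s + 21)/(s + 1), so the only pole
# of (1 + PC)^(-1) is the root of s + 21, well inside the left half-plane.
print("pole of (1+PC)^-1:", np.roots([1.0, 21.0]))

# condition (2) holds trivially here (P and C have no unstable poles or zeros).

# condition (3): sweep |W(i w) (1 + P(i w)C(i w))^(-1)| over a wide grid.
w = np.logspace(-3, 5, 200000)
S0 = 1.0 / (1.0 + P(1j * w) * C(1j * w))
print("estimated ||W (1+PC)^-1||_inf:", np.abs(W(1j * w) * S0).max())    # < 1

# conditions (3.1): W and P are strictly proper here, so |W(inf)| = 0 < 1 and
# |P(inf) C(inf)| = 0 < 1 - |W(inf)|; delay robustness also holds.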

Sketch of the proof. First we prove the exponential stability. By simple transformations, the feedback system (with no inputs, only with an initial state) is equivalent to the one shown in Figure 4, where S is the transfer function of a finite-dimensional system Σ_s containing everything except the delay line.

Figure 4: A feedback system which, when there are no inputs, is equivalent to the one in Figures 1 and 2. The subsystem Σ_s with transfer function S contains everything but the delay line.

A simple finite-dimensional argument starting from conditions (1) and (2) yields that Σ_s is stable (in the state space) and its transfer function is S = W(1 + PC)^{−1}. It now follows from Proposition 2.1 that the system in Figure 4 is exponentially stable iff (1 − e^{−sτ} S(s))^{−1} ∈ H^∞. Since ‖e^{−sτ}‖_∞ = 1, it follows from condition (3) that this is indeed the case.

Now we turn to the robustness part of the theorem. It is not difficult to show that both the internal model (with transfer function M) and the cascade connection of the compensator and the plant (with transfer function PC) are stabilizable and detectable regular linear systems. Hence, as explained at the end of Section 2, their feedback connection is robustly stable with respect to delays if

   lim sup_{|s| → ∞, Re s ≥ 0} |P(s)C(s)M(s)| < 1 .        (3.2)

Since (by (3.1)) |W(∞)| < 1, the left-hand side of (3.2) can be computed to be |P(∞)C(∞)| / (1 − |W(∞)|), so that (3.1) implies (3.2). □

Given a rational proper P, the problem of finding a rational proper C such that conditions (1), (2) and (3) of Theorem 3.1 are satisfied is called the weighted sensitivity H^∞ problem. This has been extensively studied in the recent robust control literature and good algorithms (and programs) for its solution are available, see for example Green and Limebeer [5]. It may happen that the problem has no solution, in which case we should try to modify the filter W. Note that (1.6) is more restrictive than (3.1), but in most practical applications we expect the conditions (1.6) to be satisfied.

The condition |W(∞)| < 1 cannot be eliminated from (3.1) (by allowing |W(∞)| = 1 with P(∞)C(∞) = 0). Indeed, if we had |W(∞)| = 1, then M would have infinitely many poles on the imaginary axis. In this case, according to the comments after Theorem 2.2, PC would not stabilize M robustly with respect to delays.

The next lemma concerns decompositions of signals into steady-state and transient parts, as in (1.7). The reader is requested to recall the notations H^∞_α and L²_α[0, ∞) from Section 2.

Lemma 3.2. Let G ∈ H^∞_α for some α < 0. Assume that u ∈ L²_loc[0, ∞) has the decomposition u = u_ss + u_tr, where u_tr ∈ L²_β[0, ∞) with β ≤ 0, and u_ss is periodic with period τ. If y is the output function corresponding to the input function u and the transfer function G, as in (2.1), then y has a similar decomposition: y = y_ss + y_tr, with y_tr ∈ L²_γ[0, ∞), where γ = max{α, β}, and y_ss periodic with period τ.

Moreover, if the sequences (u_k) and (y_k) are the Fourier coefficients of u_ss and y_ss, i.e., denoting ω = 2π/τ,

   u_ss(t) = Σ_{k∈ℤ} u_k e^{ikωt} ,   y_ss(t) = Σ_{k∈ℤ} y_k e^{ikωt} ,

then

   y_k = G(ikω) u_k .        (3.3)

We remark that the sequences (u_k) and (y_k) are in l², of course, and the two Fourier series appearing above converge in the L²-sense. Note that in the particular case when u_tr = 0, we can take β = α, obtaining that y_tr ∈ L²_α[0, ∞) (this case will be used later).

We do not give the proof of this lemma, only mention the fact that, in spite of this being a purely frequency domain statement, the only way the author knows to prove that y_tr ∈ L²_γ[0, ∞) is by state space methods, invoking a realization of G.

The following theorem concerns the situation when the feedback system shown in Figures 1 and 2 is exponentially stable and the external signals r (reference) and d (disturbance) are periodic with period τ, i.e.,

   r(t) = Σ_{k∈ℤ} r_k e^{ikωt} ,   d(t) = Σ_{k∈ℤ} d_k e^{ikωt} ,        (3.4)

where, as usual, ω = 2π/τ. We assume that these signals are in L²_loc[0, ∞), so that their Fourier coefficients (r_k) and (d_k) are in l². We show that the decomposition (1.7) of the error signal holds and we give a formula for the Fourier coefficients of the steady-state error.

Theorem 3.3. With the notation of Theorem 3.1, assume that the conditions (1), (2) and (3) are satisfied. Suppose that r and d are in L²_loc[0, ∞) and are periodic with period τ, as in (3.4). Then the error e can be decomposed as e = e_ss + e_tr, where e_tr ∈ L²_α[0, ∞) for some α < 0 and e_ss is periodic, with period τ. Let (e_k) be the sequence of Fourier coefficients of e_ss, similarly as in (3.4). Then, denoting S_0 = (1 + PC)^{−1},

   e_k = [(1 − W(ikω)) S_0(ikω) / (1 − W(ikω) S_0(ikω))] · [r_k − P(ikω) d_k] .        (3.5)

Sketch of the proof. By Theorem 3.1, the feedback system of Figure 1 is exponentially stable, so that its sensitivity G = (1 + PCM)^{−1} belongs to H^∞_α, for some α < 0 which is greater than the growth bound of the operator semigroup of the system. If the initial state of the system is zero, then

   ê = G (r̂ − P d̂) ,        (3.6)

so that, according to Lemma 3.2, e can be decomposed into a steady-state and a transient part, as claimed in the theorem. If the initial state is not zero, then it generates an error signal which belongs to L²_α[0, ∞), so that it can be absorbed into e_tr and it does not affect e_ss. A simple computation shows that

   G(s) = (1 − e^{−sτ} W(s)) S_0(s) / (1 − e^{−sτ} W(s) S_0(s)) .

Applying formula (3.3) to this particular situation, we obtain (3.5). □

An important conclusion which we can draw from formula (3.5) is the following: if the Fourier coefficients of r and d are concentrated in a frequency band (i.e., they are very small outside this band) and W(ikω) is very close to 1 in this band, then the coefficients e_k will be very small, meaning that the L²-norm of e_ss over a period will be very small. To illustrate how (3.5) can be used to get estimates, assume that d = 0 and r_k vanishes outside the frequency band |kω| ∈ [ω_l, ω_u]. Denote δ = ‖WS_0‖_∞ and δ_0 = ‖S_0‖_∞, so that δ < 1 by condition (3). From (3.5) and the Parseval equality we get

   ‖e_ss‖_{L²[0,τ]} ≤ (δ_0 / (1 − δ)) · max_{|kω| ∈ [ω_l, ω_u]} |1 − W(ikω)| · ‖r‖_{L²[0,τ]} .
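The sketch below (toy numbers again, consistent with the earlier fragments; the reference and its three harmonics are assumptions) evaluates the Fourier coefficients e_k directly from (3.5) with d = 0 and compares the resulting norm of e_ss with the estimate above.

import numpy as np

tau = 1.0
omega = 2 * np.pi / tau
def P(s):  return 1.0 / (s + 1.0)               # assumed toy plant
def C(s):  return 20.0                          # assumed toy compensator
def W(s):  return 1.0 / (1.0 + s / 1000.0)      # assumed low-pass filter
def S0(s): return 1.0 / (1.0 + P(s) * C(s))     # S_0 = (1 + PC)^{-1}

r_coeffs = {1: 1.0, 2: 0.5, 3: 0.25}            # assumed coefficients r_k, d = 0

e_sq, r_sq = 0.0, 0.0
for k, rk in r_coeffs.items():
    s = 1j * k * omega
    ek = (1.0 - W(s)) * S0(s) / (1.0 - W(s) * S0(s)) * rk     # formula (3.5)
    e_sq += abs(ek) ** 2
    r_sq += abs(rk) ** 2

# the estimate above, with the suprema taken only over the frequencies present
# in r (a valid, if crude, substitute for the H-infinity norms in this case)
grid = 1j * omega * np.array(sorted(r_coeffs))
delta  = np.abs(W(grid) * S0(grid)).max()
delta0 = np.abs(S0(grid)).max()
bound = delta0 / (1.0 - delta) * np.abs(1.0 - W(grid)).max() * np.sqrt(r_sq)
print("l2 norm of (e_k) (Parseval: ||e_ss|| over a period is sqrt(tau) times this):", np.sqrt(e_sq))
print("corresponding right-hand side of the estimate:", bound)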

We want to emphasize the strength of the conclusion e_tr ∈ L²_α[0, ∞) in the last theorem. This implies that

   ∫_t^{t+τ} |e_tr(σ)|² dσ ≤ m e^{2αt} ,

for some m > 0 and for all t ≥ 0. If the derivatives ṙ and ḋ belong to L²_loc[0, ∞), then the theorem can be applied to the derivatives, yielding that ė_tr ∈ L²_α[0, ∞) holds as well. From this we can deduce that e_tr(t) converges to zero at an exponential rate.

4. How to track (or reject) a superposition of two periodic signals


In certain engineering problems, it could be a reasonable assumption that the reference r and/or the disturbance d are superpositions of several periodic functions. For the sake of simplicity, we assume in this paper that there are only two periods involved, τ_1 and τ_2:

   r = r_1 + r_2 ,   r_1(t + τ_1) = r_1(t) ,   r_2(t + τ_2) = r_2(t) ,        (4.1)

and similarly for d = d_1 + d_2. For example, it might happen that r is periodic with period τ_1 and d is periodic with period τ_2, in which case r_2 = d_1 = 0. All these signals are assumed to be in L²_loc[0, ∞). The generalization of our results to any finite number of periods is straightforward.

We are trying to use again the feedback structure from Figure 1, but with a more sophisticated internal model. Since a good internal model should be able to generate functions as in (4.1), it is an obvious guess to try to take M to be the sum of two partial internal models, one of which generates signals with period τ_1 and the other with period τ_2. This guess turns out to be almost correct, but more precisely, the internal model should be a convex combination of models of the type (1.1):

   M = m_1 M_1 + m_2 M_2 ,        (4.2)

where m_1 > 0, m_2 > 0, m_1 + m_2 = 1 and, for j = 1, 2,

   M_j(s) = 1 / (1 − e^{−sτ_j} W(s)) .        (4.3)

For simplicity, we have assumed that the filters used in the partial internal models M_1 and M_2 are identical. Their transfer function W is such that

   ‖W‖_∞ ≤ 1   and   |W(∞)| < 1 .        (4.4)

The above conditions imply that there are only finitely many unstable poles of M_1 and M_2 and these are on the imaginary axis. Moreover, we require W to be such that M_1 and M_2 should not have any common unstable poles. The intuitive explanation of this condition is the following: if the two partial internal models had a common unstable pole iμ, then each of them could generate functions of the form p(t) = p_0 e^{iμt}, where p_0 is a constant. These could cancel each other out at the output of the internal model, so that the compensator C would not see them. Then, assuming r = d = 0, the compensator would not generate any response, u = y = e = 0, and the partial internal models would continue to generate their signals forever, meaning that the feedback system is unstable.

Our assumptions on the plant with the transfer function P and the compensator with the transfer function C are the same as in Section 3: stabilizability and detectability. We use the same realization for the delay lines as in Section 3 and we assume that the two filters with transfer function W are stable and satisfy (4.4). Given such an internal model with a relatively complicated structure, it is somewhat surprising that our stability and robustness result is almost identical to Theorem 3.1:

Theorem 4.1. Assume that the compensator transfer function C satisfies the three conditions listed in Theorem 3.1, W satisfies (4.4) and M_1, M_2 do not have any common unstable poles. Then the feedback system shown in Figure 1, with M given by (4.2) and (4.3), is exponentially stable. If, moreover, the condition |P(∞)C(∞)| < 1 − |W(∞)| holds, then this feedback system is robustly stable with respect to delays.

Note that the assumptions of the theorem imply that W(0) < 1.

Sketch of the proof. First we prove the exponential stability. Like in the proof of Theorem 3.1, by simple transformations we obtain an equivalent system as in Figure 4, but the single delay gets replaced by two delay lines, having as transfer function the 2 × 2 matrix Δ(s) = diag(e^{−sτ_1}, e^{−sτ_2}). The finite-dimensional system which contains everything else is stable (by conditions (1) and (2)) and its transfer function is

   S_e = W ( [1, 0; 0, 1] + PC [m_1, m_2; m_1, m_2] )^{−1} ,

which replaces S = W(1 + PC)^{−1} in Figure 4. We introduce the numbers β_1 = √m_1, β_2 = √m_2 and the matrices

   R = diag(β_2/β_1, 1) ,   V = [β_1, β_2; −β_2, β_1] .

Note that V is unitary, V*V = I. Then

   S_e = W R V* diag((1 + PC)^{−1}, 1) V R^{−1} .

By the matrix-valued version of Proposition 2.1, the feedback system is exponentially stable iff (I − Δ S_e)^{−1} ∈ H^∞. Since RΔ = ΔR, this condition is equivalent to the invertibility in H^∞ of the matrix-valued function

   E = I − Δ V* diag(S, W) V ,

where, as before, S = W(1 + PC)^{−1}. Since ‖Δ‖_∞ = ‖V‖ = ‖V*‖ = 1, it is easy to see, using (4.4) and condition (3), that there is a ρ > 0 such that E^{−1} is uniformly bounded for |s| > ρ and Re s ≥ 0. It remains to check the uniform boundedness of E^{−1} on the compact set C = { s ∈ ℂ | |s| ≤ ρ, Re s ≥ 0 }, which is equivalent to checking that E^{−1} has no poles in C.

Suppose that E^{−1} has a pole p ∈ C; then 1 must be an eigenvalue of I − E(p). Let x_0 ≠ 0 be a corresponding eigenvector; then |S(p)| < 1 implies that |W(p)| = 1 and z_0 = V x_0 is a scalar multiple of [0, 1]^T. Now by a simple argument W(p)Δ(p)x_0 = x_0, which implies that p is a common pole of M_1 and M_2, in contradiction with the assumptions. Thus, E^{−1} has no poles in C, and the feedback system is exponentially stable.

The proof of the robustness part is done similarly as for Theorem 3.1, with a few minor extra points to be considered. □

As in Section 3, we denote by G the sensitivity of the multi-periodic repetitive control system discussed above, i.e., G = (1 + PCM)^{−1}. We have to point out that, although Theorem 4.1 guarantees the exponential stability of the feedback system, so that G ∈ H^∞_α for some α < 0, G might have poles very close to the imaginary axis. Indeed, if p_1 is a pole of M_1, p_2 is a pole of M_2, and p_1 is very close to p_2, then there will be a pole p of G very close to p_1 and p_2. If, moreover, p_1 and p_2 are very close to the imaginary axis, then obviously so will be p.

The design procedure for W and C. We describe how the filter and compensator have to be chosen in order to achieve tracking and disturbance rejection. We assume that the signals r = r_1 + r_2 and d = d_1 + d_2 are as in (4.1). We denote ω_1 = 2π/τ_1 and ω_2 = 2π/τ_2 (these are the two fundamental frequencies).

We assume further that r and d are concentrated in a finite frequency band [ω_l, ω_u], which means that the Fourier coefficients of r_1 and d_1 (see (3.4)) are very small if the index k ∈ ℤ is such that |kω_1| ∉ [ω_l, ω_u], and similarly for r_2 and d_2. We choose W such that (4.4) holds and W(iν) is very close to 1 if |ν| ∈ [ω_l, ω_u]. Then M_1 will have poles very close to ikω_1 if |kω_1| ∈ [ω_l, ω_u], and similarly for M_2. Some of these poles may actually be on the imaginary axis (this can happen where |W(iν)| = 1). If a pole of M_1 on the imaginary axis coincides with a pole of M_2, then we have to modify our choice of W in order to satisfy the assumptions of Theorem 4.1. Now, using H^∞ control theory (and a program package such as the Robust Control Toolbox of MATLAB, a trademark of The MathWorks, Inc.) we design the compensator with transfer function C such that the three conditions of Theorem 3.1 are satisfied. If this is impossible, then again we have to modify our choice of W, possibly by a compromise in quality: W(iν) might get further away from 1, for ν ∈ [ω_l, ω_u].

After the above design procedure has been successfully completed, from the formula of G we see that |G(iν)| is very small if |ν| ∈ [ω_l, ω_u] and ν = kω_1 or ν = kω_2, with k ∈ ℤ. It follows from Lemma 3.2, Theorem 4.1 and superposition that the error signal e can be decomposed as follows: e = e_ss1 + e_ss2 + e_tr, where e_tr ∈ L²_α[0, ∞) for some α < 0, e_ss1 is periodic with period τ_1 and e_ss2 is periodic with period τ_2. (This decomposition is unique iff τ_1/τ_2 is irrational.) Moreover, it follows from (3.3) and (3.6) that e_ss1 and e_ss2 will be very small (as measured by their L²-norms over one period).

Multi-periodic repetitive control as described above would be of questionable value if the gain |G(iν)| could have high peaks for ν lying between the integer multiples of ω_1 and ω_2. Fortunately, this is not the case: good upper bounds for |G(iν)| can be found, for ν ∈ [ω_l, ω_u], in spite of G possibly having poles close to the imaginary axis. For lack of space, we do not discuss these bounds here.
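To see the two-period scheme at work numerically, the following sketch (with made-up plant, compensator, filter and periods; it takes for granted that the hypotheses of Theorem 4.1 hold for these particular choices) builds M = m_1 M_1 + m_2 M_2 as in (4.2) and (4.3), with W(0) < 1 so that M_1 and M_2 share no poles, and scans |G(iν)| over a band: the gain dips near the multiples of both ω_1 and ω_2, and the last line reports the worst-case gain over the scanned band.

import numpy as np

tau1, tau2 = 1.0, np.sqrt(2.0)                 # assumed periods (irrational ratio)
m1, m2 = 0.5, 0.5                              # convex weights, m1 + m2 = 1
def P(s): return 1.0 / (s + 1.0)               # assumed toy plant
def C(s): return 20.0                          # assumed toy compensator
def W(s): return 0.99 / (1.0 + s / 1000.0)     # low-pass with W(0) = 0.99 < 1

def M(s):                                      # internal model (4.2)-(4.3)
    M1 = 1.0 / (1.0 - np.exp(-s * tau1) * W(s))
    M2 = 1.0 / (1.0 - np.exp(-s * tau2) * W(s))
    return m1 * M1 + m2 * M2

def G(s):                                      # multi-periodic sensitivity
    return 1.0 / (1.0 + P(s) * C(s) * M(s))

omega1, omega2 = 2 * np.pi / tau1, 2 * np.pi / tau2
nu = np.linspace(0.05, 40.0, 400000)
gain = np.abs(G(1j * nu))
for label, nk in [("omega1", omega1), ("2*omega1", 2 * omega1),
                  ("omega2", omega2), ("2*omega2", 2 * omega2)]:
    print(f"|G| near {label}: {gain[np.argmin(np.abs(nu - nk))]:.3e}")
print("peak of |G| on the scanned band:", round(gain.max(), 2))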

References
[1] B.A. Francis and W.M. Wonham, The internal model principle for linear multivariable regulators, Appl. Math. Optim. 2 (1975), pp. 170–194.

[2] T. Georgiou and M.C. Smith, w-Stability of feedback systems, Systems & Control Letters 13 (1989), pp. 271–277.

[3] T. Georgiou and M.C. Smith, Graphs, causality and stabilizability: linear, shift-invariant systems on L2[0, ∞), Mathematics of Control, Signals, and Systems 6 (1993), pp. 195–223.

[4] S. Hara, Y. Yamamoto, T. Omata and M. Nakano, Repetitive control system: A new type servo system for periodic exogenous signals, IEEE Trans. Aut. Contr. 33 (1988), pp. 659–668.

[5] M. Green and D.J.N. Limebeer, Linear Robust Control, Prentice-Hall, Englewood Cliffs, NJ, 1995.

[6] T. Inoue, M. Nakano and S. Iwai, High accuracy control of servomechanism for repeated contouring, in Proc. of the 10th Annual Symp. on Incremental Motion Control, Systems and Devices (1981), pp. 258–292.

[7] T. Inoue, M. Nakano, T. Kubo, S. Matsumoto and H. Baba, High accuracy control of a proton synchrotron magnet power supply, in Proc. of the IFAC 8th World Congress (1981), pp. 216–221.

[8] H. Logemann, Stabilization and regulation of infinite-dimensional systems using coprime factorizations, in Analysis and Optimization of Systems: State and Frequency Domain Approaches for Infinite-Dimensional Systems, R.F. Curtain, A. Bensoussan and J.L. Lions, eds., LNCIS vol. 185, Springer-Verlag, Berlin, 1993.

[9] H. Logemann, R. Rebarber and G. Weiss, Conditions for robustness and nonrobustness of the stability of feedback systems with respect to small delays in the feedback loop, SIAM J. Control and Optim. 34 (1996), pp. 572–600.

[10] K.L. Moore, Iterative Learning Control for Deterministic Systems, Adv. in Ind. Control, Springer-Verlag, London, 1993.

[11] R. Rebarber, Conditions for the equivalence of internal and external stability for distributed parameter systems, IEEE Trans. Aut. Contr. 38 (1993), pp. 994–998.

[12] E. Rogers and D.H. Owens, Stability Analysis for Linear Repetitive Processes, LNCIS vol. 175, Springer-Verlag, Berlin, 1992.

[13] D. Salamon, Realization theory in Hilbert space, Mathematical Systems Theory 21 (1989), pp. 147–164.

[14] G. Weiss, Transfer functions of regular linear systems, Part I: Characterizations of regularity, Trans. Amer. Math. Society 342 (1994), pp. 827–854.

[15] G. Weiss, Regular linear systems with feedback, Mathematics of Control, Signals, and Systems 7 (1994), pp. 23–57.

[16] G. Weiss and R.F. Curtain, Dynamic stabilization of regular linear systems, IEEE Trans. Automatic Control, to appear.

[17] Y. Yamamoto, Learning control and related problems in infinite-dimensional systems, in Essays on Control: Perspectives in the Theory and its Applications, H.L. Trentelman and J.C. Willems, eds., Birkhäuser, Boston, 1993, pp. 191–222.

