
LINEAR TIME-INVARIANT

SYSTEMS

MARTIN SCHETZEN
Northeastern University

IEEE
IEEE PRESS

A John Wiley & Sons, Inc., Publication

Copyright © 2003 by The Institute of Electrical and Electronics Engineers. All rights reserved.
Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any
form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise,
except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without
either the prior written permission of the Publisher, or authorization through payment of the
appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers,
MA 01923, 978-750-8400, fax 978-750-4740, or on the web at www.copyright.com. Requests to
the Publisher for permission should be addressed to the Permissions Department, John Wiley &
Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, e-mail:
permreq@wiley.com.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in
preparing this book, they make no representations or warranties with respect to the accuracy or
completeness of the contents of this book and specifically disclaim any implied warranties of
merchantability or fitness for a particular purpose. No warranty may be created or extended by sales
representatives or written sales materials. The advice and strategies contained herein may not be suitable
for your situation. You should consult with a professional where appropriate. Neither the publisher nor
author shall be liable for any loss of profit or any other commercial damages, including but not limited to
special, incidental, consequential, or other damages.

For general information on our other products and services please contact our Customer Care Department
within the U.S. at 877-762-2974, outside the U.S. at 317-572-3993 or fax 317-572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print,
however, may not be available in electronic format.

Library of Congress Cataloging-in-Publication Data is available.

Schetzen, Martin.
Linear time-invariant systems / Martin Schetzen
p. cm.
Includes index.
ISBN 0-471-23145-2 (cloth: alk. paper)

Printed in the United States of America.


10 9 8 7 6 5 4 3 2 1

IEEE Press
445 Hoes Lane, Piscataway, NJ 08855

IEEE Press Editorial Board


Stamatios V Kartalopoulos, Editor in Chief

M. Akay M. E. El-Hawary M. Padgett


J. B. Anderson R. J. Herrick W. D. Reeve
R. J. Baker D. Kirk S. Tewksbury
J. E. Brewer R. Leonardi G. Zobrist
M. S. Newman

Kenneth Moore, Director of IEEE Press


Catherine Faduska, Senior Acquisitions Editor
John Griffin, Acquisitions Editor
Tony VenGraitis, Project Editor

Technical Reviewers
Rik Pintelon, Vrije Universiteit Brussel
Bing Sheu, Nassda Corporation

PREFACE

This is a text on continuous-time systems. In our use, a system is defined as an
object with inputs and outputs. An input is some excitation that results in a system
response or output. System theory is the study of the relations between the system
inputs (or stimuli) and the corresponding system outputs (or responses). Two major
categories of system theory are analysis and synthesis. Analysis is the determination
and study of the system input-output relation; synthesis is the determination of
systems with a desired input-output relation. For analysis, signals are used as
inputs to probe the system, and in synthesis the desired output is expressed as a
desired operation on a class of input signals. Thus signals are an important topic in
system theory. However, this is not a text on signal theory. In signal theory, the
object being studied is a signal and systems are chosen to transform the signal into
some desired waveform. In system theory, on the other hand, the object being
studied is a system, and signals are chosen to be used either as probes of the
system or as the medium used to express system input-output relation. Thus the
discussion of signals in this text will be in terms of their application in system
theory.
System theory lies at the base of science because system theory is the theory of
models, and a basic concern of science is the formation and study of models. In
science, one performs experiments and observes some of the quantities involved. A
model involving these quantities is then constructed such that the relation between
the quantities in the model is the same as that observed. The model is then used to
predict the results of other experiments. The model is considered valid as long as
experimental results agree with the predictions obtained using the model. If they do
not agree, the model is modified so that they do agree. The model is continually
improved by comparing a large variety of experimental results with predictions
obtained using the model and modifying the model so that they agree. One does
not say that the refined model represents reality; rather, one only claims that experi-
ments proceed in accordance with the model. In this sense, science is not directly
concerned with “reality.” The question of reality is addressed in philosophy, not
science. However, there are areas of science and philosophy which do influence each
other. Some of these are briefly mentioned in our discussion of system models.
As an illustration, the electron is a model of an object that has not been observed
directly. In an attempt to predict certain experiments, the electron was first modeled
as a negatively charged body with a given mass which moves about the nucleus of
an atom in certain orbits. This model of the electron helped to predict the results of
many experiments in which the atom is probed with certain inputs such as charged
particles and the output is the observed scattered particles. This model also helped
predict the results of experiments in which the atom is probed with electromagnetic
fields and the output is the observed spectra of the radiation from the atom.
However, to predict the results of other experiments, this model of the electron
had to be modified. The model of the electron has been modified by giving it
spin, a wavelength, and other properties. Does the electron really exist? Science
does not address that question. Science just states that experiments proceed as if
the electron exists.
The modern development of engineering and science requires a deeper under-
standing of the basic concepts of system theory. Consequently, rather than an appli-
cations-oriented presentation, basic concepts and their system interpretation are
emphasized in this text. The chapter problems are to help the reader gain a better
understanding of the concepts presented. To study this text, the student need not
know mathematics beyond basic calculus. Any additional required mathematical
concepts are logically developed as needed in the text. Even so, all the mathematics
used in the text is used with care. Mathematical rigor is not used; that is the province
of the mathematician. However, mathematics is used with precision. For example,
the impulse is not something with zero width and infinite height. The accurate
development of the impulse presented also lends greater insight to its various appli-
cations discussed in the text. The careful discussion and application of mathematics
results in the student having a better appreciation of the role of mathematics and a
more sophisticated understanding of its application in science and engineering.
Linear system theory from a functional viewpoint is logically developed in this text.
Each topic discussed lays the basis and motivation for the next topic. In order that
the development be consistent with a systems orientation, many new results and also
new derivations of classic results from a systems viewpoint are included in this text.
Thus, many topics such as the Fourier and Laplace transforms and their inverse are
not just stated. Rather, I have developed new methods to motivate and derive them
from system concepts that had been developed previously.
The text begins with a discussion of systems in general terms followed by a
discussion and development of the various system classifications in order to motivate
the approach taken in their analysis. The time-domain theory of continuous-time
linear time-invariant (LTI) systems is then developed in some depth. This develop-
ment leads naturally into a discussion of the system transfer function, gain, and
phase shift. This lays the basis for a development of the Fourier transform and its
inverse together with its system theory interpretation and implications such as the
relation between the real and imaginary parts of the system transfer function.
The discussion of the Fourier transform and its inverse motivates the development
of the bilateral Laplace transform and a full discussion of its system interpretation.
One important class of systems which is analyzed is that of passive LTI systems.
Although, as discussed in the text, there is no physical law that requires a system to
be causal, it is shown that a passive LTI system must be causal. Constraints that the
impedance function must satisfy are then obtained and interpreted.
A new approach to the unilateral Laplace transform is presented by which the
bilateral Laplace transform can be used in the transient analysis of LTI systems. The
s-plane viewpoint is then used to discuss basic filter analysis and design techniques.
The discussion of the s-plane viewpoint of systems concludes with the analysis of
feedback systems and their stability, interconnected systems, and block diagram
reduction.
Because system theory is the theory of models and the construction of models is
one of the main objectives of science, a discussion of the consistency of models and
some of the paradoxes in LTI system theory that can arise due to improper modeling
is given. The text concludes with an introductory discussion of the state-variable
approach to system analysis and the types of problems for which this approach is
advantageous. Thus the textual material lays a solid foundation for further study of
system modeling, control theory, filter theory, discrete system theory, state-variable
theory, and other subjects requiring a systems viewpoint.
I thank Prof. John Proakis, who was department chairman during the years I spent
writing this text, for his support and assistance. Also, my heartfelt thanks to my wife,
Jeannine, for her encouragement and help in the tedious job of proofreading.

Brookline, Massachusetts    MARTIN SCHETZEN


October 2002
CONTENTS
Preface ix

1 General System Concepts 1

1.1 The System Viewed as a Mapping 1


1.2 System Analysis Concepts 4
1.3 Time-Invariant (TI) Systems 5
1.4 No-Memory Systems 8
1.5 Simple Systems with Memory 17
1.6 A Model of Echoing 22
Problems 27

2 Linear Time-Invariant (LTI) Systems 33


2.1 Linear Systems 33
2.2 Linear Time-Invariant (LTI) Systems 37
2.3 The Convolution Integral 45
2.4 The Unit-Impulse Sifting Property 48
2.5 Convolution 56
Problems

3 Properties of LTI Systems 63


3.1 Tandem Connection of LTI Systems 63


3.2 A Consequence of the Commutative Property 67


3.3 The Unit Impulse Revisited 74
3.4 Convolution Revisited 77
3.5 Causality 80
3.6 Stability 85
3.7 System Continuity 90
3.8 The Potential Integral 93
Problems 96

4 The Frequency Domain Viewpoint 105


4.1 The Characteristic Function of a Stable LTI System 106
4.2 Sinusoidal Response 109
4.3 Tandem-Connected LTI Systems 118
4.4 Continuous Frequency Representation of a Waveform 123
Problems 128

5 The Fourier Transform 133


5.1 The Fourier Transform 134
5.2 An Example of a Fourier Transform Calculation 135
5.3 Even and Odd Functions 136
5.4 An Example of an Inverse Fourier Transform Calculation 137
5.5 Some Properties of the Fourier Transform 139
5.6 An Application of the Convolution Property 146
5.7 An Application of the Time- and Frequency-Shift Properties 147
5.8 An Application of the Time-Differentiation Property 149
5.9 An Application of the Scaling Property 152
5.10 A Parseval Relation and Applications 154
5.11 Transfer Function Constraints 161
Problems 170

6 The Bilateral Laplace Transform 175


6.1 The Bilateral Laplace Transform 175
6.2 Some Properties of the RAC 187
6.3 Some Properties of the Bilateral Laplace Transform 191
Problems 203

7 The Inverse Bilateral Laplace Transform 209


7.1 The Inverse Laplace Transform 209
7.2 The Linearity Property of the Inverse Laplace Transform 213
7.3 The Partial Fraction Expansion 216
7.4 Concluding Discussion and Summary 223
Problems 227

8 Laplace Transform Analysis of the System Output 231


8.1 The Laplace Transform of the System Output 231
8.2 Causality and Stability in the s-Domain 237
8.3 Lumped Parameter Systems 241
8.4 Passive Systems 245
8.5 The Differential Equation View of Lumped Parameter Systems 252
Problems 260

9 S-Plane View of Gain and Phase Shift 265


9.1 Geometric View of Gain and Phase Shift 265
9.2 The Pole-Zero Pair 272
9.3 Minimum-Phase System Functions 279
9.4 Bandpass System Functions 286
9.5 Algebraic Determination of the System Function 291
Problems 299

10 Interconnection of Systems 305


10.1 Basic LTI System Interconnections 305
10.2 Analysis of the Feedback System 308
10.3 The Routh-Hurwitz Criterion 317
10.4 System Block Diagrams 328
10.5 Model Consistency 336
10.6 The State-Space Approach 343
Problems 350

Appendix A A Primer on Complex Numbers and Complex Algebra 353
Appendix B Energy Distribution in Transient Functions 363
Index 367
CHAPTER 1

GENERAL SYSTEM CONCEPTS

1.1 THE SYSTEM VIEWED AS A MAPPING

In scientific usage, a system is defined as an assemblage of interacting things. The
interaction of these things is described in terms of certain parameters such as
voltage, current, force, velocity, temperature, and chemical concentration. For exam-
ple, the solar system is a system in which the things are the sun, planets, asteroids,
and comets, which interact due to the forces that are associated with their gravita-
tional fields. An electronic circuit is another example of a system. The things in an
electronic circuit are the resistors, capacitors, inductors, transistors, and so on, that
interact due to their currents and voltages. The solar system is said to be an auton-
omous system because there is no external input driving the system. On the other
hand, an electronic circuit such as an audio amplifier is a nonautonomous system
because there is an input, the input audio voltage, that drives the system to
produce the amplified output audio voltage.
An input of a nonautonomous system is an independent source of a parameter. A
nonautonomous system output is an input-dependent parameter that is desired to be
known. Figure 1.1-1 is a general schematic representation of a nonautonomous
system with several inputs and several outputs.
For simplicity, we shall restrict our discussion in this text to nonautonomous
single-input single-output (SISO) systems. For our development then, a system
can be defined in a mathematical sense as a rule by which an input x is transformed
into an output y. The rule can be expressed symbolically as

y = T[x]    (1.1-1)


Fig. 1.1-1 Schematic of a system.

The output and the input usually are functions of an independent variable such as
position or time. If they are only functions of time t, Eq. (1.1-1) is written in the form

y(t) = T[x(t)]    (1.1-2)

and represented schematically as shown in Fig. 1.1-2.


As an example, in the study of the time variation of the pupillary diameter, d(t),
of a person's eye as a function of light intensity, i(t), the system considered is one
with the output d(t) for the input i(t) so that

d(t) = T[i(t)]    (1.1-3)

This is a symbolic representation of the rule that governs the changes in the diameter
of the eye's pupil due to changes in the light intensity impinging on the eye.
As another example, to study changes in the intensity of light, i(t), from a light
bulb due to changes in the voltage, u(t), impressed across its filament, the system is
one in which the input is the filament voltage, which is a function of time, and the
output is the light intensity, which also is a function of time, so that

i(t) = T[u(t)]    (1.1-4)

Although our main consideration in this text will be of nonautonomous systems
with inputs and outputs that are functions of time, it should be noted that they need
not be functions of time. For example, in electrostatics the potential field φ(p),
which is a function of position, p, can be considered as the output of a system
with the input being the charge density distribution ρ(p) and expressed as

φ(p) = T[ρ(p)]    (1.1-5)

In this system, both the input charge density and the output potential field are
functions only of position, p. Another example in which the input and output of a
system are functions only of position is a system used to study the deflection of
beams. In such a system, the input is the force on the beam, f(p), which is a function
of the position along the beam at which the force is applied, and the output is the

beam deflection d(p), which also is a function of position along the beam. Thus the
relation can be expressed as

d(p) = T[f(p)]    (1.1-6)

Fig. 1.1-2 Schematic of a SISO system.

In Eq. (1.1-2), T is called an operator because the output function y(t) can be
viewed as being produced by an operation on the input function x(t). The statement
that y(t) is the response of a system to the input x(t) means that there exists an
operator T or, equivalently, a rule by which a given output time function y(t) is
obtained from a given input time function x(t).
Another equivalent way of thinking about a system is to consider the input x ( t )
being mapped into the output y(t). This viewpoint can be conceptualized as shown in
Fig. 1.1-3. In the figure, the set of all the possible inputs is denoted by X and the set
of all possible outputs is denoted by Y. As illustrated, the input time functions
denoted by x1 and x2 are both mapped into the same output time function denoted
by y1. Note that there can be only one time function resulting from a rule being
applied to a given input. Thus if the relation between the system input and output is
y(t) = x²(t), the rule is that the output value at any instant of time, t0, is equal to the
square of the input value at the same time t0. Clearly, for this system there is only
one output for any given input. However, note that many different inputs can
produce the same output. For example, consider the input to be a waveform that
jumps back and forth between +2 and -2. Irrespective of the times at which the
jumps take place, the output will be +4 so that for this system, there are an infinite
number of input waveforms that produce the same output waveform. On the other
hand, an example of a relation that cannot be modeled as a system is one in which
y(t) = [x(t)]^(1/2). This cannot be modeled as a system because, in taking the square
root of x(t), there is no general rule by which the correct sign of y(t) can be known
because if x(t0) = 4, is y(t0) = +2 or is y(t0) = -2? However, if the rule is that the
output is the positive square root of the input, then there is only one possible output
for any given input and the relation can be modeled as a system. In terms of the
mapping concept, a system is said to be a many-to-one mapping because many
different inputs can result in a particular output, but a given input cannot result in
more than one output. Our discussion in this text will center on SISO systems with
inputs and outputs that are functions of time.
The inverse of a given system is one that undoes the mapping of the given system
as shown in Fig. 1.1-4. Note that the system inverse is a system, so that its operator

Fig. 1.1-3 The mapping of the operator T.

Fig. 1.1-4 Schematic of a system inverse.

must be a many-to-one mapping of the set of its inputs, Y, to the set of its outputs,
X. This means that if a system inverse is to exist, the operator of the system must be
a one-to-one mapping of the set of its inputs, X, to the set of its outputs, Y. As an
example, if the system mapping were one as shown in Fig. 1.1-3, there would be no
rule for the system inverse to map its input y1, because the correct output, x1 or x2,
could not be determined. We thus conclude that a system inverse exists if and only if
the system operator is a one-to-one mapping. Clearly, the inverse of a square-law
device for which y(t) = x²(t) and x(t) is any waveform does not exist. However, if
the set of inputs, X, is restricted to time functions that are never negative so that
x(t) ≥ 0, then the mapping is one-to-one and a system inverse does exist relative to
that restricted class of inputs.
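These mapping ideas are easy to experiment with numerically. The short Python sketch below is an illustration only (it assumes NumPy is available and represents waveforms as sampled arrays); it shows the square-law rule acting as a many-to-one mapping, with an inverse that exists only on the restricted class of nonnegative inputs.

    import numpy as np

    # A system is a rule T mapping an input waveform x(t) into an output y(t).
    # Here a waveform is represented by an array of samples.

    def square_law(x):
        # The rule y(t) = x(t)**2 : a many-to-one mapping
        return x ** 2

    def square_law_inverse(y):
        # Exists only relative to the restricted class of inputs with x(t) >= 0
        return np.sqrt(y)

    x1 = np.array([2.0, 2.0, 2.0, 2.0, 2.0])      # constant +2
    x2 = np.array([2.0, -2.0, 2.0, -2.0, 2.0])    # jumps between +2 and -2

    # Both inputs are mapped into the same output waveform (+4 everywhere)
    print(np.allclose(square_law(x1), square_law(x2)))       # True

    # Restricted to nonnegative inputs, the inverse undoes the mapping
    x_nonneg = np.array([0.0, 1.0, 2.5, 4.0, 0.5])
    print(np.allclose(square_law_inverse(square_law(x_nonneg)), x_nonneg))  # True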

1.2 SYSTEM ANALYSIS CONCEPTS

System analysis is the determination of the rule T of a given system. If nothing is
known a priori about the given system, then one could only perform a series of
experiments on the system from which a list of various inputs and their correspond-
ing outputs is made. The difficulty with this is manifold. First, it can be shown that it
is theoretically impossible to make a listing of all possible inputs not because of
human frailty but because there are more possible input time functions than it is
theoretically possible to list. A mathematician would say that the set of possible
inputs is not listable or, equivalently, not countable. To circumvent this problem, one
might attempt to make a list of just a judicious selection of possible inputs and their
corresponding outputs. However, such a listing would not characterize the mapping
because the system output due to an input that is not on the list would not be known.
Even if an input were close to one on the list (that is, they are approximately equal),
it could not be concluded that the two corresponding outputs were close to each
other. For example, it might be that the input, as opposed to the one on the list,
caused a relay in the system to temporarily open and thereby result in an entirely
different output. If it were known for the given system that inputs that are close to
each other result in outputs that are close to each other (such systems are called
continuous systems), then such a listing could be used to obtain the approximate
system response to any input that is close to one on the list. We thus note that,
without more knowledge of the system operator, T, such a listing is relatively
useless. Aside from these problems, a listing would not result in a comprehensive
knowledge of the rule T because it would be an onerous, if not impossible, task to
deduce the rule T just from examining a collection of waveforms.
We thus conclude that to do any meaningful system analysis, some a priori
knowledge of the system operator must be known. It is a common problem in
scientific studies that some a priori knowledge of an object is required in order to
analyze the object. For example, in communication theory, a basic problem is the
extraction of a signal from a noisy version of it. If nothing were known a priori
about the signal or the noise, then nothing can be done because there would be no
known characteristics of the signal or the noise that could be used to extract the
signal from its noise environment. On the other hand, if everything about the signal
were known, then there is no need to extract the signal because it is known. Thus, a
basic problem in communication theory is to determine what is reasonable to know
about the signal and the noise that would be useful in extracting the signal. In system
theory too, we are faced with the problem of determining what is reasonable to know
that would be useful for system analysis. For this, systems are classified based on
certain characteristics that are reasonable to know. Some of the system classifications
that have been found useful are presented and used in this text.

1.3 TIME-INVARIANT (TI) SYSTEMS

One characteristic that has been found useful is time-invariance. A time-invariant
(TI) system is one in which the rule T by which the system output is obtained from
the system input does not vary with time. For many mechanical and electrical
systems it is reasonable to know whether the system is TI. For example, one can
consider an electrical resistor as a system with the input being the current i(t)
through the resistor and the output being the voltage e(t) across the resistor. The
rule then is e(t) = r·i(t) in which r is the value of the resistance. This system is TI if
r does not vary with time. To be realistic, the value r of any physical resistor will
vary with time because physical objects do age and the resistance value, after some
time that could be many years, will be significantly different. However, over any
reasonable interval of time, the value of r can be considered to be constant. We
similarly model many physical systems as being TI if their rules, T, do not change
over any interval of interest.
Note that in making a model of a physical system, we are not trying to represent it
exactly because the best model of any physical system is the physical system itself.
Rather, the attempt is to make as simple a model as possible for which the difference
between its output and that of the physical system being modeled is acceptably small
for the class of inputs of interest. The model then serves as a basis not only for
calculating responses of the physical system, but also to gain a deeper understanding
of the physical system behavior.
A difficulty with the above-given definition of a TI system is that it is not an
operational definition. An operational definition is one that specifies an experimental
procedure. Thus, an operational definition of a TI system is one which specifies an
experimental procedure by which one can determine whether the system is TI. The
only meaningful definitions in science are operational ones because, at its base,
science is concerned with experimental procedures and results.

To develop an operational definition of time invariance, we first note that if a
system is TI, then for any value of t0 we have

y(t - t0) = T[x(t - t0)]    (1.3-1)

This equation states that if the rule does not change with time, then the response to
x(t) shifted by to seconds must be the output y(t) also shifted by the same amount, to.
It is clear that if Eq. (1.3-1) is satisfied for a given system no matter what the input
x(t) or value of to used, then the system is TI. We thus can state an operational
definition of time invariance: A system is time-invariant (TI) ifEq. (1.3-1) is satisfied
for any input, x(t), and any time shift, to.
To illustrate this operational definition, consider the resistor network shown in
Fig. 1.3-1. As shown, the resistance of each resistor varies with time. The system
input is x(t), which is an applied voltage, and the system output y ( t ) is the voltage
across the resistor rb(t). The relation between the system output y(t) and its input x(t)
is

y(t) = [rb(t)/(ra(t) + rb(t))] x(t)    (1.3-2)

This is the rule by which the system response is obtained from its input. Clearly, this
system is TI if the resistor values do not vary with time. However, is it possible for
the resistor values to vary with time and yet the system be TI? We shall use the
operational definition of TI to answer this question. For this, we must show that the
system is TI if the system response to x(t - to) is equal to y(t - to) for any x(t) and
for any value of to. This can be viewed schematically as shown in Fig. 1.3-2. The box
that is labeled ( t - to) represents an ideal delay system for which the output is its
input delayed by to seconds. The output z(t) of the top block diagram is the system
response to the input x(t - to), while the output of the bottom block diagram is
y(t - to). To show that the system is TI, we must show that z(t) = y(t - to) for any
input x(t) and for any delay to.
For our illustrative example, we have from Eq. (1.3-2) that the system response to
the input x(t - to) is

z(t) = [rb(t)/(ra(t) + rb(t))] x(t - t0)    (1.3-3)

Fig. 1.3-1 A resistor network.



Fig. 1.3-2 Illustrating TI determination.

On the other hand,

y(t - t0) = [rb(t - t0)/(ra(t - t0) + rb(t - t0))] x(t - t0)    (1.3-4)

To understand the difference between these two equations, note that Eq. (1.3-3) is
obtained by only shifting the input x(t) by to seconds whereas Eq. (1.3-4) is obtained
by shifting the input x(t) and the system rule T by t0 seconds in accordance with Fig.
1.3-2. The system is TI if z(t) = y(t - t0) for any input x(t) and for any value of t0.
From Eqs. (1.3-3) and (1.3-4) we observe that this is true for any x(t) only if

rb(t)/(ra(t) + rb(t)) = rb(t - t0)/(ra(t - t0) + rb(t - t0))    (1.3-5)

for any value of t0. Our analysis is now made easier by considering the reciprocal of
each side of Eq. (1.3-5):

ra(t)/rb(t) + 1 = ra(t - t0)/rb(t - t0) + 1    (1.3-6)

Equation (1.3-6) is satisfied for any value of t0 only if

ra(t)/rb(t) = c    (1.3-7)

in which c is a constant. To see this, assume that the values of the ratio in Eq. (1.3-7)
at the times t = t1 and t = t2 differ. If this were so, then Eq. (1.3-6) would not be
satisfied for t = t1 and t0 = t1 - t2.
We thus note that the system of our example is TI only if Eq. (1.3-7) is satisfied. If
Eq. (1.3-7) is satisfied, then, from Eq. (1.3-2), the relation between the system input
and output can be expressed as

y(t) = [1/(c + 1)] x(t)    (1.3-8)

The fact that the system is TI is easily seen because it is clear from Eq. (1.3-8) that
the rule by which y ( t ) is obtained from x ( t ) does not vary with time. Note that a
system can be TI even though the elements of which the system is composed vary

with time. To be a TI system just means that the rule by which the output is obtained
from the input does not vary with time. A circuit in which one or more elements vary
with time is called a time-varying circuit. However, as we observe from our example,
the circuit can be a time-varying circuit while the system defined from the circuit is
time-invariant. Care must be taken not to confuse circuit theory and system theory.
Note further that if the system output were defined as the current i(t) through the
resistors in Fig. 1.3-1 instead of the voltage y(t), then the relation between the input
x(t) and output i(t) would be

i(t) = x(t)/(ra(t) + rb(t))    (1.3-9)

Such a system would be TI only if ra(t) + rb(t) = constant. Thus the system with the
output y(t) can be time-invariant, while the system with the output i(t) is time-
varying. We thus observe that whether the defined system is time-invariant depends
on what is defined as the system input and the system output. This is so because it is
only the rule by which inputs are mapped into outputs that determines whether the
system is TI, and the rule depends on what is called the input and what is called the
output.
We shall be concerned almost exclusively with time-invariant systems in this text.
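The operational definition of Eq. (1.3-1) can also be applied numerically to the resistor network example. The sketch below is a minimal Python illustration (assuming NumPy; the sample spacing and the particular resistor functions are arbitrary choices, not from the text): it compares the response to a shifted input with the shifted response for one choice of ra(t) and rb(t) whose ratio is constant and one whose ratio varies with time.

    import numpy as np

    dt = 0.001
    t = np.arange(0.0, 10.0, dt)
    t0 = 2.0                          # time shift in seconds
    n0 = int(round(t0 / dt))          # shift in samples

    def divider(x, ra, rb):
        # Rule of Eq. (1.3-2): y(t) = [rb(t) / (ra(t) + rb(t))] x(t)
        return rb(t) / (ra(t) + rb(t)) * x

    def shift(x, n):
        # Delay a sampled waveform by n samples (zeros shifted in)
        return np.concatenate([np.zeros(n), x[:-n]])

    def is_time_invariant(system, x):
        z = system(shift(x, n0))              # response to x(t - t0)
        y_shifted = shift(system(x), n0)      # y(t - t0)
        return np.allclose(z[n0:], y_shifted[n0:])

    x = np.sin(np.pi * t) * np.exp(-0.1 * t)  # an arbitrary test input

    # ra(t)/rb(t) = 2 for all t: Eq. (1.3-7) is satisfied, so the system is TI
    const_ratio = lambda x: divider(x, ra=lambda t: 2.0 * (1 + 0.3 * np.sin(t)),
                                       rb=lambda t: 1.0 + 0.3 * np.sin(t))
    # ra(t)/rb(t) varies with time: the system is time-varying
    varying_ratio = lambda x: divider(x, ra=lambda t: 2.0 + np.sin(t),
                                         rb=lambda t: np.ones_like(t))

    print(is_time_invariant(const_ratio, x))     # True
    print(is_time_invariant(varying_ratio, x))   # False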

1.4 NO-MEMORY SYSTEMS

Another characteristic that is reasonable to know is whether a given system is a no-
memory system. A no-memory system is defined as one for which the output value at
any time t0, y(t0), depends only on the input value at the same time, x(t0). Thus a
square-law device in which y(t) = x²(t) is a time-invariant no-memory system. On
the other hand, the system with the output y(t) = x²(t - 3), which is a square-law
device with a 3-second delay, is a time-invariant system with memory. It is not a no-
memory system because the output at any given time, to, depends on the input at the
time t = to - 3, which is 3 seconds before to and not at the time to.
We shall discuss only time-invariant no-memory systems in this section. Because
the rule T does not vary with time for such systems, the output amplitude at any time
is a given function only of the input amplitude at the same time so that we can
express the system input-output relation in the form

y = f(x)    (1.4-1)

in which x is the input amplitude and y is the output amplitude. Equation (1.4-1) is
called the transfer characteristic of the no-memory system. The function f in Eq.
(1.4-1) is a rule by which an amplitude x is mapped into an amplitude y. As such it
must be, as discussed in Section 1.1, a many-to-one mapping. The procedure for
determining the response y(t) of a no-memory system simply is to determine, at each
time instant, the output amplitude from the input amplitude at that instant in accor-
dance with Eq. (1.4-1). There are several types of no-memory systems of importance
which are discussed below. Also included in this discussion is the definition of some
notation and basic functions of importance for our subsequent discussions.

1.4A The Ideal Amplifier


The ideal amplifier is a no-memory system for which the output is K times its input.
The constant K is called the gain of the ideal amplifier. The relation between the
output and the input thus is

y(t) = Kx(t)    (1.4-2)

and the transfer characteristic of the ideal amplifier is

y = Kx    (1.4-3)

The graph of this relation is simply a straight line with a slope of K as shown in
Fig. 1.4-1.
In system representations, the ideal amplifier is represented by either of the block
diagrams shown in Fig. 1.4-2.

1.4B The Half-Wave Rectifier


The half-wave rectifier is a no-memory system for which the transfer characteristic is

y = { Kx if x ≥ 0;  0 if x < 0 }    (1.4-4)

A graph of this relation is shown in Fig. 1.4-3.


Equation (1.4-4) can be written more compactly in the form

y = Kxu(x)    (1.4-5)

Fig. 1.4-1 Ideal amplifier transfer characteristic.


10 GENERAL SYSTEM CONCEPTS

Fig. 1.4-2 Block diagrams of an ideal amplifier.

in which

u(x) = { 0 if x < 0;  1/2 if x = 0;  1 if x > 0 }    (1.4-6)

is called the unit step function.¹ We shall find the function u(x) to be very useful in
our study of systems.
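For later reference, the unit step with the convention u(0) = 1/2 and the half-wave rectifier rule y = Kxu(x) of Eq. (1.4-5) can be written in a few lines of Python (a hypothetical helper sketch, assuming NumPy is available).

    import numpy as np

    def u(x):
        # Unit step of Eq. (1.4-6) with the convention u(0) = 1/2
        return np.where(x > 0, 1.0, np.where(x < 0, 0.0, 0.5))

    def half_wave_rectifier(x, K=1.0):
        # Eq. (1.4-5): y = K x u(x); the negative half of the input is eliminated
        return K * x * u(x)

    x = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
    print(u(x))                            # [0.  0.  0.5 1.  1. ]
    print(half_wave_rectifier(x, K=2.0))   # zero for x <= 0, K*x for x > 0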
As an illustration of the half-wave rectifier operation, we determine the output y(t)
when the input x(t) is a sinusoid. The sinusoid is a waveform given by Eq. (1.4-7)
and illustrated in Fig. 1.4-4.

x(t) = A sin(ωt)    (1.4-7)

This is a periodic waveform with a fundamental period equal to T.


A periodic waveform is one that repeats itself so that some time shift of the
waveform results in the same waveform. That is, there are values of τ for which

x(t + τ) = x(t)    (1.4-8)

Note that a periodic waveform must extend from t = -∞ to t = ∞ because other-
wise no shift of x(t) will result in the same time function. The positive values of τ for
which Eq. (1.4-8) is satisfied are called periods of x(t); the smallest positive value of
τ for which Eq. (1.4-8) is satisfied is called the fundamental period of the waveform
x(t). In Fig. 1.4-4, the fundamental period of the sinusoid is T, while 2T, 3T, ... are
simply periods of the sinusoid.
The value of T for the sinusoid can be determined by substituting Eq. (1.4-7) into
Eq. (1.4-8) to obtain

A sin[ω(t + τ)] = A sin(ωt)    (1.4-9)

Fig. 1.4-3 Half-wave rectifier characteristic.

¹Some texts define u(0) = 1; others define u(0) = 0. I've defined u(0) = 1/2 not to be different but rather
for consistency in system theory. A reason for my choice will be given in Section 2.4.

Fig. 1.4-4 Graph of Eq. 1.4-7.

Since sin(φ + 2πn) = sin(φ), in which n is an integer, this equation can be satisfied
only if, for all values of t,

ωt + ωτ = ωt + 2πn

or

ωτ = 2πn,  n = ±1, ±2, ±3, ...    (1.4-10)

For the sinusoid, we thus obtain that the values of τ for which Eq. (1.4-9) is satisfied
are

τ = 2πn/ω,  n = ±1, ±2, ±3, ...    (1.4-11)

Now, the radian frequency ω can be expressed as

ω = 2πf    (1.4-12)

in which f is the frequency in hertz (which is cycles per second). Note that because
ω is the radian frequency in radians per second, the factor 2π must have the dimen-
sions of radians per cycle. It is not dimensionless! Substituting Eq. (1.4-12) in Eq.
(1.4-11), we obtain

τ = n/f,  n = ±1, ±2, ±3, ...    (1.4-13)

The smallest positive value of τ is the fundamental period so that the fundamental
period of the sinusoid is

T = 1/f    (1.4-14)
Another important relation concerning the sinusoid is that between a time shift
and a phase shift. An expression for the sinusoid delayed by to seconds is, from Eq.
(1.4-7),

x(t - t0) = A sin[ω(t - t0)]    (1.4-15)



This also can be expressed in the form of a phase shift as

x(t - t0) = A sin[ωt + θ]    (1.4-16)

in which the phase shift is equal to

θ = -ωt0    (1.4-17)

We thus note that for a sinusoid a delay of t0 seconds is equivalent to a phase shift of
-ωt0 radians. Later in this text we shall express certain waveforms, x(t), as a linear
combination of sinusoids:

x(t) = Σn Cn sin(ωn t + φn)    (1.4-18)

Then, the expression for x(t) delayed by t0 seconds is

x(t - t0) = Σn Cn sin(ωn t + φn + θn)    (1.4-19)

in which

θn = -ωn t0    (1.4-20)

A graph of the sinusoidal phase shift versus frequency, ω, is thus a straight line
passing through the origin with a slope equal to -t0, which is the time shift of x(t).
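The equivalence between a time shift and a linear phase shift can be checked directly; the following minimal Python sketch (assuming NumPy, with arbitrary illustrative values of A, t0, and the frequencies) verifies Eq. (1.4-16) with θ = -ωt0 at several frequencies.

    import numpy as np

    A, t0 = 1.5, 0.01                  # amplitude and delay (seconds), chosen arbitrarily
    t = np.linspace(0.0, 1.0, 10000)

    for f in [5.0, 10.0, 20.0]:        # frequencies in hertz
        w = 2 * np.pi * f              # radian frequency, Eq. (1.4-12)
        delayed = A * np.sin(w * (t - t0))          # Eq. (1.4-15): delay by t0
        phase_shifted = A * np.sin(w * t - w * t0)  # Eq. (1.4-16) with theta = -w*t0
        # The two are the same waveform, and the phase shift -w*t0 grows linearly with w
        print(f, -w * t0, np.allclose(delayed, phase_shifted))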
We now determine the output, y(t), of the half-wave rectifier for a sinusoidal
input, x(t). In accordance with Eq. (1.4-4), it is

y(t) = { Kx(t) if x(t) ≥ 0;  0 if x(t) < 0 }    (1.4-21)

Thus, for the sinusoidal input given by Eq. (1.4-7) the output is

y(t) = { KA sin(ωt) if sin(ωt) ≥ 0;  0 if sin(ωt) < 0 }    (1.4-22)

Figure 1.4-5 is a graph of Eq. (1.4-22). This no-memory system is called a half-wave
rectifier because, as seen from Fig. 1.4-4, the output is the input with the negative
half eliminated.

Fig. 1.4-5 Graph of Eq. 1.4-22.

1.4C The Full-Wave Rectifier


The full-wave rectifier is a no-memory system for which the system transfer char-
acteristic is

y = { Kx if x ≥ 0;  -Kx if x < 0 }    (1.4-23)

A graph of this relation is shown in Fig. 1.4-6.


Note that the transfer characteristic of the full-wave rectifier also can be expressed
in the form

y = K|x|    (1.4-24)

In Eq. (1.4-24), the vertical bars indicate the absolute value of x so that

|x| = { -x if x < 0;  x if x ≥ 0 }    (1.4-25)

In essence then, the output of the full-wave rectifier is K times the magnitude of the
input. For the sinusoidal input x(t) given by Eq. (1.4-7), the full-wave rectifier output
is

y(t) = K|A sin(ωt)| = K|A| |sin(ωt)|    (1.4-26)

A graph of y ( t ) is shown in Fig. 1.4-7.


This no-memory system is called a full-wave rectifier because both the positive
and negative portions of the input waveform have been rectified (i.e., made to be the
same sign) as shown in Fig. 1.4-6. For a sinusoidal input with a fundamental period
of T, note that the fundamental period of the full-wave rectifier output is T/2 while that of
the half-wave rectifier output is T . Also note that for the same sinusoidal input, the
average value of the full-wave rectifier output is twice that of the half-wave rectifier
output.
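These closing observations can be checked numerically. The sketch below is a Python illustration (assuming NumPy; the gain, amplitude, and frequency are arbitrary): it samples one period of the sinusoid, forms the half-wave and full-wave outputs, and compares the period of the full-wave output and the two average values.

    import numpy as np

    K, A, f = 1.0, 2.0, 50.0                   # gain, amplitude, frequency (Hz)
    T = 1.0 / f                                # fundamental period, Eq. (1.4-14)
    N = 100000
    t = np.linspace(0.0, T, N, endpoint=False) # one period of the input
    x = A * np.sin(2 * np.pi * f * t)

    half_wave = K * np.where(x > 0, x, 0.0)    # Eq. (1.4-4)
    full_wave = K * np.abs(x)                  # Eq. (1.4-24)

    # The full-wave output repeats every T/2 ...
    print(np.allclose(full_wave[:N // 2], full_wave[N // 2:]))      # True
    # ... and its average value is twice that of the half-wave output
    print(round(full_wave.mean() / half_wave.mean(), 3))            # 2.0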

Fig. 1.4-6 Full-wave rectifier characteristic.


Fig. 1.4-7 Graph of Eq. 1.4-26.

1.4D The Hard-Limiter


The hard-limiter is a no-memory device for which the system transfer characteristic
is

y = { -y0 if x < 0;  0 if x = 0;  y0 if x > 0 }    (1.4-27)

A graph of this relation is shown in Fig. 1.4-8.


The system transfer characteristic can also be expressed in the form

y = y0 sgn(x)    (1.4-28)

in which sgn is called the Signum function and is defined as

sgn(x) = { -1 if x < 0;  1 if x > 0 }    (1.4-29)

This no-memory system is called a hard-limiter because the output is limited to two
nonzero values. Thus the output y(t) for an input x(t) is

y(t) = { -y0 when x(t) < 0;  0 when x(t) = 0;  y0 when x(t) > 0 }    (1.4-30)

For example, if the input x(t) is the sinusoid given by Eq. (1.4-7), the corresponding
output is as shown in Fig. 1.4-9.

Fig. 1.4-8 Hard-limiter characteristic.

Fig. 1.4-9 Sinusoidal response of the hard limiter.

It is seen that the sinusoidal response of the hard-limiter is a square-wave
with the same fundamental period as that of the input sinusoid. This result is some-
times used to generate a square-wave with a desired fundamental period.
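A minimal Python sketch of this square-wave generation (assuming NumPy; the limiting level, amplitude, and frequency are arbitrary illustrative values) is the following.

    import numpy as np

    y0, A, f = 5.0, 2.0, 10.0             # limiting level, input amplitude, frequency (Hz)
    t = np.linspace(0.0, 0.3, 3000, endpoint=False)   # three periods of the input

    x = A * np.sin(2 * np.pi * f * t)     # sinusoidal input, Eq. (1.4-7)
    y = y0 * np.sign(x)                   # hard-limiter output y0*sgn(x(t)), Eq. (1.4-30)

    # The output takes only the values -y0, 0 (at the zero crossings), and +y0,
    # and it repeats with the same fundamental period T = 1/f as the input.
    print(sorted({float(v) for v in np.round(y, 6)}))   # [-5.0, 0.0, 5.0]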

1.4E The Soft-Limiter


The soft-limiter is used to approximately model the characteristics of many elec-
tronic amplifiers. It is a no-memory system with the transfer characteristic shown in
Fig. 1.4-10 and is given by Eq. (1.4-31):

y = { -y0 for x ≤ -x0;  (y0/x0)x for -x0 < x < x0;  y0 for x ≥ x0 }    (1.4-31)

We’ll illustrate the soft-limiter operation by determining its output y(t) for the
sinusoidal input x(t) given by Eq. (1.4-7). First consider the case for which |A| < x0.
Note that then |x(t)| < x0 so that the input amplitude traverses only the central
portion of the soft-limiter characteristic for which

y(t) = (y0/x0)x(t) = (y0/x0)A sin(ωt)    (1.4-32)

We thus note that if |x(t)| < x0, then the soft-limiter can be modeled as an ideal
amplifier with a gain K = y0/x0. Note that the gain is equal to the slope of the
transfer characteristic.

Fig. 1.4-10 Soft-limiter characteristic.



For input amplitudes larger than xo, limiting occurs. To examine its effect, again
consider the input to be the sinusoidal waveform given by Eq. (1.4-7) but with
|A| > x0. For this input, the resulting output waveform y(t) is shown in Fig. 1.4-11.
The output y(t) is equal to y0 whenever x(t) > x0 so that, for example, we note
that y(t) = y0 for t1 < t < t2 and y(t) = -y0 for t3 < t < t4. In between these times
|x(t)| < x0 so that y(t) in these intervals is given by Eq. (1.4-32). To complete the
determination of y(t), the times t1, t2, t3, and t4 must be determined. First, the time t1
is the time at which y(t) first has the value y0 so that this must be the time x(t) first
has the value x0. Consequently, x(t1) = x0. Substituting the expression for x(t) given
by Eq. (1.4-7), we then have

x(t1) = A sin(ωt1) = x0    (1.4-33)

The solution of this equation for t1 is

t1 = (1/ω) sin⁻¹(x0/A)    (1.4-34)

A solution of this equation always exists because x0/|A| < 1. Once t1 is known, the
other times are easily determined by using the symmetry of the sinusoid. For exam-
ple, note that the time interval between t = 0 and t = t1 is equal to the time interval
between t = t2 and the time of the first zero crossing to its right, which is at t = T/2,
where T is the fundamental period of the sinusoid. Thus

t1 = (1/2)T - t2    (1.4-35)

so that

t2 = (1/2)T - t1    (1.4-36)

In a similar manner,

t3 = (1/2)T + t1  and  t4 = T - t1    (1.4-37)

All the transition times of y(t) can be determined in a similar manner. From this, the
length, L, of a clipped interval can be determined because

L = t2 - t1 = (1/2)T - 2t1    (1.4-38)

Fig. 1.4-11 Sinusoidal response of the soft limiter.



To illustrate these results, consider an experiment in which it is observed that the
length of the clipped interval is L = 8 milliseconds for a sinusoidal input with an
amplitude A = 10 and frequency f = 50 Hz. What is the value of x0 above which the
soft-limiter clips the input waveform? To solve this, we first obtain from Eq. (1.4-14)
that

T = 1/f = 1/50 = 20 × 10⁻³ s = 20 ms    (1.4-39)

Then, from Eq. (1.4-38) we obtain

t1 = (1/2)[(1/2)T - L] = (1/2)[10 - 8] ms = 1 ms    (1.4-40)

Thus, from Eq. (1.4-33)

x0 = A sin(ωt1) = 10 sin(2 × π × 50 × 10⁻³) = 3.09    (1.4-41)
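The arithmetic of this example can be reproduced with a few lines of Python (a hypothetical verification sketch using only the standard math module).

    import math

    A, f, L = 10.0, 50.0, 8e-3          # amplitude, frequency (Hz), clipped length (s)

    T = 1.0 / f                          # Eq. (1.4-14): fundamental period, 20 ms
    t1 = 0.5 * (0.5 * T - L)             # from Eq. (1.4-38): L = T/2 - 2*t1, so t1 = 1 ms
    w = 2 * math.pi * f                  # Eq. (1.4-12): radian frequency
    x0 = A * math.sin(w * t1)            # Eq. (1.4-33): the clipping level

    print(round(T * 1e3, 3), "ms")       # 20.0 ms
    print(round(t1 * 1e3, 3), "ms")      # 1.0 ms
    print(round(x0, 2))                  # 3.09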

The no-memory systems and the functions discussed above are some of the
important ones worth keeping in mind. Other no-memory systems can be analyzed
in a manner similar to the technique illustrated above because a no-memory system
is a system for which the output value at any instant is simply a function of the input
value at the same instant. Remember also that a function, f(·), is just a rule by which
a value x is mapped into the value f(x). This view of a function will be important in
our later discussions.

1.5 SIMPLE SYSTEMS WITH MEMORY

We shall examine some simple systems with memory in this section in order to gain
an understanding of some of the basic problems that system memory introduces in
the determination of the system response to a given input. First, a system with
memory is simply defined as one that is not a no-memory system. Systems with
memory are ubiquitous in the physical world. As a simple illustration, consider the
pupillary system described by Eq. (1.1-3). That system clearly is one with memory.
This is made evident by considering what occurs when you walk from a sunny place
into a dimly lit room. In the sunny place, the diameter of your eye's pupil is small to
limit the light intensity at the retina. When you walk into the dimly lit room, a bit of
time is required for the diameter of your pupil to increase sufficiently for there to be
enough light at your retina. That is, the present diameter of your eye pupil is
dependent not just on the present light intensity, but also upon the past light inten-
sity. A system with memory is one for which input values at times that are not the
present time affect the present value of the output.

Fig. 1.5-1 Block diagram of a delay system.

1.5A The Delay System


A delay system is one for which the output y(t) is the input x(t) delayed by t0 seconds
so that y(t) = x(t - t0). This is one of the simplest systems with memory. A block
diagram representation of a delay system is shown in Fig. 1.5-1. It is not a no-
memory system because the output at any time does not depend on the input at that
time, but rather depends on the input to seconds earlier.

1.5B The Ideal Integrator


The response y(t) to the input x(t) of the ideal integrator is

y(t) = K ∫_{-∞}^{t} x(σ) dσ    (1.5-1)

The output of the ideal integrator, according to Eq. (1.5-1), is K times the area under
the infinite past of the input x(t). A block diagram representation is shown in Fig.
1.5-2.
To illustrate this, we shall determine the output y(t) when the input is the rectan-
gular pulse given by Eq. (1.5-2) and shown in Fig. 1.5-3.

x(t) = { A if 0 < t < T;  0 otherwise }    (1.5-2)

Equation (1.5-2) can be written more compactly in the form

x(t) = A r(t/T)    (1.5-3)

in which r(t) is the unit rectangle function, which is defined as

r(t) = { 1 if 0 < t < 1;  1/2 if t = 0 or 1;  0 otherwise }    (1.5-4)

This function is useful in expressing a rectangle of any width and location. For
example, consider the expression r((t - t0)/T). To determine its graph, we use the rule

Fig. 1.5-2 Block diagram of an ideal integrator.


Fig. 1.5-3 The rectangular input pulse for the example.

(or equivalently, the mapping) given by Eq. (1.5-4), from which we have that
r(·) = 1 whenever the value of whatever is in the parentheses is between
zero and one. Consequently we have that r((t - t0)/T) = 1 whenever 0 < (t - t0)/T < 1
or, for T > 0, whenever 0 < t - t0 < T or, equivalently, whenever t0 < t < T + t0.
The graph, shown in Fig. 1.5-4, is seen to be a pulse with an amplitude of one and a
width of T which starts at t = t0.
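The unit-rectangle rule of Eq. (1.5-4) is easily written as a small helper (a Python sketch, assuming NumPy; the edge value 1/2 follows the reconstruction of Eq. (1.5-4) given above), and the shifted, scaled argument (t - t0)/T then produces the pulse just described.

    import numpy as np

    def r(x):
        # Unit rectangle, Eq. (1.5-4): 1 inside (0, 1), 1/2 at the edges, 0 outside
        inside = (x > 0) & (x < 1)
        edge = (x == 0) | (x == 1)
        return np.where(inside, 1.0, np.where(edge, 0.5, 0.0))

    t = np.linspace(-1.0, 4.0, 11)
    t0, T = 1.0, 2.0
    # A pulse of amplitude one and width T that starts at t = t0
    print(r((t - t0) / T))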
To determine the output y(t) for the example, we use Eq. (1.5-1), which states that
for any given value of t, y(t) equals K times the area under the curve of x(σ) in the
interval -∞ < σ < t. Note that t is a constant in the integration. First we note that
the graph of x(σ) is as shown in Fig. 1.5-5. It is identical to the graph of Fig. 1.5-3
because the same rule x(·) is applied to t and to σ. This means that the same value of
the function is obtained when t and σ have the same value because then the value of
the expression in the parentheses is the same. It is important to fully understand the
concept that a function is simply a rule and also how to use the rule to obtain the
graph in a manner similar to that used to obtain the graphs of Fig. 1.5-4 and Fig. 1.5-5.
We determine y(t) by evaluating Eq. (1.5-1) for various ranges of t. First, for
t < 0 note that the integral is only over negative values of σ. Thus y(t) = 0 for t < 0
because x(σ) = 0 for σ < 0. Now for the range 0 < t < T, we note from Fig. 1.5-5
that y(t), which is equal to K times the area under x(σ) in the range -∞ < σ < t, is
simply K times the area of a rectangle with height A and width t so that y(t) = KAt.
Finally, for the range t > T, note that K times the area of x(σ) in the range
-∞ < σ < t is just K times the area of the rectangle with height A and width T
so that y(t) = KAT. In summary, we have obtained

y(t) = { 0 for t < 0;  KAt for 0 ≤ t ≤ T;  KAT for t > T }    (1.5-5)

Figure 1.5-6 is a graph of Eq. (1.5-5).

Fig. 1.5-4 Graph of r((t - t0)/T).

Fig. 1.5-5 Graph of x(σ).

The expression for y(t) can be written more compactly with the use of the unit-
step function defined in Section 1.4B as

y(t) = KA[tu(t) - (t - T)u(t - T)]    (1.5-6)

With the use of the unit-rectangle function, another compact expression for y(t) is

y(t) = KA[t r(t/T) + Tu(t - T)]    (1.5-7)

Note that we have expressed y(t) in four different forms [the graph of Fig. 1.5-6 and
the expressions of Eqs. (1.5-5), (1.5-6), and (1.5-7)]. Normally, there are many forms
available to express a given function. The form of the expression to use is that which
is most convenient. A graph is not less “mathematical” than an equation. Note that
the graph of the input and not its equation was used to obtain y(t) because it was
more convenient to use that form of the input expression for the integration.
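The ramp result of Eq. (1.5-5) can also be checked numerically; a minimal Python sketch (assuming NumPy, with arbitrary values of K, A, and T, and a running sum standing in for the integral) follows.

    import numpy as np

    K, A, T = 2.0, 1.5, 3.0
    dt = 0.001
    t = np.arange(-2.0, 10.0, dt)

    x = np.where((t > 0) & (t < T), A, 0.0)    # the rectangular pulse of Eq. (1.5-2)
    y = K * np.cumsum(x) * dt                  # running integral approximating Eq. (1.5-1)

    # Spot checks against Eq. (1.5-5): 0 for t < 0, K*A*t on (0, T), K*A*T for t > T
    for tc in (-1.0, 1.0, 2.0, 5.0):
        idx = int(np.argmin(np.abs(t - tc)))
        expected = 0.0 if tc < 0 else K * A * min(tc, T)
        print(tc, np.isclose(y[idx], expected, atol=0.01))   # all True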
The systems discussed to this point can be used as subsystems of a larger system
such as the system shown in Fig. 1.5-7. In the figure, y(t) is the output of the
summer. The symbol ⊕ represents a summer for which the output is equal to the
sum of its inputs. Note that the arrows represent the direction of signal flow; without
them, which signals are inputs and which are outputs would not be known. It is
important that they be included in every block diagram. From the block diagram we
then have

y(t) = x(t) - 2x(t - t0) + x(t - 2t0)    (1.5-8)

Also, using Eq. (1.5-1), the expression for z(t) is

z(t) = 3 ∫_{-∞}^{t} y(σ) dσ    (1.5-9)

The output z(t) can be determined using Eq. (1.5-9) once y(t) is determined from
Eq. (1.5-8). Of course, Eq. (1.5-8) can be substituted in Eq. (1.5-9) to obtain an
equation for z(t) in terms of only x(t). At times this is desirable; other times it can

Fig. 1.5-6 Graph of y(t) for the example.



Fig. 1.5-7 System for the example

lead to a cumbersome expression, which makes it difficult to fully understand and
use the expression.
For example, the system unit-step response, which is the system response to the
unit-step input x(t) = u(t), is easily determined by first determining y(t) from Eq.
(1.5-8):

y(t) = u(t) - 2u(t - t0) + u(t - 2t0)    (1.5-10)

With the definition of the unit-step function, this expression is easily seen to be

y(t) = 0 + 0 + 0 = 0      for t < 0
       1 + 0 + 0 = 1      for 0 < t < t0
       1 - 2 + 0 = -1     for t0 < t < 2t0        (1.5-11)
       1 - 2 + 1 = 0      for t > 2t0

A graph of y(t) is shown in Fig. 1.5-8. Now z(t) is easily obtained with the use of Eq.
(1.5-9) by following the integration procedure discussed in Section 1.5B. A graph of
z(t) is shown in Fig. 1.5-9.
A few different forms in which z(t) can be expressed analytically are

z(t) = 0            for t < 0
       3t           for 0 ≤ t < t0
       6t0 - 3t     for t0 ≤ t < 2t0        (1.5-12)
       0            for t ≥ 2t0

z(t) = 3tu(t) - 6(t - t0)u(t - t0) + 3(t - 2t0)u(t - 2t0)    (1.5-13)

Fig. 1.5-8 Graph of y(t).



Fig. 1.5-9 Graph of z(t).

and also

(1.5-14)

It is worth verifying the three different forms given in order to become more adept in
the use and visualization of the unit-step and the unit-rectangle functions.
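The unit-step response just obtained can be reproduced numerically; the Python sketch below (assuming NumPy, with t0 = 1 second chosen arbitrarily) builds y(t) from Eq. (1.5-10), integrates it as in Eq. (1.5-9), and compares the result with the closed form of Eq. (1.5-13).

    import numpy as np

    dt, t0 = 0.001, 1.0
    t = np.arange(-1.0, 5.0, dt)

    def u(x):
        # Unit step with u(0) = 1/2, as in Section 1.4B
        return np.where(x > 0, 1.0, np.where(x < 0, 0.0, 0.5))

    x = u(t)                                       # unit-step input
    y = x - 2 * u(t - t0) + u(t - 2 * t0)          # Eq. (1.5-10)
    z = 3 * np.cumsum(y) * dt                      # Eq. (1.5-9), running sum for the integral

    # Compare with the closed form of Eq. (1.5-13)
    z_closed = 3*t*u(t) - 6*(t - t0)*u(t - t0) + 3*(t - 2*t0)*u(t - 2*t0)
    print(float(np.max(np.abs(z - z_closed))) < 0.01)    # True (small numerical error only)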

1.6 A MODEL OF ECHOING

We’ll illustrate the use of the concepts developed to this point by developing a
simplified model of a physical situation in which echoing occurs. Every model of
a physical system is, to some extent, a simplified model because there always are
factors that are not included. For example, in developing a model of the Earth’s
motion about the sun, the effect of the gravitational pull of the other planets and the
asteroid belt on the other side of the planet Mars is often ignored. These forces are
small and so only slightly affect the Earth’s motion. If, for example, the effect of
Mars on the Earth’s motion were desired, it could be included by considering its
small effect on the Earth’s motion by a separate perturbation calculation. To
construct a model of a physical system, one must first determine which factors to
include and which to ignore. Very simplified models often are used to understand
some basic phenomenon such as the model we’ll develop to obtain a basic under-
standing of echoing. Complicated models that include many factors often obscure
the basic phenomenon but are constructed to obtain better numerical results. Thus,
the determination of the factors to include in a model is governed by the purpose for
which the model is being constructed and the relative size of the effect of each factor.
Our ability to construct simplified models of many phenomena has resulted in our
ability to determine the basic physical laws that govern those phenomena. For
example, the basic laws of gravitation were able to be determined because simplified
models in which only two bodies are interacting could be constructed. The simplified
model of the Earth moving about the sun obtained by ignoring the gravitational field
due to all the other celestial bodies is such an example. If the gravitational forces
were such that the effect of the other celestial bodies could not be ignored, then the
simplest system model would be so complicated that the basic laws of Newton
probably would never have been discerned.

Fig. 1.6-1 Block diagram model of echoing.

To develop a simplified model of a physical situation in which echoing occurs,
consider the situation in which I am standing with my back against the blackboard of
a classroom. As I speak, my voice travels to the rear wall of the classroom from
which it is reflected and travels back to the blackboard from which it again is
reflected. What travels to the rear wall is then the sum of my voice and that
which is reflected from the blackboard. Note that I am neglecting any reflections
from the side walls, floor, ceiling, or any objects in the classroom such as chairs or
students. Also, I am ignoring any distortions of my voice waveform as the speech
wave travels back and forth. This is why the model being developed is a simplified
model.
The simplified model of the classroom situation is obtained by constructing a
block diagram from the description given. First, the waveform directly in front of
me, y(t), is the sum of my voice waveform, x ( t ) , and the wave reflected from the
blackboard, z(t). This is represented as shown in Fig. 1.6-1, in which the symbol ⊕
represents a summer for which the output is equal to the sum of its inputs. Thus
y(t) = x ( t ) + z(t) in Fig. 1.6-1. Note that the arrows represent the direction of signal
flow; without them, which signals are input and which are outputs would not be
known. It is important that they be included in every block diagram.
Now, the sound wave, y(t), travels with a velocity of approximately 335 m/s
(which is approximately 1100 ft/s) to the rear classroom wall. Let the time required
to reach the rear wall be to seconds. The wave that reaches the rear wall thus is
q(t) = y(t - to). This is the output of the top delay system shown in Fig. 1.6-1. The
wave reflected is r(t) = K1q(t), where K1 is the acoustical coefficient of reflection
from the rear classroom wall. This wave then takes t0 seconds to travel back to the
blackboard. Thus the wave that reaches the blackboard is s(t) = r(t - t0). The wave
reflected from the blackboard is z(t) = K2s(t), where K2 is the blackboard
acoustical coefficient of reflection. The waveform z(t) is added to my voice wave-
form x ( t ) to form the wave y(t) directly in front of me which travels to the rear of the
classroom. Figure 1.6-1 is a block diagram of this simplified model of echoing.
Note that the system input is my voice, x(t). However, no output is specified. The
output of any system is simply the waveform in which we are interested. If the
waveform that is traveling from me to the students in class is of interest, then y(t)
would be chosen as the output and Fig. 1.6-1 could be redrawn as shown in Fig. 1.6-2.

Fig. 1.6-2 Block diagram model of echoing.



This block diagram can be simplified by noting that

z(t) = K2s(t)
     = K2r(t - t0)
     = K1K2q(t - t0)    (1.6-1)
     = K1K2y(t - 2t0)

Consequently, an equivalent block diagram is as shown in Fig. 1.6-3, where
K = K1K2.
The system shown is called a feedback system because some operation on the
output is fed back to the input. Feedback systems are an important class of systems
because a model of many physical systems is a feedback system. Feedback systems
will be discussed in more detail later in the text.
We’ll use the system of Fig. 1.6-3 to determine the output y(t) to two different
inputs x(t). The objective of this determination is not only to illustrate how to
determine the output, but also to obtain a clearer view of the role of memory in
the system. For all our calculations we shall assume that the system initially is at rest.
This means that all the system waveforms such as y(t) and z(t) in Fig. 1.6-3 are zero
before an input is applied to the system. From Fig. 1.6-3, the waveforms are related
by

y(t) = x(t) + z(t)          (1.6-2a)

z(t) = K y(t - 2t0)          (1.6-2b)

Example 1 As the first example, the output y(t) to the rectangular input given by
Eq. (1.6-3) will be determined.

x(t) = A r(t/t0)          (1.6-3)

Because the system initially is at rest and the input is zero for t < 0, we have that y ( t )
and z(t) are zero for t < 0. Mathematically, as we have discussed, the statement
y(t) = 0 for t < 0 means that whenever the value of whatever is in the parentheses
of the function y(.) is negative, the value of the function is zero. Thus y(t - 2t0) = 0
whenever t - 2t0 < 0 or when t < 2t0 so that from Eq. (1.6-2b) we have that
z(t) = 0 for t < 2t0. This result can be viewed physically from Fig. 1.6-3 by


Fig. 1.6-3 Equivalent block diagram.


1.6 A MODEL OF ECHOING 25

noting that any change of y(t) from zero will take 2t0 seconds to propagate through the delay in the feedback path, so that if y(t) is zero for t < 0, then z(t) = 0 for t < 2t0. This is shown in Fig. 1.6-4. We now can use Eq. (1.6-2a) to determine y(t) in the interval 0 < t < 2t0 because we now know x(t) and z(t) in that time interval. Because z(t) = 0 in that time interval, we then have, as shown in Fig. 1.6-4, that y(t) = x(t) in the interval 0 < t < 2t0.
We now can use Eq. (1.6-2b) to determine z(t) in the interval 2t0 < t < 4t0.
Because y(t) = A in the interval 0 < t < t0, we conclude that y(t - 2t0) = A when 0 < t - 2t0 < t0 or, equivalently, when 2t0 < t < 3t0 so that, as shown in Fig. 1.6-4, z(t) = KA in that interval. Also, because y(t) = 0 for t0 < t < 2t0, we also have that y(t - 2t0) = 0 when t0 < t - 2t0 < 2t0 or, equivalently, when 3t0 < t < 4t0 so that,
as shown in the figure, z(t) = 0 in that interval. Note that, in accordance with Eq.
(1.6-2b), z(t) is K times y(t) delayed by 2t0 seconds.
Now that z(t) and x(t) are known in the interval 2t0 < t < 4t0, we can use Eq. (1.6-2a) to determine y(t) in that range. Then, once y(t) is known, Eq. (1.6-2b) can be used as above to determine z(t) in the next 2t0 time interval. In this manner, y(t) can be determined for all time as shown in Fig. 1.6-4, which is drawn for K ≈ 0.7. Note
the echo sequence; each succeeding echo is K times the previous one so that the
echo pulses decay exponentially. The essence of echoing thus is contained in our
model.
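The step-by-step bookkeeping of Example 1 is also easy to carry out on a computer by marching Eqs. (1.6-2) forward on a sampled time axis. The Python fragment below is only an illustrative sketch; the step size and the values A = 1, t0 = 1 s, and K = 0.7 are assumptions made here, not values from the text. Setting K = -1 and widening the pulse to 4t0 reproduces Example 2, which follows.

import numpy as np

# Simulate y(t) = x(t) + K*y(t - 2*t0) on a uniform time grid.
# Assumed illustration values: t0 = 1 s, step dt = 0.01 s, A = 1, K = 0.7.
t0, dt, K, A = 1.0, 0.01, 0.7, 1.0
t = np.arange(0.0, 10 * t0, dt)
x = np.where(t < t0, A, 0.0)            # rectangular pulse, t0 seconds wide (Eq. 1.6-3)

delay = int(round(2 * t0 / dt))         # the 2*t0 delay in the feedback path
y = np.zeros_like(x)
for n in range(len(t)):                 # system initially at rest: y = 0 for t < 0
    fed_back = K * y[n - delay] if n >= delay else 0.0
    y[n] = x[n] + fed_back              # Eq. (1.6-2a) with z(t) = K*y(t - 2*t0)

for k in range(5):
    sample = int((2 * k * t0 + t0 / 2) / dt)    # middle of the k-th echo interval
    print(f"echo {k}: amplitude = {y[sample]:.3f}")   # prints A, K*A, K^2*A, ...

Each printed echo is K times the previous one, in agreement with Fig. 1.6-4.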

Example 2 The system response to a rectangular pulse t0 seconds wide was determined in the first example. In this example, the system response to a pulse 4t0 seconds wide, as given by Eq. (1.6-4), will be determined for the case in which K = -1.

x(t) = A r(t/4t0)          (1.6-4)


Fig. 1.6-4 The waveforms for example 1.



Because the system initially is at rest and the input is zero for t < 0, we have that
y(t) and z(t) are zero for t < 0. Using the same argument as in the previous example,
we then conclude that z(t) = 0 for t < 2t0. We now can determine y(t) in the interval
0 < t < 2t0 by using Eq. (1.6-2a) because x(t) and z(t) are known in that time
interval. As shown in Fig. 1.6-5, y(t) = x(t) in that time interval since z(t) = 0 there.
We now use Eq. (1.6-2b) to determine z(t) in the interval 2t0 < t < 4t0. Because y(t) = A in the interval 0 < t < 2t0, we conclude as in the last example that z(t) = KA = -A in the interval 2t0 < t < 4t0. Consequently, we have from Eq. (1.6-2a) that y(t) = A - A = 0 in that interval. Then, from Eq. (1.6-2b), we have that z(t) = 0 in the interval 4t0 < t < 6t0. Thus, from Eq. (1.6-2a), we have that y(t) = 0 in the interval 4t0 < t < 6t0. Continuing in this manner, we arrive at the
waveforms shown in Fig. 1.6-5.
Note that for the system with K = - 1, the response to an input pulse that is 4t0
seconds wide is a pulse that is only 2t0 seconds wide. This is pulse compression. In
many applications such as radar, very narrow pulses are required. The principle
illustrated by this example is often used to obtain pulses that are narrower than can be achieved by standard electronic circuits.

The input in both examples is a rectangular pulse; the pulse width is the only
difference. Yet the system responses (with K = - 1) are significantly different output
waveforms. This occurs in systems with memory because the output at any instant
depends on the past values of the input waveform so that the output at each time
instant depends on the whole input waveform. This is not the case for no-memory
systems. As a result, the response to any given input is not obvious. Furthermore,
although we were able to determine the response for the two examples, the analysis
would become unwieldy if the system were not so simple. A better approach is
needed to analyze systems with memory. We begin the development of a better
approach in the next chapter.


Fig. 1.6-5 The waveforms for example 2.



PROBLEMS

1-1 Each of the following is a description of a no-memory system with the input
x(t) and response y(t). For which does an inverse exist? If an inverse does not
exist, determine whether an inverse would exist if the amplitude of the
allowed inputs was constrained. If so, determine the amplitude range.
(a) ya(t) = sin[x(t)]
(b) yb(t) = tan[x(t)]
(c) yc(t) = x2(t) - 3x(t) + 2
(d) yd(t) = u[x(t)]
(e) = w - 2i3
(f) yf(t) = e^{x(t)}
(g) yg(t) = arctan[x(t)]
1-2 The input of the system shown below is the voltage, x(t), and the output is the
voltage, y(t). The two capacitors are initially uncharged. Are there conditions
on the time variation of the two capacitor values for which the system is time-
invariant?

1-3 Sketch each of the following functions for T = 3 and to = 0.5. Make certain
all critical values are labeled.
(a) f ( t >= (i
- I,r(y>

(c) h ( t ) = sin(5nt)r
(" it,>
~

1-4 Show that r(t/T) = u(t) - u(t - T).

1-5 Show that u[f(t)] = 1/2 + (1/2) sgn[f(t)] for any function, f(t).
1-6 Let Ts be the fundamental period of a periodic waveform, s(t). Show that nTs for n = 1, 2, 3, 4, ... must be periods of s(t).

1-7 (a) Show that the waveform s(t) = C0 + Σ_{n=1}^{∞} Cn cos(nω0t + θn) is periodic. This expression is called a Fourier series representation of s(t).
(b) What is the fundamental period of s(t) in terms of the parameters C0, Cn, ω0, and θn?

1-8 Let z(t) = x(t) + y(t), where x(t) is a periodic waveform with a fundamental period equal to Tx and where y(t) is a periodic waveform with a fundamental period equal to Ty. What are the conditions for which z(t) is periodic? For those cases, determine its fundamental period, Tz.
1-9 For each of the functions below, determine whether it is periodic and, if so, determine its fundamental period.
(a) xa(t) = sin 2πt + 3 sin 4πt
(b) xb(t) = sin t + 3 sin 2t
(c) xc(t) = 2 sin t + sin √2 t
(d) xd(t) = 2 sin 6t + 3 cos 7t
1-10 Sketch the input, x(t), and the output, y(t), of
(a) A half-wave rectifier
(b) A full-wave rectifier for the periodic input with a fundamental period of
16 and for which

1-11 The transfer characteristic of a no-memory system is shown below.

The system input, x(t), is the triangular waveform shown below.

Draw a sketch of the system output, y(t). Label all important amplitudes and
times of y(t).

1-12 The transfer characteristic of a no-memory system is shown below.


The system input is x(t) = A cos(2πft), where A = 2 and f = 10 Hz. Draw a sketch of the system output, y(t), for 0 < t < 100 ms. Label all important amplitudes and times of y(t).

1-13 The transfer characteristic of a no-memory system is shown below.


Draw a sketch of the system output, y(t), for -8 < t < 8 to the input, x(t),
shown below. Label all important amplitudes and times of y(t).

1-14 The transfer characteristic of a no-memory system is shown below.


The system input is x(t) = A sin(2πft), where A = 20 and f = 10 Hz. Draw a sketch of the system output, y(t), for 0 < t < 100 ms. Label all important amplitudes and times of y(t).

1-15 The transfer characteristic of a no-memory system is shown below.


The system input is x(t) = A cos(2πft), where A = 6 and f = 10 Hz. Draw a sketch of the system output, y(t), for 0 < t < 100 ms. Label all important amplitudes and times of y(t).

1-16 A square wave with a fundamental period equal to 12.5 μs and amplitude equal
to 3 is to be generated using a sinusoidal oscillator and a hard limiter. Draw a
block diagram of the system and specify the frequency of the oscillator.

1-17 It is desired that the output of a soft limiter with the sinusoidal input x(t) = A sin(ω0t + φ) be a good approximation of a square wave. For this, it is desired to design the system so that the soft-limiter output, y(t), will be as shown in Fig. 1.4-10 with |y(t)| = y0 for at least 98 percent of the time. Determine the required amplitude, A, of the input in terms of x0.

1-18 Let f ( t ) = e-*'u(t). Determine:


( 4 1, = J . f ( t ) dt
(b) Ib = j_",f(t> dt
(c) 1, = J-", If(t)I dt
(d) zd = Srrn f ( t 2 f ( - t ) dt

1-19 Let y(t) = ∫_{-∞}^{t} x(τ) dτ. Determine and sketch y(t) for each of the following cases:
(a) x(t) = sin(ω0t)u(t)
(b) x(t) = cos(ω0t)u(t)
(c) x(t) = e^{-αt}u(t), α > 0
(d) x(t) = e^{-α|t|}, α > 0

1-20 Determine and sketch the response, y(t), of the system shown below to the
input x(t) = u(t). Label all important amplitudes and times.

1-21 Determine and sketch the response, y(t), of the system shown below to the
input x(t) = r(t). Label all important amplitudes and times.

1-22 Determine the response, y(t), of the system shown below to the unit
rectangular input x(t) = r(t/2). Sketch and label all important amplitudes
and times. For credit, your work must be shown.

1-23 Determine the response, y(t), of the system shown below to the input
x(t) = 2r(t). Sketch and label all important amplitudes and times.


1-24 Determine and sketch the response, y(t), of the system shown below to the unit step input x(t) = u(t). Label all important amplitudes and times.

1-25 The feedback system shown below is initially at rest. Determine the system
response for x(t) = r(t). Sketch the system response y(t) and label all
important amplitudes and times.

1-26 The feedback system shown in the first diagram is initially at rest, and the input is x(t) = r(t/4) as shown in the second diagram.


Sketch the system response, y(t) and label all important amplitudes and times.

1-27 The feedback system shown in the first diagram is initially at rest, and the
input is x(t) = Ar(t/4) as shown in the second diagram. Determine and
sketch the waveforms y(t) and z(t). Label all important amplitudes and times.

1-28 The thermostatic control of the temperature in a room is an example of


feedback. For this example, the input is the desired temperature and the
output is the actual temperature at the thermostat. The difference is used to
control a heater in the room. Draw a simplified block diagram of the physical
system.
CHAPTER 2

LINEAR TIME-INVARIANT (LTI) SYSTEMS

The basic reasons why systems must be classified were discussed in Section 1.2. This motivated the classification of systems as being either time-invariant (TI) or time-varying (TV). As discussed in Section 1.3, this text is concerned mainly with TI systems. But even this classification is too general to be of much use. So we further classified systems as being either no-memory or with memory. The analysis of no-
memory systems was then discussed in Section 1.4. Some structurally simple
systems with memory were discussed in Sections 1.5 and 1.6. There we saw that
the analysis of systems with memory could be rather complicated, especially for
ones which are structurally complex. Thus a more refined approach is needed for
such systems. A problem with the development of a more refined approach is that
even the class of systems with memory which are time-invariant is too broad. A
further classification is needed to develop the desired refined approach. The further
classification that has been found useful is to classify systems as being either linear
or nonlinear. A nonlinear system is simply a system that is not linear so that we need
to just define a linear system.

2.1 LINEAR SYSTEMS

Consider a system with the response y(t) to the input x(t) as shown in Fig. 2.1-1. If, for the given system, an input x1(t) produces the output y1(t) and an input x2(t) produces the output y2(t), then the system is a linear system if the input x(t) = C1 x1(t) + C2 x2(t) produces the output y(t) = C1 y1(t) + C2 y2(t), in which C1 and C2 are arbitrary constants. For the system to be linear, this condition must be true for any two inputs, x1(t) and x2(t), and any two constants, C1 and C2. Note

Fig. 2.1-1 Depiction of a system.

that the constants need not be real; they can be any two complex numbers. This
definition of a linear system derives from the exactly parallel mathematical definition
of a linear mapping.
A shorthand notation that we shall use is x(t) → y(t), which means that the input x(t) produces the output y(t). Using this shorthand notation, the definition of linearity can be expressed as follows: If, for any inputs, x1(t) and x2(t),

x1(t) → y1(t) and x2(t) → y2(t)          (2.1-1a)

then

C1 x1(t) + C2 x2(t) → C1 y1(t) + C2 y2(t)          (2.1-1b)

where C1 and C2 are arbitrary complex constants. In words, a linear system is one
for which the response to a linear combination of inputs is the same linear combina-
tion of the responses to each input individually.
As a simple illustration, a square-law device is a system for which y(t) = x2(t). To apply the definition, we let the system input be

x(t) = C1 x1(t) + C2 x2(t)          (2.1-2a)

The corresponding system response then is

y(t) = [C1 x1(t) + C2 x2(t)]2 = C12 x12(t) + 2 C1 C2 x1(t)x2(t) + C22 x22(t)          (2.1-2b)

The square-law device thus is not a linear system because

y(t) ≠ C1 x12(t) + C2 x22(t)          (2.1-2c)

The right-hand side of Eq. (2.1-2c) is, according to the linearity definition, the required output for it to be a linear system. To obtain a deeper understanding of the linearity definition, we consider some special cases of the definition.
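The definition can also be tested numerically on sampled waveforms. The short Python check below is only a sketch; the test inputs, the constants, and the gain-plus-delay comparison system are arbitrary choices made here, not examples from the text. It shows the square-law device failing Eqs. (2.1-1) while the gain-plus-delay system satisfies them.

import numpy as np

t = np.arange(-2.0, 4.0, 0.01)
x1 = np.sin(2 * np.pi * t)                         # arbitrary test inputs
x2 = np.exp(-t**2)
C1, C2 = 2.0, -0.5 + 1.5j                          # the constants may be complex

def satisfies_eq_2_1_1(system):
    # Check generalized superposition, Eqs. (2.1-1), for this pair of inputs.
    lhs = system(C1 * x1 + C2 * x2)                # response to the combined input
    rhs = C1 * system(x1) + C2 * system(x2)        # same combination of the responses
    return np.allclose(lhs, rhs)

square_law = lambda x: x**2                        # y(t) = x^2(t), as in Eq. (2.1-2)
gain_delay = lambda x: 3.0 * np.roll(x, 100)       # gain of 3 with a 1 s delay (circular on this grid)

print("square-law device:", satisfies_eq_2_1_1(square_law))   # prints False
print("gain plus delay:  ", satisfies_eq_2_1_1(gain_delay))   # prints True

Of course, passing such a spot check does not prove that a system is linear; it only fails to find a counterexample for the particular inputs and constants tried.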

Special Case 1 First consider the special case for which C2 = 0. For this case, the
linearity definition states the following:
If

x1(t) → y1(t)          (2.1-3a)

then

C1 x1(t) → C1 y1(t)          (2.1-3b)
In words, this states that if the input amplitude of a linear system is changed by a factor of C1 but the shape of the input waveform remains unchanged, then the shape of the corresponding output waveform remains unchanged and only its amplitude is changed by the same factor of C1. This property is called the homogeneous property.
Thus, every linear system is homogeneous. The converse, however, is not true. That
is, a system can be homogeneous and not be linear. For example, consider the system
for which the response y(t) to the input x(t) is

(2.1-4)

This system is homogeneous, but it does not satisfy the linearity definition given by
Eqs. (2.1-1).
For the special case in which both C1 and C2 are zero, we have from Eqs. (2.1-1) that if x(t) = 0, then y(t) = 0. This can be written in our shorthand notation as 0 → 0. Note that because, for a linear system, the output waveform amplitude
(but not the shape) changes as the input waveform amplitude (but not the shape)
changes, a plot of the maximum output amplitude versus the maximum input ampli-
tude can be made. From Eqs. (2.1-3), this plot would be a straight line that passes
through the origin.

Special Case 2 Now consider the special case for which C1 = C2 = 1. For this special case, Eqs. (2.1-1) state the following:
If

x1(t) → y1(t) and x2(t) → y2(t)          (2.1-5a)

then

x1(t) + x2(t) → y1(t) + y2(t)          (2.1-5b)

That is, the response of a linear system to a sum of inputs is the sum of the responses
to each individual input. This is called superposition. Thus, every linear system
satisfies superposition. Note that Eqs. (2.1-1) are a generalized form of Eqs.
(2.1-5). For this reason a linear system can be defined as a system that satisfies
generalized superposition.
We showed above that if a system is homogeneous, it is not necessarily linear. But
what of superposition? It can be shown that a system can satisfy superposition and
not be linear. However, a system that satisfies superposition is very close to being
linear, and thus examples of systems that satisfy superposition and are not linear are
rather contrived. The reason is that Eqs. (2.1-5) imply Eqs. (2.1-1) for the case in

which the constants C1 and C2 are rational constants. Thus the counterexample would be one where Eqs. (2.1-1) are satisfied when C1 and C2 are rational constants but not when C1 and C2 are irrational constants. One would not expect to encounter
a physical system with this property. Thus, if a physical system satisfies superposi-
tion, there is a very good chance (but not a certainty) that it also satisfies generalized
superposition and thus is linear. However, a system definitely is linear if it is
homogeneous and also satisfies superposition. Thus, to show that a system is
linear, one must show either that it satisfies generalized superposition [Eqs. (2.1-1)] or that it is homogeneous [satisfies Eqs. (2.1-3)] and also satisfies superposition [Eqs. (2.1-5)].
To illustrate our discussion, consider the system shown in Fig. 2.1-2. The system
is a circuit with the input being the voltage x(t) and the output being the voltage y(t).
For the given system,

(2.1-6)

This system does not satisfy generalized superposition [Eqs. (2.1-1)] and thus is not linear. Of course, it is not necessary to check Eqs. (2.1-1) for this system to determine whether it is linear because it is easily seen that the system is not homogeneous since, from Eq. (2.1-6), y(t) is equal to a nonzero constant determined by E and the resistor values when x(t) = 0, so that y(t) ≠ 0 when x(t) = 0. Note that the system of Fig. 2.1-2 is a linear circuit, but it is not a linear
system. In circuit theory, one is concerned with the study of the interactions between
the elements of which the circuit is composed; in system theory, however, one is
concerned with the study of the mapping of inputs to outputs. The system of Fig.
2.1-2 is a linear circuit because it is composed of linear elements, while it is not a
linear system because the mapping of inputs to outputs does not satisfy generalized
superposition. Care must be taken not to confuse circuit theory and system theory because the different concerns lead to significant differences in the theories developed for their study.
It should be noted that no physical system is truly linear. The parameter values of
any physical system will change for sufficiently large values of the input amplitude.
For example, the resistor values in the system of Fig. 2.1-2 will change due to
overheating if the current is sufficiently large. However, the resistor values will
change negligibly for some range of current values so that the system can be

Fig. 2.1-2 System for the example.



modeled as a linear system if this range encompasses the range of current values of
interest. We similarly model many physical systems as being linear if they satisfy
generalized superposition for input amplitudes over the range of interest. Also note
that no physical system is truly time-invariant because physical components do age.
However, if the system mapping of inputs to outputs doesn't change measurably over
the time interval of interest, then a good model is a time-invariant one.

Again, as mentioned in Section 1.3, in making a model of a physical system, we


are not trying to represent it exactly because the best model of any physical system is the physical system itself. Rather, the attempt is to make as simple a model as
possible for which the difference between its output and that of the physical
system being modeled is acceptably small for the class of inputs of interest. The
model then serves as a basis not only for calculating responses of the physical
system, but also, because of its simpler structure, to gain a deeper understanding
of the physical system behavior.

2.2 LINEAR TIME-INVARIANT (LTI) SYSTEMS

The class of systems with which we are mainly concerned in this text is the class of
systems that are both linear and time-invariant. The theory of linear time-invariant
(LTI) systems which we shall develop is of central importance in system theory
because many physical systems can be accurately modeled as LTI systems. Also, the
theory of LTI systems forms the basis of the theory for linear time-varying (LTV)
systems and also for some important classes of nonlinear systems.¹
Let the input of an LTI system be composed of a linear combination of waveforms as given by Eq. (2.2-1a):

x(t) = Σ_n Cn un(t)          (2.2-1a)

Then, because the system is linear, it satisfies generalized superposition so that the system response is

y(t) = Σ_n Cn wn(t)          (2.2-1b)

in which wn(t) is the system response to the input un(t). Now, if each input waveform is a translation of a particular waveform so that un(t) = u(t - τn), then because the system is TI, wn(t) = w(t - τn), where u(t) → w(t). We thus have that if the input of

¹ This extension to nonlinear systems is developed in M. Schetzen, The Volterra and Wiener Theories of Nonlinear Systems, John Wiley & Sons, 1980, updated and reprinted by Krieger Pub. Co., 1989.

an LTI system is composed of a linear combination of a particular waveform and


translates of that waveform as given by

x(t) = Σ_n Cn u(t - τn)          (2.2-2a)

then the response of the LTI system is

y(t) = Σ_n Cn w(t - τn)          (2.2-2b)

where w(t) is the LTI system response to u(t). This result states that if we know the
system response of an LTI system to a particular input, then as given by Eqs. (2.2-2),
we can determine the system response to any input which can be expressed as a
linear combination of the particular input and translates of that waveform. Equations
(2.2-2) are a statement of a fundamental property of LTI systems. This fundamental
property is the basis of the theory we shall develop for the analysis of LTI systems.
However, we consider some illustrative examples first to gain a better appreciation of
the implications of this property.
Assume that an experiment was performed on a particular LTI system from which it was observed that the response to the input

x1(t) = r(t)          (2.2-3a)

is the output

y1(t) = t r(t) + (2 - t) r(t - 1)          (2.2-3b)

as shown in Fig. 2.2-1. The system response to a wide variety of input waveforms can be determined from this one experimental result. For example, the response to

x2(t) = r(t/2)          (2.2-4a)

which is shown in Fig. 2.2-2, can be determined by noting that we can express x2(t) as

x2(t) = r(t) + r(t - 1) = x1(t) + x1(t - 1)          (2.2-4b)

Thus, in accordance with Eqs. (2.2-2), the response of the LTI system to the input x2(t) is

y2(t) = y1(t) + y1(t - 1)          (2.2-4c)

Fig. 2.2-1 The observed input and output.

With the use of Eq. (2.2-3b), this can be expressed in the form

y2(t) = t r(t) + r(t - 1) + (3 - t) r(t - 2)          (2.2-4d)

A graph of y2(t) is shown in Fig. 2.2-2. This result should be verified graphically by
using Fig. 2.2-1.
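Eqs. (2.2-2) translate directly into operations on sampled waveforms. The Python sketch below (the sampling step is an arbitrary choice made here, not part of the text) builds y2(t) = y1(t) + y1(t - 1) from the observed response of Eq. (2.2-3b) and checks it against Eq. (2.2-4d).

import numpy as np

dt = 0.001
t = np.arange(0.0, 4.0, dt)
r = lambda s: ((s > 0) & (s < 1)).astype(float)        # unit rectangle r(t)

y1 = t * r(t) + (2 - t) * r(t - 1)                     # observed response, Eq. (2.2-3b)

def shifted(f, tau):
    # Return the samples of f(t - tau); tau is a multiple of dt, and f is zero before t = 0.
    n = int(round(tau / dt))
    g = np.zeros_like(f)
    g[n:] = f[:len(f) - n]
    return g

y2 = y1 + shifted(y1, 1.0)                             # Eqs. (2.2-2) applied to x2(t) = x1(t) + x1(t - 1)
y2_eq = t * r(t) + r(t - 1) + (3 - t) * r(t - 2)       # Eq. (2.2-4d)

mismatch = np.sum(np.abs(y2 - y2_eq) > 1e-9)
print("samples that disagree with Eq. (2.2-4d):", mismatch, "out of", len(t))

Any disagreement is confined to, at most, a few samples at the jump points of r(·), where the finite grid straddles a discontinuity.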
Now consider the input shown in Fig. 2.2-3, which can be expressed as

x3(t) = r(t) - r(t - 1)          (2.2-5a)

In accordance with Eqs. (2.2-2) and (2.2-3), the system response to this input is

y3(t) = y1(t) - y1(t - 1)          (2.2-5b)

With the use of Eq. (2.2-3b) this can be expressed as

y3(t) = t r(t) + [3 - 2t] r(t - 1) - [3 - t] r(t - 2)          (2.2-5c)

A graph of x3(t) and y3(t) is shown in Fig. 2.2-3. This result also should be verified
graphically with the use of Fig. 2.2-1.
As a third example, consider the input shown in Fig. 2.2-4, which is

x4(t) = r(t) - r(t - 1/2)          (2.2-6a)
      = x1(t) - x1(t - 1/2)

Again, in accordance with Eqs. (2.2-2), the system response to this input is

y4(t) = y1(t) - y1(t - 1/2)          (2.2-6b)

Fig. 2.2-2 The input x2(t) and output y2(t).



Fig. 2.2-3 The input x3(t) and output y3(t).

and substituting the expression from Eq. (2.2-3b), this can be expressed as

y4(t) = t r(t) - [t - 1/2] r(t - 1/2) + [2 - t] r(t - 1) - [5/2 - t] r(t - 3/2)          (2.2-6c)

A graph of the input and corresponding output is shown in Fig. 2.2-4. Note in Eq. (2.2-6a) that the two waveforms x1(t) and x1(t - 1/2) overlap in the expression for x4(t). Even so, the result expressed by Eqs. (2.2-2) can be applied as in this example.
As a last example, the system response to the input x5(t) = r(2t) will be determined. For this determination, first note that we can express this input in the form

x5(t) = r(t) - r(t - 1/2) + r(t - 1) - r(t - 3/2) + ...

      = Σ_{n=0}^{∞} (-1)^n r(t - n/2)          (2.2-7a)

      = Σ_{n=0}^{∞} (-1)^n x1(t - n/2)

You should verify this result graphically. Consequently, in accordance with Eqs. (2.2-2), the system response is

y5(t) = Σ_{n=0}^{∞} (-1)^n y1(t - n/2)          (2.2-7b)

A graph of y5(t) can be obtained by substituting Eq. (2.2-3b) in Eq. (2.2-7b) and
manipulating the resulting messy expression. A better procedure is to determine the

Fig. 2.2-4 The input x4(t) and output y4(t).



waveform by plotting each term of Eq. (2.2-7b) and graphically adding the straight
lines in each interval. This procedure is particularly simple in this example because
the sum of straight lines is a straight line, and a straight line is determined by just
two points on the line. Clearly, the two points to choose are the ends of each time
segment. Before launching into a calculation, it is good practice to examine the
various available procedures and choose the one that is simplest and lends insight
to the solution. The result for this example is shown in Fig. 2.2-5.

2.3 THE CONVOLUTION INTEGRAL

The basic result used for the development of a general characterization of an LTI
system is given by Eqs. (2.2-2), which was derived and discussed in Section 2.2.
From that result, we saw that the response of a given LTI system to a large class of
inputs can be determined from knowledge of the system response, w(t), to one
particular input, u(t). The class of inputs is that class which can be expressed as a
linear combination of translates of u(t) as given by Eq. (2.2-2a). The input, u(t), and
corresponding system response, w(t), are thus said to be a characterization of the
given LTI system for that class of inputs. A difficulty with the characterization as
discussed in Section 2.2 is that it often is difficult to determine the expression of a
given input, x(t), in the form given by Eq. (2.2-2a). Without such an expression, the
system response, y(t), to the input, x(t), cannot be determined. What is needed is to
choose one particular input, uo(t) for which the expression of any given input, x(t), in
the form of Eq. (2.2-2a) is easily obtained. With such a choice, the response wo(t)of
a given LTI system to the particular input uo(t) would be a general characterization of
the given system because we then could obtain a general expression for y(t) in terms
of x ( t ) and the response wo(t).
To determine a good choice for u0(t), consider a segment of some arbitrary waveform for x(t) and note that it can be approximated by a piecewise-constant curve, xε(t), as shown in Fig. 2.3-1. The approximation xε(t) can be considered as a sequence of contiguous rectangles as indicated by the dashed lines. The width of each rectangle is ε seconds. As shown, the rectangle midpoints are at t = kε for k = 0, ±1, ±2, ±3, .... The height of the rectangle with its midpoint at t = nε is chosen to be x(nε). Thus the height of the rectangle with its center at t = 0 is x(0), the height of the rectangle to its right is x(ε), and the height of the rectangle to its left is x(-ε).

Fig. 2.2-5 The input x5(t) and output y5(t).




Fig. 2.3-1 Illustrating the step approximation of a waveform.

Note that the difference between x(t) and xε(t) can be made as small as desired by choosing ε sufficiently small. This can be expressed mathematically as

x(t) = lim_{ε→0} xε(t)          (2.3-1)

To obtain the system response y(t) to the input x(t), we shall obtain yε(t), which is the system response to xε(t), and then let ε → 0. We then shall assume that

y(t) = lim_{ε→0} yε(t)          (2.3-2)

That is, we shall assume in our development that, for any input x(t), the difference, y(t) - yε(t), goes to zero as the input approximation error, x(t) - xε(t), goes to zero.
Systems for which this is true are called continuous systems. Thus our assumption is
that the LTI system is continuous. The continuity of an LTI system is closely allied
with its stability, and so we shall postpone a discussion of LTI system continuity and
its implications until Section 3.7 after our discussion of LTI system stability.
A function that is useful for our development is δε(t), which is shown in Fig. 2.3-2. As shown, it is a rectangle with its midpoint at t = 0, a width of ε seconds, and its area is equal to one. In terms of this function, a rectangle with its center at t = nε and height equal to x(nε) in Fig. 2.3-1 can be expressed as εx(nε)δε(t - nε). The step approximation of x(t) is the sum of all these rectangles so that it can be expressed as

xε(t) = Σ_{n=-∞}^{∞} εx(nε) δε(t - nε)          (2.3-3a)

Fig. 2.3-2 Graph of δε(t).



Note that Eq. (2.3-3a) is exactly in the form of Eq. (2.2-2a) with Cn = εx(nε), u(t) = δε(t), and τn = nε. Thus, in accordance with Eq. (2.2-2b), the LTI system response to xε(t) is

yε(t) = Σ_{n=-∞}^{∞} εx(nε) hε(t - nε)          (2.3-3b)

where hε(t) is the response of the given LTI system to δε(t). We now let ε → 0 to obtain y(t) in accordance with our discussion above. In the limit ε → 0, Eq. (2.3-3b) becomes

y(t) = ∫_{-∞}^{∞} x(σ) h(t - σ) dσ          (2.3-4a)

where

h(t) = lim_{ε→0} hε(t)          (2.3-4b)

The integral in Eq. (2.3-4a) is called the convolution integral. Before discussing the
convolution integral, let us examine the limiting process to see that Eq. (2.3-4a) is
the correct limit of Eq. (2.3-3b).
First note that lim_{ε→0} does not mean that ε becomes zero; rather, it means that ε becomes arbitrarily small. For example, consider the unit step function, u(t), defined by Eq. (1.4-6). Note that u(0) = 1/2. However, if ε is nonnegative, then lim_{ε→0} u(ε) = 1 because no matter how small ε is (but not zero), the value of u(ε) is one. The value one is called the right-hand limit of the unit step function at zero since it is the value of the limit as the point zero is approached from the right. A shorthand expression for this right-hand limit is u(0+) = 1. Also note that lim_{ε→0} u(-ε) = 0 because no matter how small ε is (but not zero), the value of u(-ε) is zero. Zero is called the left-hand limit of the unit step function at zero and can be expressed in shorthand notation as u(0-) = 0. To further illustrate a limit, consider the expression lim_{ε→0} 1/ε = ∞. We do not mean by ε → 0 that there is a number such that the reciprocal of that number is infinite (remember that division by zero is not an allowed operation).² It just means that as ε becomes arbitrarily small, the reciprocal 1/ε becomes arbitrarily large, so that 1/ε can be made as large as desired by making ε sufficiently small. In summary, lim_{ε→0} just means the limit as ε becomes arbitrarily small and not the value when ε has the value of zero.
Recall that hε(t) is the LTI system response to δε(t). Thus, in accordance with Eq. (2.3-4b), h(t) is the limit as ε → 0 of the LTI system response to a rectangular pulse

² If division by zero were allowed, then we could show that any two numbers are equal. For example, since 1 · 0 = 2 · 0, we could divide both sides of the equation by zero to obtain 1 = 2. Division by zero is the
basis of many mathematical puzzles; the cleverness of those puzzles is in the method by which the division
by zero is concealed.

δε(t) shown in Fig. 2.3-2. Physically, as ε → 0, hε(t) keeps changing because the LTI system response is different for different values of the rectangular width, ε. However, as ε → 0, hε(t) approaches some waveform that we call h(t). A useful shorthand expression is to say that h(t) is the LTI system response to a rectangular pulse with a width ε = 0+. Note that h(t) is not the LTI system response to a rectangular pulse with zero width and area equal to one because such a pulse is mathematically meaningless. Rather, as discussed in the next section, it is the LTI system response to a rectangular pulse with infinitesimal width and area equal to one.
We denote a pulse with an infinitesimal width, ε = 0+, and area equal to one by δ(t) and call it the unit impulse.³ The adjective "unit" refers to the fact that the area of δ(t) is one, and "impulse" refers to the fact that its width is ε = 0+. The unit impulse, δ(t), is represented as shown in Fig. 2.3-3. The area of the impulse is indicated by a value in the parentheses next to the arrowhead (which is one in this case). Note that the impulse, δ(t), only has meaning in terms of a limit as ε → 0. It is important to keep this in mind in all your applications of the impulse. This is discussed in more detail in Section 2.4. In terms of this definition of the unit impulse, we say that h(t) is the LTI system response to δ(t). We thus call h(t) the system unit-impulse response. Again note that this means that h(t) is the limit as ε → 0 of the LTI system response to δε(t).⁴
It now is easy to see that the integral, Eq. (2.3-4a), is the correct limit of the
summation, Eq. (2.3-3b). To see this, first consider an integral of the form

(2.3-5)

Its value can be determined graphically for any desired value of t, say t = to, by
plottingf(o, to) versus a and determining the area under the curve in the interval
from CI to b. An approximate value of the area can be determined by approximating
f ( o , to)as depicted in Fig. 2.3-4. The area of the approximation shown is the sum the
areas of the rectangles, which is

(2.3-6)

Fig. 2.3-3 Depiction of the unit impulse, δ(t).

³ Although we shall use δ(t) for the unit impulse function in this text in keeping with the fashion of the day, another notation used is u0(t).
⁴ Some problems that arise by considering the impulse width to be zero are discussed and illustrated in Section 10.5.

Fig. 2.3-4 Illustration for approximating an integral.

As ε → 0, the number of rectangles between α and β increases (note that the number is inversely proportional to ε) and the area under the step approximation of f(σ, t0) approaches that of the integral. Now, Eqs. (2.3-3b) and (2.3-4a) are, respectively, identical to Eqs. (2.3-6) and (2.3-5), in which the function being integrated is f(σ, t) = x(σ)h(t - σ) and the limits are α = -∞ and β = ∞. Thus yε(t0) → y(t0) as ε → 0. Consequently, the convolution integral, Eq. (2.3-4a), is the correct limit as ε → 0. It is important to keep in mind that the convolution integral was derived using the fundamental property of LTI systems given by Eqs. (2.2-2). Thus the convolution relation given by Eq. (2.3-4a) is not valid for systems that are not linear and/or not time-invariant.
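The limiting process can be watched numerically by evaluating the rectangle sum for smaller and smaller ε and comparing it with the integral it approximates. The Python sketch below is only an illustration; the choices h(t) = e^{-t}u(t), x(t) = u(t), and t0 = 2 are assumptions made here because the convolution integral then has the simple closed form y(t0) = 1 - e^{-t0}.

import numpy as np

# Assumed example: h(t) = e^{-t} u(t), x(t) = u(t), so y(t) = 1 - e^{-t} for t > 0.
h = lambda s: np.exp(-s) * (s > 0)
x = lambda s: (s > 0).astype(float)

t0 = 2.0
exact = 1.0 - np.exp(-t0)

# Rectangle-sum approximation in the spirit of Eqs. (2.3-5) and (2.3-6):
# area under x(sigma) h(t0 - sigma) ~= sum over n of eps * x(n*eps) * h(t0 - n*eps).
for eps in (0.5, 0.1, 0.01, 0.001):
    n = np.arange(-10.0 / eps, 10.0 / eps)     # rectangle midpoints n*eps covering (-10, 10)
    approx = np.sum(eps * x(n * eps) * h(t0 - n * eps))
    print(f"eps = {eps:6.3f}:  rectangle sum = {approx:.6f}   (integral = {exact:.6f})")

As ε shrinks, the printed sums approach the value of the integral, just as yε(t0) approaches y(t0) in the argument above.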
From our derivation, we note that y(t) = h(t) when x(t) = δ(t). Note in this
statement that t = 0 is the center of the unit impulse so that t = 0 for h(t) is the
instant the unit impulse is applied. Once the unit-impulse response, h(t), of a given
LTI system is known, the system response, y(t), to any input, x(t), can be determined
from the convolution integral, Eq. (2.3-4a). Thus, the convolution integral comple-
tely characterizes the mapping of inputs to outputs of an LTI system. Also, the
specific mapping of a particular LTI system is determined by its unit-impulse
response. Consequently, as we shall see, all properties of a particular LTI system
such as causality and stability are determined by its unit-impulse response. It is for
these reasons that the convolution integral and the unit impulse response are of
central importance in the theory of LTI systems. However, before examining the
convolution integral and its use, we shall examine the unit impulse in more detail.

2.4 THE UNIT-IMPULSE SIFTING PROPERTY

To better understand the unit impulse, consider the limit as ε → 0 of Eq. (2.3-3a). In accordance with Eq. (2.3-1), the limit of the left-hand side is x(t). Thus, using our discussion in Section 2.3, the limit as ε → 0 of Eq. (2.3-3a) is

x(t) = ∫_{-∞}^{∞} x(σ) δ(t - σ) dσ          (2.4-1)

Note that this result states that the convolution of any function with a unit impulse is
equal to that function. We will verify the validity of this result by actually performing
the integration using the concepts discussed in Section 2.3.
Now, as discussed in Section 2.3, the width of δ(t) is 0+, which means that Eq. (2.4-1) is just a shorthand notation for the limit

x(t) = lim_{ε→0} ∫_{-∞}^{∞} x(σ) δε(t - σ) dσ          (2.4-2)

We thus must perform the integration given in Eq. (2.4-2) and then let ε → 0 to obtain the value of the integral in Eq. (2.4-1). For this, first note that Eq. (2.4-2) states that for any given value of t, say t = t0, x(t0) is the limit as ε → 0 of the area under the product of x(σ) and δε(t0 - σ).
To illustrate the integration procedure, consider the waveform x(t) to be that shown in Fig. 2.3-1. Then the graph of x(σ) versus σ is as shown in Fig. 2.4-1. Note that the graph of x(σ) versus σ in Fig. 2.4-1 is identical with the graph of x(t) versus t in Fig. 2.3-1. The reason, as discussed in Section 1.5, is that a function is simply a rule by which the value of the number in the parentheses is mapped into a value. Thus, in accordance with our discussion in Chapter 1, a function is a many-to-one mapping of numerical values into numerical values. For our present example, x(t) is the value obtained by applying the rule, which is denoted by x(·), to the number t. The graph of x(t) versus t in Fig. 2.3-1 is simply a graph of this rule. Consequently, if σ0 = t0, then x(σ0) = x(t0) so that the graph of x(σ) is as shown in Fig. 2.4-1.
We now require a graph of δε(t0 - σ). From Fig. 2.3-2, this is a rectangle with a width of ε and with its midpoint at the value of σ for which t0 - σ = 0. Because we shall be taking the limit as ε → 0, we shall consider ε to be infinitesimal so that δε(t0 - σ) is an infinitesimally wide rectangle with its midpoint at σ = t0. A graph of δε(t0 - σ) thus is as shown in Fig. 2.4-2.
Using Figs. 2.4-1 and 2.4-2, the graph of the product x(σ)δε(t0 - σ) drawn for t0 > 0 is as shown in Fig. 2.4-3. Now, if x(t) is differentiable about t = t0, the graph of x(σ) is a straight line in the ε-wide region about σ = t0. The reason is that ε is infinitesimal and the first approximation of a differentiable function x(σ) is a straight line with a slope equal to the derivative of the function at the point σ = t0. Consequently, the graph of the product x(σ)δε(t0 - σ) is a trapezoid as shown in Fig. 2.4-3.

Fig. 2.4-1 Illustration for the evaluation of Eq. 2.4-1.



Fig. 2.4-2 Graph of δε(t0 - σ).

The value of an integral is the area under the graph of the function being integrated. For our case, the integral in Eq. (2.4-2) is just the area of the trapezoid in Fig. 2.4-3, which is equal to the trapezoidal width, ε, times its height at its midpoint, x(t0)/ε. Thus the value of the integral in Eq. (2.4-2) is x(t0). Because this value does not change as ε becomes smaller, we note that the limit as ε → 0 of the integral in Eq. (2.4-2) is x(t0). This is in accordance with the result given by Eq. (2.4-1), which is the shorthand notation for Eq. (2.4-2).
Our evaluation of Eq. (2.4-1) was for the case in which the function x(t) is differentiable at t = t0. What is the result if x(t) is discontinuous at t = t0? For this, consider x(t) to be discontinuous at t = t0 with a left-hand limit at t0 equal to A and a right-hand limit at t0 equal to B. As in Section 2.3, we express this as x(t0-) = A and x(t0+) = B. Using the procedure discussed above, the graph of x(σ)δε(t0 - σ) is as shown in Fig. 2.4-4. As in Fig. 2.4-3, the width of the figure, which is infinitesimal, has been enlarged greatly for ease of viewing. Again, because x(t) is differentiable in the infinitesimal interval to the right and also to the left of σ = t0, the graph is a straight line in each of these regions as shown. However, note that the straight lines do not necessarily have the same slope because the derivative of x(t) to the left and right of the discontinuity is not necessarily the same. The area under the curve of Fig. 2.4-4 is the value of the integral in Eq. (2.4-2). The area under the curve is the sum of the areas of the two trapezoids which, for infinitesimal ε, is equal to (1/2)[x(t0 - ε/2) + x(t0 + ε/2)]. We obtain the value of the integral in Eq. (2.4-1) by taking the limit as ε → 0. Thus the value of the integral in Eq. (2.4-1) is

(1/2)[x(t0-) + x(t0+)]          (2.4-3)

We thus note that, at a discontinuity, the value of the convolution of a function with a unit impulse is equal to the average of the left- and right-hand limits of the function at

Fig. 2.4-3 Graph of x(σ)δε(t0 - σ).




Fig. 2.4-4 Graph of x(σ)δε(t0 - σ) for the case in which x(t) is discontinuous at t = t0.

the discontinuity. At a discontinuity, we shall define the value of the function to be the average of the left- and right-hand limits because then Eq. (2.4-1) is satisfied for all values of t. For example, if x(t) = u(t), then the value of the convolution integral in Eq. (2.4-1) at t = 0 equals (1/2)[0 + 1] = 1/2. In my definition of the unit step, Eq. (1.4-6), the reason I defined u(0) = 1/2 is that Eq. (2.4-1) is satisfied for all values of t. The technique described above is the proper method that should be used to evaluate integrals involving impulses because, as discussed in Section 2.3, the impulse is defined only in terms of ε → 0.
Observe from Eq. (2.4-1) that the integral of any function times an impulse is equal to the value of the function evaluated at the location of the impulse. For our case, the function being integrated is x(σ) and the impulse is located at σ = t so that the value of the integral is x(t). This result is called the sifting property of the impulse because it can be used to "sift out" the value of a function at any desired location as in Eq. (2.4-1). The sifting property of the impulse is an important property that will be used at several places in this text.⁵
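The sifting property and the average-value result of Eq. (2.4-3) can both be checked numerically by evaluating the integral of Eq. (2.4-2) for shrinking ε. The Python sketch below is only an illustration; the smooth test function, the point t0, and the use of the unit step as the discontinuous example are choices made here, not examples from the text.

import numpy as np

def delta_eps(t, eps):
    # Unit-area rectangle of width eps centered at t = 0 (the delta_eps of Fig. 2.3-2).
    return (np.abs(t) < eps / 2) / eps

def sifted_value(x, t0, eps):
    # Numerically evaluate the integral of x(sigma)*delta_eps(t0 - sigma) over sigma, as in Eq. (2.4-2).
    ds = eps / 2000.0
    sigma = np.arange(t0 - eps, t0 + eps, ds)    # the integrand vanishes outside this range
    return np.sum(x(sigma) * delta_eps(t0 - sigma, eps)) * ds

x_smooth = lambda s: np.cos(3 * s) + s**2        # differentiable at t0
u = lambda s: (s > 0) + 0.5 * (s == 0)           # unit step with u(0) = 1/2

t0 = 0.7
for eps in (0.1, 0.01, 0.001):
    print(f"eps = {eps:5.3f}:  integral = {sifted_value(x_smooth, t0, eps):.4f}   x(t0) = {x_smooth(t0):.4f}")
for eps in (0.1, 0.01, 0.001):
    print(f"eps = {eps:5.3f}:  integral with the step at t = 0: {sifted_value(u, 0.0, eps):.3f}")

The first set of values approaches x(t0); the second approaches the average, 1/2, of the one-sided limits of the step, in agreement with Eq. (2.4-3).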

2.5 CONVOLUTION

The response, y(t), to the input, x(t), of an LTI system was shown in Section 2.3 to
be given by the convolution integral, Eq. (2.3-4a):

y(t) = ∫_{-∞}^{∞} x(σ) h(t - σ) dσ          (2.5-1)

where h(t) is the LTI system unit-impulse response. Note that the output for any
given input can be determined from Eq. (2.5-1) once the unit-impulse response, h(t),

⁵ A formal mathematical theory of a class of symbolic functions that have the same properties as those derived above for the unit impulse has been developed by the French mathematician Laurent Schwartz and published by him in Théorie des Distributions, Vols. 1 and 2, Actualités Scientifiques et Industrielles, Hermann & Cie., Paris, 1950 and 1951. I mention this because some texts on system theory include a
Hermann & Cie., Paris, 1950 and 1951. I mention this because some texts on system theory include a
short outline of the Schwartz theory of distributions. This is my only mention of it because it really is not
needed for a discussion of the unit impulse and its properties. My discussion in this section is
mathematically accurate and, because it is physically based, it also lends a better physical understanding
for system theory than the symbolic theory of Schwartz.

of a given LTI system is known. In this sense, the unit-impulse response of a given
LTI system completely characterizes the system mapping of inputs into outputs. For
this reason, the unit-impulse response will play a central role in our discussion of LTI
system theory. However, before beginning this discussion, we shall evaluate the
convolution integral for some cases in order to illustrate some techniques that can
be used for its evaluation and to gain some insight for interpreting the integral.

Example 1 For our first example, consider an LTI system with the unit-impulse
response

h1(t) = A δ(t - t0)          (2.5-2)

From Eq. (2.5-l), its response, y(t), to an input, x(t), is

y(t) = ∫_{-∞}^{∞} x(σ) A δ(t - t0 - σ) dσ          (2.5-3)

Using the sifting property of the unit impulse developed in the last section, the value
of the integral is equal to the value of the function multiplying the unit impulse
evaluated at the location of the unit impulse. The function multiplying the unit
impulse is Ax(σ). The unit impulse is located at the value of σ for which its argument is zero; this is the value of σ for which t - t0 - σ = 0, so that the unit impulse is located at σ = t - t0. Thus we obtain

y(t) = A x(t - t0)          (2.5-4)

We observe that the output of an LTI system with a unit-impulse response given by
Eq. (2.5-2) is its input multiplied by a factor of A and delayed by an amount of t0 seconds. Using the block diagram representation developed in Section 1.5, this system can be represented as shown in Fig. 2.5-1. Note that the system is simply an ideal amplifier with a gain of A in tandem with a delay of t0 seconds. It is called
an ideal amplifier because the input waveform is amplified without any distortion.

Example 2 For our second example, consider an LTI system with the unit-impulse
response

h2(t) = K u(t)          (2.5-5)


Fig. 2.5-1 Block diagram representation of the system with the unit-impulse response h1(t).

Using the convolution integral, Eq. (2.5-l), the response, y(t), of the given system to
an input, x(t), is

y(t) = ∫_{-∞}^{∞} x(σ) K u(t - σ) dσ          (2.5-6)

In accordance with the definition of the unit step, Eq. (1.4-6), it has the value zero
when its argument is negative and has the value one when its argument is positive.
Thus

u(t - σ) = 0 for σ > t;  u(t - σ) = 1 for σ < t          (2.5-7)

Consequently, the function being integrated is zero for σ > t so that the integral can be expressed as

y(t) = ∫_{-∞}^{t} K x(σ) dσ          (2.5-8)

The value of the unit step at σ = t is 1/2. However, it has no effect on the value of the integral because the area under a point is zero. Equations (2.5-8) and (1.5-1) are identical. Thus, a block diagram of this system is as shown in Fig. 2.5-2, which is the same as Fig. 1.5-2. Note that the system is simply an ideal integrator with a gain of K.

Example 3 For our third example, consider an LTI system with the unit-impulse
response

h3(t) = K r(t/T) = K   for 0 < t < T          (2.5-9)
                 = 0   for t < 0 or t > T

In this equation, r(t) is the unit-rectangle function defined in Section 1.5. To deter-
mine the system response to an input, x(t), we again use the convolution integral, Eq.
(2.5-1). For this we require h3(t - σ) as a function of σ for a given value of t. From Eq. (2.5-9) above we have that h3(t) = K only when 0 < t < T. Consequently, from our discussion of a function as a mapping, we have that h3(t - σ) = K only when
Fig. 2.5-2 Block diagram representation of the system with the unit-impulse response h2(t).

0 < t - σ < T. Solving for the range of σ, we have that h3(t - σ) = K only when t - T < σ < t. Keep in mind that t is a constant in the integration so that we are determining the output, y(t), for a particular value of t. Thus

h3(t - σ) = 0   for σ < t - T
          = K   for t - T < σ < t          (2.5-10)
          = 0   for σ > t

Now, from the convolution integral, Eq. (2.5-1), for a given value of t, y(t) is equal to the area under the curve x(σ)h3(t - σ). For our case, this product is

x(σ)h3(t - σ) = 0        for σ < t - T
              = K x(σ)   for t - T < σ < t          (2.5-11)
              = 0        for σ > t

so that

y(t) = K ∫_{t-T}^{t} x(σ) dσ          (2.5-12)

We note for this case that the LTI system response at the time t is K times the integral
of the input over the previous T seconds.
To illustrate the evaluation of this integral, consider the case in which the input is the rectangular pulse x(t) = A r(t/T):

y(t) = KA ∫_{t-T}^{t} r(σ/T) dσ          (2.5-13)

To perform this integration, it is best to draw a sketch such as the one shown in Fig. 2.5-3. With such a sketch, the value of the integral for each value of t can easily be determined. For example, if t < 0, then the integral is over only negative values of σ. The value of the integral is zero because, from the sketch, r(σ/T) = 0 for σ < 0. Thus y(t) = 0 for t < 0. Now consider the value of t to be in the range 0 < t < T. For t in this range, note that t - T < 0 and t > 0. Thus the value of the integral is the shaded area shown in Fig. 2.5-4a. This area is equal to t so that y(t) = KAt for 0 < t < T. Now consider the value of t to be in the range T < t < 2T. For t in this range, note that 0 < t - T < T and t > T. Thus the value of the integral is the shaded area shown in Fig. 2.5-4b. This area is T - (t - T) = 2T - t so that

Fig. 2.5-3 Graph of r(σ/T).

Fig. 2.5-4b Relative to the integration for T < t < 2T.

y(t) = KA(2T - t) for T < t < 2T. Lastly, consider the value of t to be greater than 2T. For t in this range, note that t - T > T so that the integral is over values of σ larger than T. The value of the integral is zero because, from the sketch, r(σ/T) = 0 for σ > T. Thus y(t) = 0 for t > 2T. Combining the expressions for y(t) in the various ranges, we have

y(t) = 0             for t < 0
     = KAt           for 0 < t < T
     = KA(2T - t)    for T < t < 2T          (2.5-14)
     = 0             for t > 2T

A graph of the output, y(t), for the example is shown in Fig. 2.5-5.
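The piecewise result of Eq. (2.5-14) can be confirmed by discretizing the convolution integral; a scaled np.convolve is a rectangle-sum approximation of Eq. (2.5-1). In the Python sketch below, the values K = 2, A = 1, and T = 1 are arbitrary illustration choices made here, not values from the text.

import numpy as np

K, A, T, dt = 2.0, 1.0, 1.0, 1e-3
t = np.arange(0.0, 3 * T, dt)

h3 = K * ((t > 0) & (t < T))                 # Eq. (2.5-9): K for 0 < t < T
x = A * ((t > 0) & (t < T))                  # the input pulse A r(t/T)

# Discretized convolution integral: y(t) ~= dt * sum over k of x(k*dt) h3(t - k*dt)
y_num = dt * np.convolve(x, h3)[:len(t)]

# Eq. (2.5-14), evaluated piecewise on the same grid
y_exact = np.select([t < 0, t < T, t < 2 * T], [0.0, K * A * t, K * A * (2 * T - t)], default=0.0)

print("max |numerical - Eq. (2.5-14)| =", np.max(np.abs(y_num - y_exact)))

The printed maximum difference is on the order of the step size dt, as expected for a rectangle-sum approximation of the integral.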

Example 4 For our fourth example, consider an LTI system with the unit-impulse
response

h4(t) = e^{-αt} u(t), where α > 0          (2.5-15)

We shall determine the LTI system response, y(t), to the input

x(t) = e^{-αt} u(t), where α > 0          (2.5-16)

Fig. 2.5-5 Graph of y(t) for example 3.



Fig. 2.5-6 Graph of x(t) and h4(t) for example 4.

For this example, I have chosen x(t) = h4(t). A graph of this function is shown in
Fig. 2.5-6.
As in the previous examples, the system response, y(t), will be determined using
the convolution integral, Eq. (2.5-1). The variable of integration is σ, so that t is a constant in the integral. Thus the value of the integral, y(t), is determined by first choosing ranges of values of t to use for the determination of y(t). The ranges of values of t to choose are determined, as was done in Example 3, by examining a sketch of x(σ)h4(t - σ) for various values of t. For this, a sketch of h4(t - σ) is
We first note from Fig. 2.5-6 that h4(t) = 0 for t < 0. Thus, in accordance with
our discussion of a function as a rule, we have that h4(t - a) = 0 when the value
of the argument ( t - a) < 0. For a given value o f t , this is for a > t as seen in Fig.
2.5-7. To determine the graph for a < t, we evaluate h4(t - a) for a = t - a, with
a > 0. This point on the a axis is shown in Fig. 2.5-7. For this value of a we have
that ( t - a) = [t - ( t - a)] = a so that h4(t - a) = h4(a). From Eq. (2.5-15), this is
equal to e-'' as seen in Fig. 2.5-7. Note that the graph of h4(t - a) versus a can be
obtained by first folding the graph of h4(t) about t = 0 and placing the origin at the
point t on the a axis. It can be seen that for any h(t) the graph of h(t - a) versus a is
obtained by folding and shifting as described above. For this reason, mathematicians
often use the descriptive German word Faltung for convolution because Faltung in
German means a folding over. Now, from the convolution integral, Eq. (2.5-l), for a
given value oft, y ( t ) is equal to the area under the curve x(a)h(t - a). This product is
obtained by multiplying the graph of x(a) shown in Fig. 2.5-8 and the graph of
h4(t - a) shown in Fig. 2.5-7 for a particular value o f t . As t increases, it is seen
from Fig. 2.5-7 that the graph of h4(t - o) moves to the right. Thus we begin by
considering the product curve x(a)h4(t - a) for a large negative value of t and
observe any changes of the product curve as the value o f t is increased. For our

Fig. 2.5-7 Graph of h4(t - σ) for example 4.



Fig. 2.5-8 Graph of x(σ) for example 4.

example, we note that the product curve is zero for t < 0. This is noted because then, from Fig. 2.5-8, x(σ) = 0 for σ < 0, so the product curve is zero for σ < 0; also, from Fig. 2.5-7, h4(t - σ) = 0 for σ > t, so the product curve is zero for σ > t. Because the product curve is zero for all σ, the area under the product curve, which is y(t), is zero for t < 0. We now consider positive values of t. If t > 0, then the product curve is as shown in Fig. 2.5-9. The curve is zero for σ < 0 because x(σ) = 0 for σ < 0; also the product curve is zero for σ > t because h4(t - σ) = 0 for σ > t. For the interval 0 < σ < t, the product curve is x(σ)h4(t - σ) = e^{-ασ} e^{-α(t-σ)} = e^{-αt}, which is a constant because we are evaluating the convolution integral for a particular value of t. Figure 2.5-9 is a graph of x(σ)h4(t - σ) for t > 0. Now y(t) is equal to the area under the product curve. From Fig. 2.5-9, this is the area of the rectangle, t e^{-αt}. Collecting our results for y(t), we have

y(t) = 0           for t < 0
     = t e^{-αt}   for t > 0          (2.5-17)

Using the unit-step function, y(t) can be written in the form

y(t) = t e^{-αt} u(t)          (2.5-18)

Figure 2.5-10 is a graph of y(t).
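The fold-and-shift picture of this example can also be carried out numerically at a few fixed values of t; the area under the product curve should equal t e^{-αt}. In the Python sketch below, α = 1 and the integration grid are assumptions made only for illustration.

import numpy as np

alpha, dt = 1.0, 1e-4                          # illustration values only
sigma = np.arange(-2.0, 10.0, dt)              # integration variable of Eq. (2.5-1)

x  = lambda s: np.exp(-alpha * s) * (s > 0)    # Eq. (2.5-16)
h4 = lambda s: np.exp(-alpha * s) * (s > 0)    # Eq. (2.5-15)

for t in (0.5, 1.0, 3.0):
    # h4(t - sigma) is h4 folded about sigma = 0 and shifted to sigma = t;
    # its product with x(sigma) is the constant e^{-alpha t} on 0 < sigma < t.
    integrand = x(sigma) * h4(t - sigma)
    y_t = np.sum(integrand) * dt               # area under the product curve
    print(f"t = {t}:  area = {y_t:.4f},  t*exp(-alpha*t) = {t * np.exp(-alpha * t):.4f}")

The two printed values agree to within the discretization error of the rectangle sum, confirming Eq. (2.5-18) at the sampled values of t.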

The above four examples illustrate some of the important techniques that can be
used to evaluate the convolution integral. In all the examples, the function h(t) was
folded and shifted to evaluate the convolution integral. However, the same result

Fig. 2.5-9 Graph of x(σ)h4(t - σ) for t > 0.


Fig. 2.5-10 Graph of y(t).

would have been obtained if x(t) were folded and shifted instead of h(t). To show this, we begin with the convolution integral, Eq. (2.5-1), which is repeated below:

y(t) = ∫_{-∞}^{∞} x(σ) h(t - σ) dσ          (2.5-1)

We now make the change of variable τ = t - σ. Because t is a constant during the integration, we have that dτ = -dσ (the minus sign is absorbed by reversing the limits of integration). With this change of variable in Eq. (2.5-1), it becomes

y(t) = ∫_{-∞}^{∞} h(τ) x(t - τ) dτ          (2.5-19)

Note that the difference between Eq. (2.5-1) and Eq. (2.5-19) is that the roles of x(t)
and h(t) have been interchanged. Thus, the same result, y(t), would be obtained if
x(t), instead of h(t), were folded and shifted. This is an important result that can, at
times, be used to advantage in evaluating the convolution integral. To appreciate the
computational difference, you should evaluate y(t) in the four examples given above
by folding and shifting x(t) instead of h(t).
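The equivalence of Eqs. (2.5-1) and (2.5-19) can also be seen numerically: at any fixed t, the area under x(σ)h(t - σ) equals the area under h(τ)x(t - τ) to within the discretization error. In the Python sketch below, the rectangular input and exponential unit-impulse response are arbitrary illustration choices made here.

import numpy as np

dt = 1e-5
s = np.arange(-5.0, 15.0, dt)                    # dummy variable of integration

x = lambda v: 1.0 * ((v > 0) & (v < 2.0))        # rectangular input, width 2 (illustration)
h = lambda v: np.exp(-v) * (v > 0)               # exponential unit-impulse response

for t in (0.5, 2.0, 4.0):
    fold_h = np.sum(x(s) * h(t - s)) * dt        # Eq. (2.5-1): fold and shift h
    fold_x = np.sum(h(s) * x(t - s)) * dt        # Eq. (2.5-19): fold and shift x
    print(f"t = {t}:  fold h gives {fold_h:.4f},  fold x gives {fold_x:.4f}")

Both columns agree at each t, which is the computational content of the change of variable above.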
Note that the LTI system response to any input can be determined by use of the
convolution integral once the system unit-impulse response is known. It is in this
sense that the unit-impulse response, h(t), completely characterizes the LTI system
mapping of inputs to outputs. Consequently, h(t) is the fundamental function we
shall use to determine and study properties of LTI systems. An understanding of the
techniques and concepts discussed in this and the last section is important for this
development.

PROBLEMS

2-1 Use Eq. (2.1-1) to show that the response of a linear system to the input

x(t) = Σ_n Cn xn(t)

in which the Cn are arbitrary constants is

y(t) = Σ_n Cn yn(t)

in which xn(t) → yn(t).

2-2 A system with the input x(t) and corresponding response y(t) is composed of
an ideal amplifier with a gain equal to A which is connected in tandem with a
soft limiter with a characteristic shown in Fig. 1.4-10 in which xo = 5 and
yo = 10. The system input is a class of waveforms for which Ix(t)l < B. For
what values of A and B can the system be modeled as a linear system
irrespective of the order of the tandem connection? What is the model of the
linear system?

2-3 Show that the system described by Eq. (2.1-4) is homogeneous but is not
linear.

2-4 (a) For the input, x(t), and corresponding response, y(t), show that the system shown in Fig. 2.1-2 is not a linear system if E ≠ 0.
(b) To circumvent this difficulty, define the system input to be Δx(t) = x1(t) - x2(t) and the corresponding output to be Δy(t) = y1(t) - y2(t), in which x1(t) → y1(t) and x2(t) → y2(t). Show that the mapping Δx(t) → Δy(t) is a linear mapping. Thus note that we can analyze the given system using linear analysis by changing the definition of the input and the output.
(c) Now choose x2(t) = 0. Determine the expression for Δy(t) in terms of x1(t) and the parameters of the given system. The analysis of linear electronic circuits in which the dc supply voltages contribute to the circuit output can be linearized by this procedure.

2-5 A digital computer can only store sequences of numbers. Thus, only sample
values of a continuous waveform can be stored on a digital computer. This is
accomplished by a sampler that samples the waveform every T seconds. For
the input waveform, f(t), the sampler output is f(nT), which are the values of the waveform, f(t), at the times t = nT for n = 0, ±1, ±2, ....
Show that a sampler is a linear, time-varying (LTV) system.

2-6 The input, x(t), and response, y(t), of a given system are related by the
constant coefficient differential equation

(a) Show that the system is not linear if c # 0.


(b) Now let c = 0. Prove that the given system is linear and time-invariant.

2-7 In an experiment, the response of an LTI system to the input x1(t) = r(t) is
y1(t) = r(t) − r(t − 1). Determine the system response, y2(t), to the input
x2(t) = r(t/2). Sketch and label all important times and amplitudes of y2(t).

2-8 In an experiment, the response of an LTI system to the input x1(t) = r(t) is
y1(t) = sin(πt)u(t). Determine the system response, y2(t), to the input
x2(t) = r(t/2).

2-9 In an experiment, the response of an LTI system to the input x1(t) = u(t) is
y1(t) = sin(πt)u(t). Determine the system response, y2(t), to the input
x2(t) = r(t) − r(t − 2).

2-10 In an experiment on an LTI system, it is observed that the system response to
the input x1(t) = r(t/2) is y1(t) shown below.

Determine and sketch the system response to the following inputs:

2-11 The response of a given LTI system to the input xa(t) = sin(πt)u(t) is

ya(t) = 2(1 − |t|) for |t| < 1, and ya(t) = 0 otherwise.

Use the basic properties of linearity and time invariance to determine and
sketch the system response, yb(t), to the input xb(t) = sin(πt)r(t).

2-12 For the input x1(t) = u(t) of an LTI system, it is observed that the
corresponding output is y1(t) = r(t/2). The waveform
x2(t) = u(t) + 2 Σ_{n=1}^{∞} (−1)^n u(t − n) is now used as the LTI system input. Sketch
x2(t) and determine the corresponding system response, y2(t).

2-13 Let g ( t ) = Sketch each of the following functions

2-14 Let x(t) = [1 − |t|]u(1 − |t|).

(a) Sketch x(t).
(b) Make a piecewise-constant approximation of x(t) as in Fig. 2.3-1 and
show that Eq. (2.3-1) follows but that x'(t) ≠ lim_{ε→0} xε'(t). This example
illustrates that the convergence of a sequence of functions to a given
function does not necessarily imply that the derivative of the sequence of
functions also converges to the derivative of the given function.

2-16 For this problem, instead of defining δε(t) as a rectangle, define it as the
triangle

δε(t) = (1/ε)[1 − |t|/ε] u(1 − |t|/ε)

(a) Show that the resulting approximation of x(t), xε(t), connects the values
x(nε) by straight lines. Thus show that Eq. (2.3-1) is valid so that δε(t)
defined in this problem also can be used as the basis for the definition of
an impulse.
(b) Note that δε(t) defined in this problem is once differentiable. Sketch δε'(t).
(c) Use the result of part b to show that, with this definition of δε(t), we have
that if x(t) is differentiable, then lim_{ε→0} xε'(t) = x'(t).

(d) Use the result of part c to show that, equivalent to Eq. (2.4-2), we have

This result is used in mechanics and field theory where δ'(t) is called the
doublet.

2-17 It is desired to experimentally determine the unit-impulse response of a given


system by using an input rectangular pulse. If the width is sufficiently small,
then, as discussed in the text, the system response is approximately the
system impulse response. In this problem we examine a special case to
observe the effect of the pulse width. For this, let the LTI system unit-impulse
response be h(t) = r(2t) i.i'6)and
+ Y f- let the system input be

Determine the system response, y(t), for the following values of 6:


(a) 6 = 0.4
(b) 6 = 0.1
(c) 6 = $

2-18 An LTI system with the input x(t) and corresponding output y(t) is shown
below. Determine and sketch the unit-impulse response, h(t), of the given
system. Label all important amplitudes and times of h(t).

2-19 An LTI system with the input x(t) and corresponding output y(t) is shown
below. Determine and sketch the unit-impulse response, h(t), of the given
system. Label all important amplitudes and times of h(t).

2-20 An LTI system with the input x(t) and corresponding output y(t) is shown
below. Determine and sketch the unit-impulse response, h(t), of the given
system. Label all important amplitudes and times of h(t).

2-21 The unit-impulse response of an LTI system is h(t) = Ar(t), where A > 0.
Use convolution to determine the system response, y(t), to the input

Sketch y(t) and label the values and times of all maxima and minima.

2-22 The unit-impulse response of an LTI system is h(t) = r(t). Use convolution to
determine the system response, y(t), to the input x(t) = r(t) − r(t − 1).
Sketch y(t) and label the values and times of all maxima and minima.

2-23 System A is an LTI system with the unit-impulse response ha(t) =
r(t) − r(t − 1). Determine its response ya(t) to the input x(t) = r(t/2).

2-24 The unit impulse response of an LTI system is h(t) = Ae^{−at}u(t). Use
convolution to determine the system response, y(t), to the input
x(t) = Be^{−bt}u(t). Sketch y(t) and label the values and times of all maxima
and minima.

2-25 The unit-impulse response of an LTI system is h(t) = e^{−αt}r(t/T), where
α ≥ 0. Use convolution to determine the system response, y(t), to the input
x(t) = 2e^{−αt}u(t). Note that α is the same constant in h(t) and x(t). Sketch y(t)
for the case where α = 0. Label the values and times of all maxima and
minima.

2-26 The unit-impulse response of an LTI system is h(t) = Ae^{−αt}r(t/T), where
T > 0 and α ≥ 0. Use convolution to determine the system response, y(t), to
the input x(t) = Bu(t). Sketch y(t) and label the values and times of all
maxima and minima.

2-27 The unit-impulse response of an LTI system is h(t) = r(t). Use convolution to
determine the system response, y(t), to the input x(t) = cos(2πt)r(t/2).
Sketch y(t).

2-28 The unit-impulse response of an LTI system is h(t) = Ar(t/T). Determine the
system response, y(t), to the input x(t) = sin(ωt)u(t) in which ω = 2πf for
each of the following values of T.

(a) T = 1/(2f)
(b) T = 1/f
(c) T = 3/(2f)
(d) T = 2/f
2-29 The unit impulse response of an LTI system is h(t) = 2e^{−t}u(t) − δ(t).
Determine the system response to the input x(t) = e^{t}u(−t).

2-30 The unit impulse response of an LTI system is h(t) = B-( 2 21' (>; .
r
Determine the system response, y(t), to the input x(t) = Ar(t).

2-31 Determine the unit-impulse response of the feedback system shown below.

2-32 Use convolution to determine the response of the feedback system of problem
2-31 for the following inputs:
CHAPTER 3

PROPERTIES OF LTI SYSTEMS

3.1 TANDEM CONNECTION OF LTI SYSTEMS

One way to determine system properties is to study the effect of connecting the
system in various ways. For this, we shall study the tandem connection of LTI
systems in this section. A tandem connection of two systems is one in which the
output of the first system is the input of the second system as shown in Fig. 3.1-1.
The two systems are also said to be connected in cascade. In system theory, the
connection of systems is considered not to affect the characteristics of the individual
systems. Thus, in the tandem connection shown in Fig. 3.1-1, the input-output
relation of system Sa is not affected by the connection of system Sb. Note that the
tandem connection of circuits may not satisfy this condition. For example, consider
the tandem connection of two resistor circuits as shown in Fig. 3.1-2. The output of
the first circuit is y(t) for the input x(t), and the output of the second circuit is z(t) for
the input y(t). If the second circuit were not connected in tandem, then

y(t) = [R2/(R1 + R2)] x(t)      (3.1-1)

However, with the second circuit connected in tandem, the input resistance (R3 + R4)
of the second circuit is in parallel with the resistor R2 of the first circuit so that the
value R2 in Eq. (3.1-1) must be replaced by

R2(R3 + R4)/(R2 + R3 + R4)      (3.1-2)

Thus the output of the first circuit, y(t), is affected by the tandem connection of the
second circuit. We discussed differences between circuit theory and system theory
concepts in Sections 1.3 and 2.1. This is another essential difference between circuit

Fig. 3.1-1 Two systems connected in tandem.

theory and system theory concepts. However, the two circuits of Fig. 3.1-2 could be
made to satisfy the system definition of a tandem connection by connecting a unity-
gain isolation amplifier between the two circuits. This is sometimes done in circuit
design. Generally, before attempting to apply system concepts to a circuit, it is
important to first determine whether all the system theory definitions are valid for
the given circuit.
For our study of the tandem connection of systems, the first observation to make
concerning Fig. 3.1-1 is that the operation S contained within the dotted lines is a
system because it is a many-to-one mapping of x(t) to z(t). The reason is that because
Sa is a system, it is a many-to-one mapping of x(t) to y(t), and because Sb is a system,
it also is a many-to-one mapping of y(t) to z(t). Now, a many-to-one mapping of a
many-to-one mapping is a many-to-one mapping, so that the tandem connection, S,
is a many-to-one mapping of x(t) to z(t). Thus we have shown that S is a system.
We now note that if Sa and Sb are time-invariant (TI) systems (linear or
nonlinear), then S is a TI system. From our discussion in Section 1.2, we can
show that S is a TI system by showing that if for any input we have x(t) → z(t),
then for any time shift t0 we obtain x(t − t0) → z(t − t0). The arrow is the shorthand
notation introduced in Section 2.1. To show this, we note in Fig. 3.1-1 that
x(t − t0) → y(t − t0) because we are given that Sa is a TI system, and also
y(t − t0) → z(t − t0) because we are given that Sb is a TI system. Thus we have
shown that x(t − t0) → y(t − t0) → z(t − t0), so that S is a TI system. Observe that
the converse is not necessarily true. That is, if S is TI, it is not necessarily true that Sa
and Sb are TI. As a simple example, consider the case for which Sa is a time-varying
ideal amplifier with the output y(t) = a(t)x(t) and Sb also is a time-varying ideal
amplifier with the output z(t) = b(t)y(t) in which a(t)b(t) = A, where A is a constant.
The output of the system S then is z(t) = b(t)a(t)x(t) = Ax(t), so that S then is an
ideal amplifier with constant gain.
Next, we show that if Sa and Sb are linear systems (time-invariant or time-vary-
ing), then S is a linear system. In accordance with our discussion in Section 2.1, we

Fig. 3.1-2 The tandem connection of two resistor circuits.



show that S is a linear system by showing that if x1(t) → z1(t) and x2(t) → z2(t) for
any inputs x1(t) and x2(t), then

C1x1(t) + C2x2(t) → C1z1(t) + C2z2(t)

for any complex constants C1 and C2. This is shown by first noting that

C1x1(t) + C2x2(t) → C1y1(t) + C2y2(t)

because Sa is a linear system. Also,

C1y1(t) + C2y2(t) → C1z1(t) + C2z2(t)

because Sb is a linear system. Thus we have shown that S is linear because

C1x1(t) + C2x2(t) → C1y1(t) + C2y2(t) → C1z1(t) + C2z2(t)

Observe that the converse is not necessarily true. That is, if S is linear, it is not
necessarily true that Sa and Sb are linear. As a simple example, consider any case for
which Sa is a nonlinear system for which an inverse exists (so that Sa is a one-to-one
mapping of x(t) to y(t) in accordance with our discussion in Section 1.1) and Sb is
the inverse of Sa. Then z(t) = x(t), so that S is a linear system while Sa and Sb are
nonlinear systems. A specific example is the case for which the output of Sa is
y(t) = x³(t) and the output of Sb is z(t) = [y(t)]^{1/3}, which is the principal cube root of
its input.
Because the tandem connection of two linear systems is a linear system and also
the tandem connection of two TI systems is a TI system, we conclude that the
tandem connection of two LTI systems is an LTI system. Our concern in this chapter
is only with LTI systems. Thus we shall only consider the case for which both Sa and
Sb are LTI systems, so that S in Fig. 3.1-1 is an LTI system. In accordance with our
discussion in Section 2.3, the output of the system S, z(t), can then be expressed as
the convolution of its input, x(t), with its impulse response, h(t), as

z(t) = ∫_{−∞}^{∞} x(σ)h(t − σ) dσ      (3.1-3)

For convenience, we shall express the convolution integral, Eq. (3.1-3), using the
shorthand notation

z(t) = x(t) * h(t)      (3.1-4)

The star indicates the convolution integral of the two functions. In this notation, note
that it is the second function, h(t), which is folded and shifted in the integration.
The tandem connection of two LTI systems with unit-impulse responses ha(t) and
hb(t) is shown in Fig. 3.1-3. In accordance with our discussion in Sections 2.3 and

Fig. 3.1-3 Two LTI systems connected in tandem.

2.5, ha(t) and hb(t) completely characterize the two tandem-connected LTI systems.
Thus we should be able to express the unit-impulse response, h(t), of the tandem
connection in terms of only ha(t) and hb(t). To determine this relation, we'll use the
result developed in Section 2.3 that z(t) = h(t) when x(t) = δ(t) and the system is
initially at rest. Now, if x(t) = δ(t), then ya(t) = ha(t). Because z(t) = ya(t) * hb(t),
we then have z(t) = ha(t) * hb(t) when x(t) = δ(t). Consequently,

h(t) = ha(t) * hb(t)      (3.1-5)

It was shown in Section 2.5 [see Eq. (2.5-19)] that the value of the convolution is not
changed if the roles of the two functions being convolved are interchanged. Thus the
convolution integral in Eq. (3.1-5) also can be written in the form

h(t) = hb(t) * ha(t)      (3.1-6)

Before continuing, observe from Eqs. (3.1-5) and (3.1-6) that

ha(t) * hb(t) = hb(t) * ha(t)      (3.1-7)

Equation (3.1-7) is a statement that the convolution operation denoted by the asterisk
is commutative. Now, the expression for h(t) in Eq. (3.1-6) would have been
obtained if the order of connecting the LTI systems in Fig. 3.1-3 were reversed as
shown in Fig. 3.1-4. Because the unit-impulse response of the tandem system in Fig.
3.1-3 is the same as that in Fig. 3.1-4, we conclude that the two systems have the
same input-output relation. That is, for the same input, x(t), they both have the same
output, z(t). We shall call this the commutativeproperty ofLTI systems because it is a
consequence of the commutative property of the convolution operation. Note,

Fig. 3.1-4 Two LTI systems connected in tandem.



however, that the waveforms ya(t) and yb(t) are not the same because
ya(t) = x(t) * ha(t) while yb(t) = x(t) * hb(t). While the systems of Figs. 3.1-3 and
3.1-4 theoretically have the same input-output relation, it may not be so in practice.
The reason is, as discussed in Section 2.1, that a physical system can be modeled as
a linear system only if the amplitude of the system input is less than a certain value.
Thus, in determining the order to use in connecting two physical systems in tandem,
it is important to make certain that, for the order chosen, the maximum amplitude of
the waveform between the systems is within the range for which the second system
can be considered to be linear.
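The tandem-connection results of this section can also be checked numerically. The Python sketch below verifies that the two orders of connection give the same output and that both equal the output of a single system with h(t) = ha(t) * hb(t); the particular ha(t), hb(t), and test input are arbitrary illustrative choices and are not the systems of Fig. 3.1-3.

import numpy as np

dt = 0.001
t = np.arange(0.0, 10.0, dt)

ha = np.exp(-t)                   # assumed h_a(t) = e^{-t} u(t)
hb = np.where(t < 1.0, 1.0, 0.0)  # assumed h_b(t) = rectangle of width 1
x = np.sin(2 * np.pi * 0.3 * t)   # any test input

def conv(f, g):
    # discrete approximation of the convolution integral on the common grid
    return np.convolve(f, g)[:len(t)] * dt

z_ab = conv(conv(x, ha), hb)      # Sa followed by Sb
z_ba = conv(conv(x, hb), ha)      # Sb followed by Sa
z_h = conv(x, conv(ha, hb))       # single system with h = h_a * h_b

print(np.max(np.abs(z_ab - z_ba)), np.max(np.abs(z_ab - z_h)))  # both ~0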

3.2 A CONSEQUENCE OF THE COMMUTATIVE PROPERTY

The commutative property of LTI systems described by Eq. (3.1-7) leads to some
important properties of LTI systems. The important property we shall show and
discuss in this section is that if x(t) → y(t) for a given LTI system with the unit-
impulse response h(t), then x(t) * ha(t) → y(t) * ha(t). Also, if the LTI system is
modified so that its unit-impulse response is changed to h(t) * ha(t), then the
response of the modified system to the input x(t) is y(t) * ha(t).
To show these properties, consider the two systems shown in Fig. 3.2-1. Both
systems have the same input, x(t). By the commutative property of LTI systems, the
output, z(t), is the same for both systems. First, from Fig. 3.2-la, the response of the
system with the unit-impulse response h(t) is y (t ) to the input x(t). Now the output,
z(t), in Fig. 3.2-la is z(t) = y(t) * h,(t). Thus the output of the system with the unit-
impulse response h(t) in Fig. 3.2-lb also is z(t) = y(t) * h,(t); its input, however, is
noted to be y,(t) = x(t) * h,(t). We thus note for the LTI system with the unit-
impulse response h(t) in Fig. 3.2-lb that x(t) * h,(t) + y(t) * h,(t).
Figure 3.2-2 is a summary illustration of the results we have obtained. The basic
relation obtained from Fig. 3.2-la is illustrated in Fig. 3.2-2a. Using Fig. 3.2-lb, we
then obtained the relation shown in Fig. 3.2-2b. Also from Fig. 3.2-1, we obtain the
relation shown in Fig. 3.2-2c.
The results we just obtained imply some fundamental relations that will be
developed and examined in this and subsequent sections. We begin by considering

Fig. 3.2-1 Illustration for proof of LTI property.



Fig. 3.2-2 Illustrating the derived LTI relations

the case for which ha(t) = u(t). Then, in accordance with the result obtained in
Example 2 of Section 2.5 we have

x(t) * ha(t) = x(t) * u(t) = ∫_{−∞}^{t} x(σ) dσ      (3.2-2)

and

h(t) * ha(t) = h(t) * u(t) = ∫_{−∞}^{t} h(σ) dσ      (3.2-3)

Consequently, from the result summarized in Fig. 3.2-2b we have the result that if
x(t) → y(t) for a given LTI system with the unit-impulse response h(t), then

∫_{−∞}^{t} x(σ) dσ → ∫_{−∞}^{t} y(σ) dσ      (3.2-4)

As a specific example of this result, consider the case for which the input is
x(t) = δ(t). The system response then is the unit-impulse response so that
y(t) = h(t). Now,

∫_{−∞}^{t} δ(σ) dσ = { 0 if t < 0,  1/2 if t = 0,  1 if t > 0 } = u(t)      (3.2-5)

In accordance with our discussion in Section 2.4, Eq. (3.2-5) was obtained by using
δε(t) in the integral and then taking the limit of the result as ε → 0. Thus, we have
from Eq. (3.2-4) that

u(t) → ∫_{−∞}^{t} h(σ) dσ      (3.2-6)

For convenience, we call a system response to a unit step, s(t). We then have from
Eq. (3.2-6) that, for an LTI system,

s(t) = ∫_{−∞}^{t} h(σ) dσ      (3.2-7)

Now note that

s'(t) = (d/dt) s(t) = h(t)      (3.2-8)

This result suggests a practical method for experimentally determining the unit-
impulse response, h(t), of an LTI system. Normally, it is not practical to determine
h(t) directly. To make such a measurement, the input would have to be a very narrow
pulse. In accordance with our discussion in Section 2.4, if the input were
x(t) = Aδε(t), then y(t) ≈ Ah(t) at those values of t for which h(t) can be well
approximated by a straight line within the ε-region about the value of t. To be
specific, consider a case in which the graph of h(t) contains a sinusoidal wiggle
in which the frequency of the sinusoid is 10 kHz. As discussed in Section 1.4, the
sinusoid fundamental period is 100 μs. For y(t) ≈ Ah(t), we would have to choose ε
to be much less than 100 μs. Let us choose ε to be 10 μs. Thus the amplitude of the
input pulse, Aδε(t), is A/ε = 10⁵A. As we've discussed, no physical system is truly
linear; it can be modeled as a linear system only if the maximum input amplitude is
less than a certain value. Let that value be one for our example. We then would
require A/ε ≤ 1. The largest possible value of A we then could choose is A = 10⁻⁵.
With such a choice of values for x(t), the system response would be y(t) ≈ 10⁻⁵h(t).
Note that the maximum amplitude of y(t) would be rather small. The values I’ve
used are really not unreasonable at all. Now, in practice, noise (which is just some
random fluctuation) is ever-present. Its presence limits what can be done in practice.
(For example, why will a pencil not stand on its point for very long?) It is noise that
prevents the practical implementation of many seemingly reasonable ideas. We’ll not
discuss noise in this text except to note some of the limitations it imposes. For our
example, the presence of noise would make it difficult to observe y(t) because of its
small amplitude. Consequently, except in special cases, the determination of h(t) by
applying a narrow pulse to the system is not a practical method. However, a practical
method suggested by Eq. (3.2-8) is to apply a unit step to the LTI system to obtain
s ( t ) experimentally. Then h(t) can be determined by differentiating s(t), which is
accomplished experimentally by plotting the slope of s(t) versus t. There are other

Fig. 3.2-3

methods of determining the unit-impulse response of an LTI system, and we shall


discuss these later in this text.
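As a numerical illustration of the measurement idea in Eq. (3.2-8), the Python sketch below computes a step response, takes its slope, and compares the result with the impulse response. The "true" system used, h(t) = e^{-t}u(t), is an assumption made only for the illustration and is not a system from the text.

import numpy as np

dt = 0.001
t = np.arange(0.0, 10.0, dt)

h_true = np.exp(-t)                       # assumed unit-impulse response
u = np.ones_like(t)                       # unit step applied at t = 0

s = np.convolve(u, h_true)[:len(t)] * dt  # "measured" step response s(t)
h_est = np.gradient(s, dt)                # h(t) = ds/dt, Eq. (3.2-8)

print(np.max(np.abs(h_est[1:-1] - h_true[1:-1])))  # small discretization error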
As another illustration of the use of the LTI relations summarized in Figs. 3.2-2,
consider the system shown in Fig. 3.2-3. We shall determine the unit-impulse
response, h(t), the unit-step response, s(t), and also the system response to the
rectangular pulse, r(t/t0). A good procedure to follow in analyzing a given system
is to write down the equation for the waveform at the output of each operator. These
waveforms are labeled on the diagram for our example. From the diagram, the
equations for these waveforms are

z1(t) = x(t − t0) − x(t − 2t0)      (3.2-9a)

y1(t) = ∫_{−∞}^{t} z1(σ) dσ      (3.2-9b)

z2(t) = x(t − 3t0) − x(t − 2t0)      (3.2-9c)

y2(t) = ∫_{−∞}^{t} z2(σ) dσ      (3.2-9d)

and

y(t) = y1(t) + y2(t)      (3.2-9e)

These equations could be combined into one large equation for the output, y(t), in
terms of the input, x(t). Even though such an equation might appear impressive, it is
more difficult to work with than the five simple component relations in Eq. (3.2-9).
We thus will work with the component equations directly.
First, to determine h(t), we make use of the fact that y(t) = h(t) when x(t) = δ(t).
Then, from Eq. (3.2-9a) we have

z1(t) = δ(t − t0) − δ(t − 2t0)      (3.2-10a)

so that, from Eq. (3.2-9b) we obtain

y1(t) = ∫_{−∞}^{t} z1(σ) dσ = ∫_{−∞}^{t} [δ(σ − t0) − δ(σ − 2t0)] dσ      (3.2-10b)

Fig. 3.2-4

The value of y1(t) is the area under z1(σ) in the interval −∞ < σ ≤ t. From the
graph of z1(σ) in Fig. 3.2-4, we observe that the area is zero if t < t0. For
t0 < t < 2t0 we observe that the area is equal to that of the impulse, which is
one. For t > 2t0, the value of the integral is equal to the sum of the areas of the
two impulses, which is 1 − 1 = 0. We thus have

y1(t) = { 0  for t < t0
          1  for t0 < t < 2t0
          0  for t > 2t0      (3.2-11a)

Using the notation established in Section 1.5, this result can be expressed more
compactly as

y1(t) = r((t − t0)/t0)      (3.2-11b)

We now determine y2(t). For this determination, we have from Eq. (3.2-9c) that

z2(t) = δ(t − 3t0) − δ(t − 2t0)      (3.2-12a)

so that, from Eq. (3.2-9d),

y2(t) = ∫_{−∞}^{t} z2(σ) dσ = ∫_{−∞}^{t} [δ(σ − 3t0) − δ(σ − 2t0)] dσ      (3.2-12b)

The value of y2(t) is the area under z2(σ) in the interval −∞ < σ ≤ t. From the
graph of z2(σ) in Fig. 3.2-5, we observe that the area is zero if t < 2t0. For
2t0 < t < 3t0, we observe that the area is equal to that of the impulse, which is

Fig. 3.2-5

minus one. For t > 3t0, the value of the integral is equal to the sum of the areas of the
two impulses, which is −1 + 1 = 0. Thus we have

y2(t) = {  0  for t < 2t0
          −1  for 2t0 < t < 3t0
           0  for t > 3t0      (3.2-13a)

Using the notation established in Section 1.5, this result can be expressed more
compactly as

y2(t) = −r((t − 2t0)/t0)      (3.2-13b)

Finally, y(t) = h(t) because x(t) = δ(t), so that from Eq. (3.2-9e) we have

h(t) = y1(t) + y2(t) = r((t − t0)/t0) − r((t − 2t0)/t0)      (3.2-14)

Figure 3.2-6 is a graph of h(t).


To determine the unit-step response, s(t), for our example, we could let
x(t) = u(t) in Eqs. (3.2-9) and solve for the corresponding response y(t) = s(t).
However, a better procedure is to use the derived result given by Eq. (3.2-7).
From that result, we obtain

s(t) = {  0                         for t ≤ t0
          t − t0                    for t0 ≤ t ≤ 2t0
          t0 − (t − 2t0) = 3t0 − t  for 2t0 ≤ t ≤ 3t0
          0                         for t ≥ 3t0      (3.2-15)

Figure 3.2-7 is a graph of s(t).


We now determine the system response to the input x(t) = r(t/t0). We could solve
Eqs. (3.2-9) directly for the corresponding response. However, a better procedure is

Fig. 3.2-6

Fig. 3.2-7

to use the basic property of linear systems, which is generalized superposition. For
this, observe that x(t) can be expressed as the linear combination of two functions as

x(t) = r(t/t0) = u(t) − u(t − t0)      (3.2-16)

From generalized superposition, the system response, y(t), to the given input can be
expressed as the system response to u(t) minus the system response to u(t − t0). The
system response to u(t) is s(t) given above. Using the fact that the given system is
time-invariant, the system response to u(t − t0) is s(t − t0). Thus the system response
to the given input is

y(t) = s(t) − s(t − t0)      (3.2-17)

With the use of the graph of s(t) given in Fig. 3.2-7, we obtain the graph of y(t)
shown in Fig. 3.2-8 with the equation given in Eq. (3.2-18).

y(t) = {  0         for t ≤ t0
          t − t0    for t0 ≤ t ≤ 2t0
          5t0 − 2t  for 2t0 ≤ t ≤ 3t0
          t − 4t0   for 3t0 ≤ t ≤ 4t0
          0         for t ≥ 4t0      (3.2-18)

Note how much insight was obtained concerning the system response to various
inputs and how much effort was saved by making use of the basic properties of LTI
systems we have discussed.

Fig. 3.2-8
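The piecewise results of this example are easy to check numerically. The short Python sketch below encodes the step response of Eq. (3.2-15), forms y(t) = s(t) − s(t − t0) as in Eq. (3.2-17), and compares it with Eq. (3.2-18). The value t0 = 1 is an arbitrary choice made only for the computation.

import numpy as np

t0 = 1.0
dt = 0.001
t = np.arange(0.0, 6.0, dt)

def s(t):
    # Step response of the example system, Eq. (3.2-15)
    return np.where(t < t0, 0.0,
           np.where(t < 2 * t0, t - t0,
           np.where(t < 3 * t0, 3 * t0 - t, 0.0)))

y = s(t) - s(t - t0)            # Eq. (3.2-17)

# Eq. (3.2-18), written out piecewise, for comparison
y_expected = np.where(t < t0, 0.0,
             np.where(t < 2 * t0, t - t0,
             np.where(t < 3 * t0, 5 * t0 - 2 * t,
             np.where(t < 4 * t0, t - 4 * t0, 0.0))))

print(np.max(np.abs(y - y_expected)))   # ~0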

3.3 THE UNIT IMPULSE REVISITED

Our development of the unit impulse in Sections 2.3 and 2.4 has been in terms of a
rectangular pulse with a width equal to ε and an area equal to one, which we called
δε(t). Using the results obtained in the last section, we can show that δε(t) really
could be defined as any nonnegative pulse with an area equal to one which is
symmetric about t = 0 and with a width that goes to zero as ε goes to zero. We
used the rectangle because its use simplified our discussions in the previous sections.
Other forms of the pulse shape are useful (e.g., Problem 2-16) and so we shall
develop this generalization in this section.
To begin, the fundamental defining property of the unit impulse is Eq. (2.3-4):

h(t) = lim_{ε→0} hε(t)      (3.3-1)

where

hε(t) = h(t) * δε(t)      (3.3-2)

In accordance with Eq. (2.4-1), note that this must be true for any waveform, not just
for h(t). Now define uε(t) to be the integral of δε(t),

uε(t) = ∫_{−∞}^{t} δε(σ) dσ      (3.3-3)

and define sε(t) to be the integral of hε(t),

sε(t) = ∫_{−∞}^{t} hε(σ) dσ      (3.3-4)

Then, in accordance with the result obtained in the last section, we obtain

uε(t) → sε(t)      (3.3-5)
Now, instead of just a rectangle, let δε(t) be defined as any nonnegative pulse with an
area equal to one which is symmetric about t = 0 and a width that goes to zero as ε
goes to zero. Then

u(t) = lim_{ε→0} uε(t)      (3.3-6)

Fig. 3.3-1 Graph of Eq. 3.3-7.

For example, if δε(t) were the rectangular pulse as defined in Sections 2.3 and 2.4,
then

uε(t) = ∫_{−∞}^{t} δε(σ) dσ = {  0              for t < −ε/2
                                (ε + 2t)/(2ε)  for −ε/2 < t < ε/2
                                1              for t > ε/2      (3.3-7)

A graph of uε(t) as given by Eq. (3.3-7) is shown in Fig. 3.3-1. It is clear from Fig.
3.3-1 that Eq. (3.3-6) is satisfied for this example.
Note that, in general, the total rise of uε(t) is equal to the area of δε(t). If this area
is one, then the total rise is one as shown for our example. Also, if δε(t) is a positive
pulse, then uε(t) increases monotonically from a value of zero to a value of one.
Also, uε(0) = 1/2 because the pulse is symmetric about t = 0. Furthermore, the rise
time of uε(t) is equal to the width of δε(t). Because this width goes to zero as ε goes
to zero, we have that the rise time of uε(t) goes to zero as ε goes to zero. We thus note
that Eq. (3.3-6) is satisfied if δε(t) is any nonnegative pulse with an area equal to one
which is symmetric about t = 0 and with a width that goes to zero as ε goes to zero.
Now, from Section 3.2, the LTI system response to u(t) is s(t). As in Section 2.3,
we assume the LTI system is continuous.¹ Then, from Eqs. (3.3-5) and (3.3-6) we
have

s(t) = lim_{ε→0} sε(t)      (3.3-8)

where sε(t) is given by Eq. (3.3-4) and s(t), from Eq. (3.2-7), is

s(t) = ∫_{−∞}^{t} h(σ) dσ      (3.3-9)

By differentiation, we then have from Eqs. (3.3-8), (3.3-9), and (3.3-4) that

h(t) = lim_{ε→0} hε(t)      (3.3-10)

¹ System continuity is discussed in Section 3.7.



But this is Eq. (3.3-1), which is the fundamental defining property of the unit
impulse. We thus observe that δε(t) can be any symmetric nonnegative pulse with
an area equal to one and a width that goes to zero as ε goes to zero.
An important example of such a pulse is the normal pulse, which is defined as

(3.3-11)

It is a bell-shaped pulse with an area equal to one whose width goes to zero as ε goes
to zero. Thus it satisfies all the requirements discussed above so that it can be used in
place of the rectangular pulse for δε(t).
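As a numerical sketch of this generalization, the Python code below convolves a smooth waveform with a unit-area Gaussian pulse of decreasing width. The particular normalization used, exp(−t²/(2ε²))/(ε√(2π)), is a common choice and is an assumption here, since the exact parameterization of Eq. (3.3-11) is not reproduced above; any unit-area form whose width shrinks with ε behaves the same way.

import numpy as np

# The normal (Gaussian) pulse acting as delta_eps(t): convolving with it
# reproduces the waveform ever more closely as eps shrinks.
t = np.linspace(-5.0, 5.0, 10001)
dt = t[1] - t[0]
x = np.exp(-t**2) * np.cos(2 * t)            # an arbitrary smooth test waveform

for eps in (0.3, 0.1, 0.03):
    d_eps = np.exp(-t**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
    x_eps = np.convolve(x, d_eps, mode="same") * dt   # x(t) * delta_eps(t)
    print(eps, np.max(np.abs(x_eps - x)))    # the error shrinks as eps -> 0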
One other useful result that follows from our discussion above is obtained from
Eq. (3.3-6). Although the derivative of u(t) does not formally exist, we can define it
by differentiation from Eq. (3.3-6) and Eq. (3.3-3):

u'(t) = lim_{ε→0} δε(t)      (3.3-12)

But the limit on the right-hand side of this equation is the unit impulse. Thus we can
define

u'(t) = δ(t)      (3.3-13)

With this result, we can define the derivative of a discontinuous function. For
example, consider the function shown in Fig. 3.3-2. The function f(t) shown is
discontinuous at t = t0. Because it makes a jump of (B − A) there, we could express
it as the sum of a continuous function, g(t), and a step function as

f(t) = g(t) + (B − A)u(t − t0)      (3.3-14)

where g(t) is f(t) with the jump at t = t0 removed; the jump at t = t0 is expressed by
the step function. Now, differentiating Eq. (3.3-14) and using the result given by Eq.
(3.3-13), we have

f'(t) = g'(t) + (B − A)δ(t − t0)      (3.3-15)

Fig. 3.3-2 Example of a discontinuous function.

We thus observe that if a function f(t) is discontinuous at a point t = t0, then the
derivative of f(t), f'(t), can be defined to contain an impulse at that point with an
area equal to (B − A) = [f(t0+) − f(t0−)].
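This impulse in the derivative can be seen numerically by smoothing the jump with the function uε(t) of Eq. (3.3-7), shifted to t0, and then differentiating. In the Python sketch below, the particular g(t), A, B, and t0 are arbitrary illustrative choices.

import numpy as np

dt = 1e-4
t = np.arange(-2.0, 2.0, dt)
A, B, t0, eps = 1.0, 3.0, 0.5, 0.01

g = np.sin(t)                                          # continuous part g(t)
u_eps = np.clip((t - t0 + eps / 2) / eps, 0.0, 1.0)    # smoothed step at t0
f = g + (B - A) * u_eps                                # f(t) with the jump smoothed

f_prime = np.gradient(f, dt)
pulse = f_prime - np.cos(t)        # remove g'(t) = cos(t); a narrow pulse remains
print(np.sum(pulse) * dt)          # ~ (B - A) = 2, the jump height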

3.4 CONVOLUTION REVISITED

The results obtained in the last two sections can be used to simplify many calcula-
tions. Also, the simplifications obtained often can be used to gain a better under-
standing of the equations involved. Some of these simplifications will be illustrated
in this section.
Let us consider the following convolution:

y'(t) = h(t) * x'(t)      (3.4-1)

Then, using the results of Section 3.2, we have

y(t) = h(t) * x(t)      (3.4-2)

where

y(t) = ∫_{−∞}^{t} y'(σ) dσ      (3.4-3a)

and

x(t) = ∫_{−∞}^{t} x'(σ) dσ      (3.4-3b)

We thus note that, instead of performing the convolution of h(t) and x(t) to obtain
y(t), we could convolve h(t) and x'(t) to obtain y'(t) and then integrate y'(t) in
accordance with Eq. (3.4-3a) to obtain y(t). In some cases, this differentiation
procedure leads to a great simplification of the calculation.
As an illustration, consider the system with the input x(t) and unit-impulse
response h(t) shown in Fig. 3.4-1. The output, y(t), could be calculated directly
with the use of Eq. (3.4-2). For this example, however, it is simpler to determine

Fig. 3.4-1 The functions x(t) and h(t) for the example.



Fig. 3.4-2 Graphical representation of x'(t).

y(t) by the differentiation procedure described above. For this we differentiate x(t)
using the result given by Eq. (3.3-13).
Observe that x(t) can be expressed in the form

x(t) = Au(t) − Au(t − 2)      (3.4-4)

Thus, using the result given by Eq. (3.3-13), we have

x'(t) = Aδ(t) − Aδ(t − 2)      (3.4-5)

which is shown graphically in Fig. 3.4-2.


Note that the integration of x'(t) using Eq. (3.4-3b) results in the correct x(t). It is
good procedure to perform this check. Not only does this check your differentiation,
but also you could lose a constant in the differentiation because the derivative of a
constant is zero. The constant would be nonzero if x(t) contains a dc component. In
such a case, the constant would be equal to the value of the dc component of x(t).
We now use Eq. (3.4-1) to determine y'(t):

y'(t) = h(t) * x'(t)
      = h(t) * [Aδ(t) − Aδ(t − 2)]      (3.4-6)
      = Ah(t) * δ(t) − Ah(t) * δ(t − 2)

Now, from the result of Example 1 of Section 2.5 and using the commutative
property obtained in Section 3.1, we have

h(t − t0) = h(t) * δ(t − t0)      (3.4-7)

With this result, the evaluation of the convolutions in Eq. (3.4-6) is

y'(t) = Ah(t) − Ah(t − 2)      (3.4-8)

A graph of y’(t) shown in Fig. 3.4-3 is now easily obtained using the graph of h(t)
given in Fig. 3.4-1.
We now integrate y'(t) in accordance with Eq. (3.4-3a) to obtain y(t). This is
easily done graphically because y(t) is just the area under y'(σ) in the interval
−∞ < σ ≤ t. A graph of y(t) is shown in Fig. 3.4-4.

Fig. 3.4-3 Graph of y’(t).

To realize how much effort was saved in this example, it would be worthwhile for
you to determine y(t) by actually convolving h(t) with x(t). The advantage of this
technique is that convolution was reduced to the convolution with impulses, which is
particularly simple. When differentiation does provide such a reduction, the proce-
dure just illustrated is worth considering. Also note that we convolved x'(t) and h(t)
in our example. Notice, however, that we would have obtained the same result if we
had convolved h'(t) and x(t) instead. Thus the function to choose to differentiate is,
of course, the one that results in the simplest calculation.
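The saving can also be seen numerically. In the Python sketch below, the input is x(t) = A[u(t) − u(t − 2)] as in Eq. (3.4-4), while the unit-impulse response h(t) = e^{-t}u(t) is an assumed illustrative choice (the h(t) of this example is given only graphically in Fig. 3.4-1). The direct convolution is compared with the differentiate-convolve-integrate route of Eqs. (3.4-1) and (3.4-3a).

import numpy as np

A, dt = 2.0, 0.001
t = np.arange(0.0, 10.0, dt)

h = np.exp(-t)                          # assumed h(t) = e^{-t} u(t)
x = np.where(t < 2.0, A, 0.0)           # x(t) = A[u(t) - u(t - 2)]

# Direct convolution: y = x * h
y_direct = np.convolve(x, h)[:len(t)] * dt

# Differentiation route: x'(t) = A delta(t) - A delta(t - 2), Eq. (3.4-5),
# so y'(t) = A h(t) - A h(t - 2), Eq. (3.4-8); then integrate, Eq. (3.4-3a).
h_shift = np.interp(t - 2.0, t, h, left=0.0)   # h(t - 2), zero for t < 2
y_prime = A * h - A * h_shift
y_diff = np.cumsum(y_prime) * dt

print(np.max(np.abs(y_direct - y_diff)))       # small discretization error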
The differentiation technique also can be used in some cases to obtain a differ-
ential equation that relates the LTI system response, y(t), and the system input, x(t).
To illustrate this technique, consider an LTI system with the unit-impulse response

h(t) = Ae^{−bt}u(t)      (3.4-9)

In accordance with the results developed in this section, we have

y'(t) = x(t) * h'(t)      (3.4-10)

Now, using the result given by Eq. (3.3-15), we have

h'(t) = Aδ(t) − bAe^{−bt}u(t)      (3.4-11)

To obtain this result, note that h(t) is discontinuous at t = 0 with h(0−) = 0 and
h(0+) = A. Now, by substituting Eq. (3.4-9) into Eq. (3.4-11), we obtain

h'(t) = Aδ(t) − bh(t)      (3.4-12a)

Fig. 3.4-4 Graph of y(t).



or, by rearranging terms, we obtain

h'(t) + bh(t) = Aδ(t)      (3.4-12b)

We thus have obtained a differential equation which h(t) must satisfy. Now, by
substituting Eq. (3.4-12a) in Eq. (3.4-10), we obtain

y'(t) = Aδ(t) * x(t) − bh(t) * x(t)      (3.4-13)

Because x(t) = x(t) * δ(t) and y(t) = h(t) * x(t), we obtain from Eq. (3.4-13) the
differential equation

y'(t) = Ax(t) − by(t)      (3.4-14a)

or, by rearranging terms, the desired differential equation relating the output, y(t),
and input, x(t), is

y'(t) + by(t) = Ax(t)      (3.4-14b)

It should be noted that in the solution of Eq. (3.4-14b), the following condition must
be used: If x(t) = 0 for t < t0, then y(t) = 0 for t < t0. This condition follows from
the fact that h(t) = 0 for t < 0, as seen from Eq. (3.4-9).
If the system input, x(t), is the unit impulse, δ(t), then the system response, y(t), is
the unit-impulse response, h(t). If this is substituted in Eq. (3.4-14b), we obtain Eq.
(3.4-12b). The unit-impulse response, h(t), is called the fundamental solution of Eq.
(3.4-14b) because it is the solution of the differential equation when the input is a
unit impulse. Note that h(t) is not the homogeneous solution because y(t) is defined
to be the homogeneous solution when the input, x(t), is zero for all time.
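The equivalence of the two descriptions can be checked numerically: convolution with h(t) = Ae^{-bt}u(t) of Eq. (3.4-9) should agree with a numerical solution of the differential equation y'(t) + by(t) = Ax(t) of Eq. (3.4-14b). The Python sketch below uses a simple forward-Euler step and an arbitrary test input that is zero for t < 1.

import numpy as np

A, b, dt = 3.0, 2.0, 1e-4
t = np.arange(0.0, 5.0, dt)

h = A * np.exp(-b * t)                    # Eq. (3.4-9), sampled for t >= 0
x = np.sin(2 * np.pi * t) * (t > 1.0)     # test input, zero for t < 1

y_conv = np.convolve(x, h)[:len(t)] * dt  # y = x * h

y_ode = np.zeros_like(t)                  # Euler solution of y' = A x - b y
for k in range(len(t) - 1):               # y(t) = 0 for t < 1 since x(t) = 0 there
    y_ode[k + 1] = y_ode[k] + dt * (A * x[k] - b * y_ode[k])

print(np.max(np.abs(y_conv - y_ode)))     # agreement up to discretization error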

3.5 CAUSALITY

Causality is the concept that there is a relation between a cause and an effect in which
the cause precedes the effect. It is presently believed that all physical systems are
causal. This seemingly simple concept is not always obvious. For example, a
problem with which the ancient Greeks grappled was the following: Achilles ran
a foot race and received a prize for winning the race. Now, it was argued, it was the
future event of receiving the prize which caused Achilles to run and win the race,
and so this is an example of noncausality. Those who argued that this is an example
of noncausality missed the crucial point that Achilles only ran and won the race
because he was told before the race that there would be a prize waiting for the winner
at the finish line and Achilles believed it. It is possible that a prize was not really to
be awarded and that Achilles was misled. So the real cause of Achilles running and
winning the race was not the prize but rather the belief he held before running the
race that a prize would be awarded the winner of the race. This example, although

simple, is sufficient to show that the causal relation can be subtle. The subtleties of
causality have been explored by many philosophers. A good classic discussion of
this topic is contained in A Treatise of Human Nature by David Hume.²
For systems in which the system input and output are functions of time, a system
is defined to be causal if, at any time, the output does not depend on future values of
the input. There is nothing in physics that requires all physical systems to be causal.
That is, there is no concept in basic physics that requires the relation between the
input and output of every physical system to be causal. Rather, it suits our social
philosophy to believe that all physical systems are causal, and this belief is rein-
forced by the fact that every system observed to date can be modeled as a causal
system. If a noncausal system could be constructed, then the output at any time
would contain information about the future of the input so that the system could be
used to predict at least some of the future of its input. Thus, if the input were your
speech waveform, then the system could be used to predict some of what you will
say in the future. But that would imply you do not have the free will to say whatever
you want and whenever you want in the future! Thus, the belief in noncausal
physical systems leads to fundamental questions about the existence of free will.
Without free will, such desirable social concepts as ethics and morality become
questionable. We thus believe in free will, and this belief leads us to assume that
all physical systems are causal. As I stated above, every system observed to date can
be modeled as a causal system so that a counterexample to our belief does not seem
to exist. This reinforces our belief that all physical systems are causal. Although we
cannot prove that all physical systems are causal, we shall prove in our discussion of
passive systems in Section 8.4 that all passive linear systems must be causal.
Causality is important in system theory for two main reasons. The first stems
from the desire to know the constraints causality imposes on a given system so that
one can know if a given theoretical model can be the model of a physical system.
The second reason stems from a desire to know the theoretical best that can be done
in certain situations by a noncausal system. There are limitations of the best that can
be done in many problems in communication theory such as filtering noise from a
signal and in control theory such as controlling a given system by designing a
system to be placed in feedback. One of the sources of the limitation is due to
the requirement that all the designed systems be causal. It is then of interest to
know whether the performance could be improved significantly if the causality
constraint were removed and, if so, in what manner was the future of the input
used to obtain the improvement. With the understanding that derives from such
knowledge, new strategies can sometimes be devised to improve performance
with the use of causal systems.
We have seen that the unit-impulse response, h(t), completely characterizes the
input-output mapping of an LTI system. Thus whatever constraints causality
imposes on an LTI system must be reflected in a constraint on its unit-impulse
response. For this determination we require a more formal definition of causality.
We defined a system to be causal if, at any time, the output does not depend on

² David Hume was a Scottish philosopher who lived from 1711 to 1776.

future values of the input. Let t0 be some arbitrary time. Our definition of causality
then can be translated into the statement that a system is causal if, for any value of t0,
y(t0) does not depend on x(t) for t > t0.
To determine the constraint causality imposes on h(t), we begin with the expres-
sion for y(t0) obtained from the convolution integral:

y(t0) = ∫_{−∞}^{∞} x(σ)h(t0 − σ) dσ      (3.5-1a)

      = ∫_{−∞}^{t0} x(σ)h(t0 − σ) dσ + ∫_{t0}^{∞} x(σ)h(t0 − σ) dσ      (3.5-1b)

The integral over all σ in Eq. (3.5-1a) has been expressed in Eq. (3.5-1b) as an
integral for σ < t0 plus an integral for σ > t0. The reason for expressing the integral
in this manner is that x(σ) for σ > t0 are future values of the input as discussed
above, while x(σ) for σ < t0 are past and present values of the input. Thus the first
integral in Eq. (3.5-1b) involves only past and present values of the input, while the
second integral involves only future values of the input. If the output is not to depend
on future values of the input, then the value of the second integral must be zero for
any input. We thus note that the LTI system is causal if and only if

I = ∫_{t0}^{∞} x(σ)h(t0 − σ) dσ = 0      (3.5-2)

for any input, x(t). This can be satisfied if and only if h(t0 − σ) = 0 for σ > t0. First,
the restriction that σ > t0 is because the integration is only over that range of σ.
Now, it is clear that if h(t0 − σ) = 0 for σ > t0, then I, the value of the integral in
Eq. (3.5-2), is zero for any input, x(t). To see that I is zero only if h(t0 − σ) = 0 for
σ > t0, assume that this condition is not satisfied. Then, because the integral must be
zero for any input, we choose the input to be

x(σ) = sgn[h(t0 − σ)] = {  1  if h(t0 − σ) > 0
                          −1  if h(t0 − σ) < 0      (3.5-3)

where sgn[·] is the signum function defined by Eq. (1.4-29). With this choice of the
input, we note that x(σ)h(t0 − σ) = |h(t0 − σ)| so that

I = ∫_{t0}^{∞} |h(t0 − σ)| dσ      (3.5-4)

The value of an integral is just the area under the function being integrated. Because
the function being integrated in Eq. (3.5-4) is never negative, the value of the
integral, I , is zero only if the function being integrated is zero. We thus have

shown that I = 0 for any input, x(t), if and only if h(t0 − σ) = 0 for σ > t0. Note that
this condition is that the unit-impulse response be zero if its argument is negative
because t0 − σ < 0 for σ > t0. Thus we have shown the following:

An LTI system is causal if and only if h(t) = 0 for t < 0.

Remember that h(t) is the LTI system response to the unit impulse, δ(t), so that t = 0
for h(t) is the instant the unit impulse is applied. In a sense, the causality condition
for an LTI system is that the system cannot scream before it is kicked. It is clear that
this condition is necessary even for systems that are not LTI because if the system
started to scream, say one second before it is kicked, then it is predicting that it will
be kicked one second later. This, as I discussed above, implies that one does not have
the free will to kick it whenever desired. We have shown that this condition also is
sufficient to ensure that an LTI system is causal. If a system is not LTI, then this
condition is not sufficient. To see this, consider the nonlinear noncausal system
shown in Fig. 3.5-1. As seen from the diagram, the system output is

y(t) = x(t)x(t + T)      (3.5-5)

so that the system is noncausal if T > 0. However, we observe from Eq. (3.5-5) that
y(t) = 0 for t < t0 if x(t) = 0 for t < t0, so that the noncausal system depicted will
not scream before it is kicked. Thus we note that the condition that a system not
scream before being kicked is a necessary condition for any system (linear or
nonlinear, time-invariant or time-varying) to be causal. However, if the system is
LTI, then it is both a necessary and sufficient condition for the LTI system to be
causal. Another useful expression of this condition is as follows: An LTI system is
causal if and only if, for any value of t0 and any input that is zero for t < t0 (that is,
x(t) = 0 for t < t0), we obtain an output, y(t), for which y(t) = 0 for t < t0. Note
again that this statement holds only for LTI systems.
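For a sampled impulse response, the causality condition h(t) = 0 for t < 0 is trivial to test. The Python sketch below applies it to two illustrative impulse responses; neither is taken from the text.

import numpy as np

def is_causal(h, t, tol=1e-12):
    # True if the sampled impulse response is (numerically) zero for t < 0
    return bool(np.all(np.abs(h[t < 0]) <= tol))

t = np.arange(-5.0, 5.0, 0.001)
h_causal = np.where(t >= 0, np.exp(-t), 0.0)   # e^{-t} u(t): causal
h_noncausal = np.exp(-np.abs(t))               # two-sided exponential: noncausal

print(is_causal(h_causal, t), is_causal(h_noncausal, t))   # True False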
In terms of our discussion above, we can give a more physical interpretation of
the unit-impulse response. Consider first the system described in Example 3 of
Section 2.5. The LTI system unit-impulse response given by Eq. (2.5-9) is
h(t) = Kr(t/T), and the system response given by Eq. (2.5-12) is

y(t) = K ∫_{t−T}^{t} x(σ) dσ

The output at any time, t, is K times the integral of the input over the last T seconds.
If K = 1/T, then the output at a time t is the average of the input over the last T
seconds. For our example, note that values of the input more than T seconds in the

Fig. 3.5-1 Example of a noncausal system.



Fig. 3.5-2 Graph of h(t) given by Eq. 3.5-7.

past have no effect on the output. In a sense, we can say that the system only
remembers the last T seconds of the input.
In general, for a causal LTI system, the unit-impulse response is simply a graph of
the weighting over the past of the input used to produce the output. To see this more
clearly, consider the causal LTI system with the unit-impulse response

h(t) = e^{−αt}u(t),  α > 0      (3.5-7)

and shown in Fig. 3.5-2. This is the unit-impulse response of the LTI system of
Example 4 in Section 2.5. The output at a time, t0, is given by

y(t0) = ∫_{−∞}^{t0} e^{−α(t0 − σ)} x(σ) dσ      (3.5-8)

A graph of h(t0 − σ) versus σ is shown in Fig. 3.5-3. From Eq. (3.5-8), this is seen to
be the weighting of the past of the input used to produce the output at the time t0. We
see from Fig. 3.5-3 that, as an input value recedes into the past, its influence on the
output decreases exponentially. In a sense, we can say that the system has a memory
that decays exponentially in time. In this sense, we can interpret the unit-impulse
response of an LTI system as a graph of the system memory. With this view, an LTI
system with the unit-impulse response h(t) = u(t) has infinite memory.

Fig. 3.5-3 Graph of h(t0 − σ).



3.6 STABILITY

There is no one definition of system stability. The reason is that the stability of a
system is considered relative to a particular concern about the system. Thus the
definition of stability used is one that is meaningful relative to the particular concern.
Thus different concerns dictate different definitions. For example, consider a ball
that is at rest at the bottom inside a bowl. If the ball is hit with a small force, the ball
will just roll up the side of the bowl a bit, roll about the bowl, and finally settle back
at the bottom of the bowl. On the other hand, if the ball is hit hard, the ball will roll
up the side and over the edge of the bowl, never to return. For this situation, a
meaningful definition of stability is a local one in which the system is considered to
be stable if the ball will eventually return to the bottom of the bowl if it is initially
perturbed less than a certain amount. This system is then said to be locally asymp-
totically stable because it eventually will return to its initial quiescent state. With this
definition, the system would be unstable if the bowl were turned upside down and
the ball placed on top of the bowl. The mathematical theory for the study of
asymptotic stability is called stability in the sense of Lyapunov (often abbreviated
as stability i.s.L.).³ It is used in discussing many autonomous systems such as the
stability of an electron in orbit about an atomic nucleus and also the stability of a
planet in orbit about the sun. The theory for i.s.L. stability is developed using the
state-space description of a system discussed in Section 10.6. However, stability i.s.L.
is not very useful in the discussion of nonautonomous systems, which is our major
concern in this text. For a definition of stability to be useful, it must not only be
meaningful relative to our concerns, but it also must be useful. This requires that the
definition leads to analytical techniques that can be used without undue effort. Of the
various possible definitions for nonautonomous systems, the definition of stability
that we shall use is the BIBO stability criterion:

A system is defined to be stable in accordance with the BIBO stability criterion if the
system response to any bounded input waveform is a bounded output waveform (hence
the abbreviation BIBO).

First, to understand this criterion, we need to understand exactly what is meant by
a bounded waveform. A waveform, f(t), is said to be bounded if there is some
positive number, M, such that |f(t)| ≤ M. That is, the magnitude of f(t) is less
than or, at most, equal to M for any value of t. Let us consider some examples:

1. The waveform f1(t) = A cos(ωt + φ) is a bounded waveform because
−|A| ≤ f1(t) ≤ |A| or, equivalently, |f1(t)| ≤ |A|.
2. For a ≠ 0, the waveform f2(t) = e^{at} is not a bounded waveform because
f2(t) → ∞ as t → ∞ if a > 0 and f2(t) → ∞ as t → −∞ if a < 0.

³ Named in honor of the Russian mathematician A. M. Lyapunov, who first published his studies in 1892.
Since then, this definition of stability has been extensively studied. Many texts and articles in the
engineering and mathematical journals have been published on Lyapunov stability.

3. For a < 0, the waveform f3(t) = e^{at}u(t) is a bounded waveform because
|f3(t)| ≤ 1.
4. The waveform f4(t) = δ(t) is not a bounded waveform because there is no
positive number M such that |δ(t)| ≤ M. Note that |δε(t)| ≤ 1/ε and that, for
any chosen value of M, ε can be chosen to be less than 1/M so that
|δε(t)| > M for |t| < ε/2.

Observe that the BIBO stability criterion only requires that the system response to
any bounded input waveform be a bounded output waveform. The system is not
BIBO-stable if only one bounded input waveform results in an output that is
unbounded at just one instant of time. Thus a system is or is not BIBO-stable; a
system cannot be conditionally BIBO-stable. Also note that there is no specification
of the system response to an unbounded input waveform. There is only a specifica-
tion of the system response to a bounded input waveform. In a sense, a system is
BIBO-stable if it is not explosive. That is, an unbounded response is not obtained for
a bounded input as occurs in an explosion.
Because the unit-impulse response, h(t), of an LTI system completely determines
its input-output mapping, we expect that the required conditions for the BIBO-
stability of an LTI system can be specified in terms of required conditions on
h(t). This indeed is the case, and we shall show that an LTI system is BIBO-stable
if and only if

∫_{−∞}^{∞} |h(t)| dt < ∞      (3.6-1)

For this proof, we need the basic inequality that the magnitude of the area under a
curve is not greater than the area under its magnitude; that is,

|∫_{−∞}^{∞} f(t) dt| ≤ ∫_{−∞}^{∞} |f(t)| dt      (3.6-2)

We also need the basic identity shown in Appendix A, Eq. (A-26), that the magni-
tude of a product of two quantities is equal to the product of their magnitudes:

|z1z2| = |z1||z2|      (3.6-3)

For our proof, we begin with the basic input-output relation for an LTI system:

y(t) = ∫_{−∞}^{∞} h(z)x(t − z) dz      (3.6-4)

We first show that if Eq. (3.6-1) is satisfied, then the response to any bounded
input is a bounded output. If the input is bounded so that |x(t)| ≤ M, then from
Eqs. (3.6-2), (3.6-3), and (3.6-4) we obtain

|y(t)| ≤ ∫_{−∞}^{∞} |h(z)||x(t − z)| dz      (3.6-5)

But |x(t − z)| ≤ M so that

|y(t)| ≤ M ∫_{−∞}^{∞} |h(z)| dz      (3.6-6)

Thus, if Eq. (3.6-1) is satisfied, then y(t) is a bounded waveform so that the response
to every bounded input waveform is a bounded waveform.
We now must show that the LTI system is BIBO-stable only if Eq. (3.6-1) is
satisfied. That is, we must show that if Eq. (3.6-1) is not satisfied, then the LTI
system is not BIBO-stable. To show this, we need to show that if Eq. (3.6-1) is not
satisfied, then there exists at least one input waveform for which the magnitude of
the output waveform becomes infinite at least at one instant of time so that it is
not a bounded waveform. For this, we assume that Eq. (3.6-1) is not satisfied
and we choose a bounded input for which y(t) becomes infinite at t = 0. First,
from Eq. (3.6-4), the output at t = 0 is

y(0) = ∫_{−∞}^{∞} h(z)x(−z) dz      (3.6-7)

We now choose the input to be

x(−t) = sgn[h(t)]

where sgn(·) is the signum function defined by Eq. (1.4-29) so that

x(−t) = {  1  if h(t) > 0
           0  if h(t) = 0
          −1  if h(t) < 0      (3.6-8)

The input chosen clearly is a bounded waveform because its magnitude never
exceeds one. With this choice of input, we have from Eq. (3.6-7)

y(0) = ∫_{−∞}^{∞} |h(z)| dz      (3.6-9)

so that the output at t = 0 would be infinite if Eq. (3.6-1) is not satisfied. We have
thus shown that an LTI system is BIBO-stable if and only if Eq. (3.6-1) is satisfied.
Let us apply Eq. (3.6-1) to determine the BIBO-stability of some LTI systems.
For the first example, consider an LTI system with the unit-impulse response

h1(t) = Aδ(t − t0)      (3.6-10)

For this system, we have |h1(t)| = |Aδ(t − t0)| = |A|δ(t − t0) because, from our
definition of the unit impulse, we have δ(t) ≥ 0. Thus, from Eq. (3.6-1) we have

∫_{−∞}^{∞} |h1(t)| dt = |A| ∫_{−∞}^{∞} δ(t − t0) dt = |A| < ∞      (3.6-11)

so that the LTI system is BIBO-stable. Note that the BIBO criterion does not require
h(t) to be a bounded waveform; it only requires that the area under its magnitude be
finite. The BIBO-stability of the LTI system should, of course, have been obvious
because, from Example 1 of Section 2.5, the response, y1(t), of the system is

y1(t) = Ax(t − t0)

so that

|y1(t)| = |A||x(t − t0)| ≤ |A|M

Thus the LTI system with the unit-impulse response h1(t) given by Eq. (3.6-10) is
BIBO-stable and causal if t0 ≥ 0; it is BIBO-stable and noncausal if t0 < 0.
As a second example, consider a causal LTI system with the unit-impulse
response

h2(t) = Ae^{−αt}cos(ω0t + φ)u(t)      (3.6-14)

where α is a real number. It would be difficult to apply the criterion, Eq. (3.6-1),
because the resulting integral is not easily evaluated. The criterion, however, does
not require the evaluation of the integral. Rather, we need only show that its value is
less than or equal to some positive number. For this, we note that because
|cos(θ)| ≤ 1, we have

|h2(t)| = |A||e^{−αt}||cos(ω0t + φ)||u(t)| ≤ |A|e^{−αt}u(t)      (3.6-15)

The last inequality is obtained by noting that the exponential and the unit step are
never negative so that each is equal to its magnitude. Thus

∫_{−∞}^{∞} |h2(t)| dt ≤ |A| ∫_{0}^{∞} e^{−αt} dt      (3.6-16)

This integral is finite only if α > 0, for which the value of the integral is |A|/α. We
thus note that this LTI system is BIBO-stable if α > 0.
The stability proof given above cannot be used for α < 0. The reason is that Eq.
(3.6-16) is just an upper bound to the value of Eq. (3.6-1). Thus, if the upper bound
in Eq. (3.6-16) is infinite, the value of Eq. (3.6-1) could be finite or infinite. To
examine the case for which α < 0, we need to obtain a lower bound of h2(t). For this,
first consider the case for which α = 0. For this case,

h2(t) = A cos(ω0t + φ)u(t)      (3.6-17)

so that

∫_{−∞}^{∞} |h2(t)| dt = |A| ∫_{0}^{∞} |cos(ω0t + φ)| dt = ∞      (3.6-18)

The value of the integral is infinite because the cosine is a periodic waveform so that
its magnitude is a periodic waveform that is never negative. Thus the value of the
integral is the area under the curve, |cos(ω0t + φ)|, over one period times the
number of periods of the waveform. The area under the curve over one period is
a finite positive number, but there are an infinite number of periods in the interval
0 < t < ∞ so that the value of the integral is infinite. The LTI system thus is not
BIBO-stable if α = 0.
The case for which α < 0 can now be resolved because for this case a lower
bound is easily obtained. Note that for α < 0

|h2(t)| = |A|e^{−αt}|cos(ω0t + φ)|u(t) ≥ |A||cos(ω0t + φ)|u(t)      (3.6-19)

because e^{−αt} ≥ 1 for t ≥ 0 for this case. Thus

∫_{−∞}^{∞} |h2(t)| dt ≥ |A| ∫_{0}^{∞} |cos(ω0t + φ)| dt = ∞      (3.6-20)

Thus we have shown that the LTI system with the unit-impulse response h2(t) given
by Eq. (3.6-14) is BIBO-stable only if α > 0. The frequency ω0 and the phase φ do
not affect the BIBO-stability of the system. For the special case in which ω0 = 0 and
φ = 0, we have that the causal LTI system with the unit-impulse response

h(t) = Ae^{−αt}u(t)      (3.6-21)

is BIBO-stable only if α > 0. It is not stable, for example, if α = 0. For the case
α = 0, the unit-impulse response is a step function so that the causal LTI system with
the unit-impulse response h(t) = Au(t) is not a BIBO-stable system. Note that this
system has, in terms of our discussion in Section 3.5, infinite memory. Observe that
a BIBO-stable system cannot have infinite memory because then the stability condi-
tion given by Eq. (3.6-1) would not be satisfied.
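The criterion of Eq. (3.6-1) is also easy to explore numerically for the impulse response of Eq. (3.6-14). In the Python sketch below, the integral is necessarily truncated at a finite time, which is a numerical compromise and not part of the theory; the point is that the truncated area settles to a finite value only when α > 0.

import numpy as np

# Area under |h2(t)| for h2(t) = A e^{-alpha t} cos(w0 t + phi) u(t),
# Eq. (3.6-14), truncated at t = 200 seconds.
A, w0, phi, dt = 1.0, 2 * np.pi, 0.3, 1e-3
t = np.arange(0.0, 200.0, dt)

for alpha in (0.5, 0.0, -0.1):
    h = A * np.exp(-alpha * t) * np.cos(w0 * t + phi)
    area = np.sum(np.abs(h)) * dt
    # Finite for alpha > 0; for alpha <= 0 the value keeps growing as the
    # truncation time is increased, consistent with Eqs. (3.6-18) and (3.6-20).
    print(alpha, area)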
As we have seen, the BIBO-stability of an LTI system can be determined by use
of Eq. (3.6-1) if the unit-impulse response, h(t), is known. If the unit-impulse
response is not known or if the system is not LTI, then one must resort to the
basic definition of BIBO-stability to determine whether the system is BIBO-
stable. In this latter case, one must show either that the system is BIBO-stable by
showing that every bounded input results in a bounded output or show that the
system is not BIBO-stable by showing that there is at least one bounded input for
which the output is unbounded. For example, consider the LTI system for which
y(t) = (d/dt)x(t). In accordance with the result obtained in Section 3.3, Eq. (3.3-13), the
response of this system to the bounded input x(t) = Au(t) is the unbounded output
y(t) = Aδ(t) so that the ideal differentiator is not a BIBO-stable system.

3.7 SYSTEM CONTINUITY

The derivation in Section 2.3 of the convolution integral, Eq. (2.3-4), and also our
discussion in Section 3.3 required that the LTI system be a continuous system. That
is, we had to require that if x₁(t) → y₁(t) and x₂(t) → y₂(t), then the difference
between the two outputs, [y₂(t) − y₁(t)], goes to zero as the difference between the
two inputs, [x₂(t) − x₁(t)], goes to zero. We shall show in this section that an LTI
system is a continuous system if and only if it is BIBO-stable.
We begin by defining the waveform differences:

\Delta_x(t) = x_2(t) - x_1(t)        (3.7-1)

and

\Delta_y(t) = y_2(t) - y_1(t)        (3.7-2)

Then, by the superposition property of linear systems, we have that Δy(t) is the
system response to the input Δx(t). That is,

\Delta_x(t) \to \Delta_y(t)        (3.7-3)

Because we are considering the case in which Δx(t) goes to zero, we let it be a
bounded waveform with |Δx(t)| ≤ M_x, in which M_x goes to zero as Δx(t) goes to zero.
Then, similar to Eq. (3.6-6), we have that

|\Delta_y(t)| \le M_x \int_{-\infty}^{\infty} |h(\tau)|\,d\tau        (3.7-4)

Now, if the LTI system is BIBO-stable, then

\int_{-\infty}^{\infty} |h(\tau)|\,d\tau < \infty        (3.7-5)

so that

|\Delta_y(t)| \le M_x \int_{-\infty}^{\infty} |h(\tau)|\,d\tau \to 0 \quad \text{as } M_x \to 0        (3.7-6)

We thus note that |Δy(t)| goes to zero as M_x goes to zero. That is, we have shown that
the difference in the outputs, Δy(t), goes to zero as the difference in the inputs, Δx(t),
goes to zero if the LTI system is BIBO-stable.
To show that this is true only if the LTI system is BIBO-stable, we must show that
if the LTI system is not BIBO-stable, then there exists some input x₂(t) such that the
difference Δx(t) goes to zero and yet the output difference, Δy(t), does not go to zero
at least at one instant of time. For this we choose the time instant to be t = 0 so that

\Delta_y(0) = \int_{-\infty}^{\infty} h(\tau)\,\Delta_x(-\tau)\,d\tau        (3.7-7)

Now choose the input x₂(t) to be

x_2(t) = x_1(t) + \epsilon\,g(t)        (3.7-8)

where ε > 0 and, similar to Eq. (3.6-8), we choose

g(-t) = \mathrm{sgn}[h(t)] = \begin{cases} 1 & \text{if } h(t) > 0 \\ -1 & \text{if } h(t) < 0 \end{cases}        (3.7-9)

Then, from Eq. (3.7-1) we obtain

\Delta_x(-t) = \epsilon\,g(-t) = \epsilon\,\mathrm{sgn}[h(t)]        (3.7-10)

Substituting into Eq. (3.7-7) we have, similar to Eq. (3.6-9),

\Delta_y(0) = \epsilon \int_{-\infty}^{\infty} |h(\tau)|\,d\tau        (3.7-11)

If the LTI system is not BIBO-stable, the integral in Eq. (3.7-11) diverges and so
Δy(0) is infinite no matter how small ε is, so that Δy(0) does not go to zero as Δx(t)
goes to zero. Thus we have shown that an LTI system is continuous if and only if it is
BIBO-stable.
As an illustration, consider the ideal integrator, which was discussed as Example
2 in Section 2.5. The unit-impulse response of this LTI system is h(t) = Ku(t); and
from Eq. (2.5-8), its response, y(t), to the input x ( t ) is

y(t) = K \int_{-\infty}^{t} x(\sigma)\,d\sigma        (3.7-12)

We showed near the end of Section 3.6 that this system is not BIBO-stable so that, in
accordance with our result in this section, it is not a continuous system. This can be
illustrated for this system by considering the input

x(t) = \begin{cases} A/T & \text{for } -T \le t \le 0 \\ 0 & \text{otherwise} \end{cases}, \quad T > 0        (3.7-13)

Note that x(t) → 0 as T → ∞. If the system were continuous, we would expect
y(t) → 0 as T → ∞. However, from Eq. (3.7-12), the output for t ≥ 0 is y(t) = KA
no matter what the value of T. Clearly then, this system is not continuous.
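This lack of continuity can also be seen numerically. The following sketch (illustrative only; K = 1 and A = 1 are assumed values) computes the ideal integrator's output at t = 0 for the pulse of Eq. (3.7-13) as T is increased:

import numpy as np

K, A = 1.0, 1.0      # assumed illustrative constants
dt = 1e-3

def y_at_zero(T):
    # Ideal integrator output y(0) = K * (area of the input), for the pulse of
    # height A/T occupying -T <= t <= 0; the area is found with the trapezoidal rule.
    t = np.arange(-T, 0.0 + dt, dt)
    x = np.full_like(t, A / T)
    return K * np.trapz(x, t)

# The input amplitude A/T shrinks toward zero as T grows, yet y(0) stays near K*A = 1,
# so the output does not go to zero with the input: the system is not continuous.
for T in (1.0, 10.0, 100.0, 1000.0):
    print(f"T = {T:7.1f}   max|x| = {A/T:.4f}   y(0) = {y_at_zero(T):.4f}")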
Our derivation of the output of an LTI system as the convolution of its input with
its unit-impulse response required that the system be continuous. Thus the use of the
convolution integral to relate the output and input of an LTI system that is not BIBO-
stable is not valid. However, there is a subterfuge that can be employed to validate
the use of the convolution integral in certain situations for unstable LTI systems. For
causal physical systems, the condition for stability, Eq. (3.7-5), is not met because
the system unit-impulse response, h(t), does not go to zero or does not go to zero fast
enough as t → ∞. For example, the LTI system with the unit-impulse response
h(t) = e^{at}u(t) is not stable for a ≥ 0. However, observe for this system that for any
finite value of t₀ we have

\int_{-\infty}^{t_0} |h(t)|\,dt = \int_{0}^{t_0} e^{at}\,dt = \frac{1}{a}\left[e^{a t_0} - 1\right] < \infty        (3.7-14)

Normally for causal physical systems, as in the example above, we would have that
for any finite value of t₀

\int_{-\infty}^{t_0} |h(t)|\,dt < \infty        (3.7-15)

Now, in accordance with the convolution integral, the output at the time t = t₀ of a
causal physical system due to an input, x(t), which is zero for t < 0, is

y(t_0) = \int_{-\infty}^{\infty} h(\tau)\,x(t_0 - \tau)\,d\tau = \int_{0}^{t_0} h(\tau)\,x(t_0 - \tau)\,d\tau        (3.7-16)

The lower limit of the integral is zero because h(t) = 0 for t < 0, and the upper limit
of the integral is t₀ because, for the class of inputs we are considering, x(t₀ − τ) = 0
for τ > t₀. Thus the system response at t = t₀ involves h(t) only for t ≤ t₀. Thus, if
we define

h_0(t) = \begin{cases} h(t) & \text{for } t \le t_0 \\ 0 & \text{for } t > t_0 \end{cases}        (3.7-17)

then we can express the system response at t = t₀ as

y(t_0) = \int_{-\infty}^{\infty} h_0(\tau)\,x(t_0 - \tau)\,d\tau        (3.7-18)

and the system with the unit-impulse response h₀(t) is BIBO-stable because

\int_{-\infty}^{\infty} |h_0(t)|\,dt = \int_{-\infty}^{t_0} |h(t)|\,dt = \int_{0}^{t_0} |h(t)|\,dt < \infty        (3.7-19)

In this manner the response at any finite time, t₀, of an unstable causal physical
system to any input that is zero for t < 0 can be considered to be the response of a
causal and stable LTI system, and so the convolution integral is valid for any finite
value of t.
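A small numerical sketch of this subterfuge (the unstable example h(t) = e^{at}u(t) with a = 0.5, the input x(t) = u(t), and the observation time t₀ = 4 are assumed only for illustration) shows that the full and the truncated impulse responses give the same output value at t = t₀:

import numpy as np

a, t0, dt = 0.5, 4.0, 1e-3           # assumed illustrative values
t = np.arange(0.0, 2 * t0, dt)

h = np.exp(a * t)                    # unstable impulse response h(t) = e^(a t) u(t)
x = np.ones_like(t)                  # input x(t) = u(t), zero for t < 0
h0 = np.where(t <= t0, h, 0.0)       # truncated (and hence BIBO-stable) response h0(t)

y_full = np.convolve(h, x)[:len(t)] * dt
y_trunc = np.convolve(h0, x)[:len(t)] * dt

k = int(t0 / dt)                     # index corresponding to t = t0
exact = (np.exp(a * t0) - 1.0) / a   # closed-form response at t0 for this example
print(y_full[k], y_trunc[k], exact)  # all three agree (up to the discretization error)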

3.8 THE POTENTIAL INTEGRAL

Our specific interest in this text is LTI systems. However, with a slight change of
viewpoint, the theory of LTI systems can be applied to many different fields of study.
Often, the concepts and theories developed for a given field of study are found, with
some small modifications, useful for other fields of study. This transference of
concepts and theories from one scientific field to another is very powerful because
it unifies many fields of scientific study and the different viewpoint often results in
new insights. This is one of the reasons why an individual should attempt to be
educated in several fields of study. An individual with such an education can then
use the concepts and approaches in one field of study to develop new ways of
thinking in another field of study. However, this can be accomplished only with a
basic understanding of the concepts and theories developed in the given field. It is
not sufficient to just know how to solve problems in the given field using the derived

formulae. This is one of the reasons why a basic discussion of the concepts and
theories of LTI systems is presented in this text.
As an illustration, we shall, with a slight change of viewpoint, use the concepts
and the theory we have developed for the study of LTI systems to analyze the
potential distribution in free space due to a given charge density distribution. Free
space is space with nothing else present. In electrostatics, a charge density distribu-
tion in space, ρ(p), results in a potential, φ(p), that exists in space and varies with
position, p. The basic equation governing this relation, called the Poisson equation,
is

\nabla^2 \phi(p) = -\frac{\rho(p)}{\epsilon}        (3.8-1a)

This is a shorthand notation for a differential equation. In rectangular coordinates the
differential equation is

\frac{\partial^2 \phi}{\partial x^2} + \frac{\partial^2 \phi}{\partial y^2} + \frac{\partial^2 \phi}{\partial z^2} = -\frac{\rho(x, y, z)}{\epsilon}        (3.8-1b)

Various methods for solving this linear differential equation are presented in texts
that discuss electrostatics. By one method, the potential is obtained by an integration
of the charge density called the potential integral. By using the concepts we have
developed for LTI systems, we shall obtain this integral without solving the Poisson
equation.
For our development, we view the problem as a system with the input being the
charge density distribution, ρ(p), and the output being the potential, φ(p). Note that
the input and the output in our defined system are functions of position, p, and not
time. We first show that the system is linear. This is shown by noting that if a certain
charge, q₁, results in the potential distribution φ₁, and some other charge, q₂, results
in the potential distribution φ₂, then the charge q = c₁q₁ + c₂q₂ will result in the
potential distribution φ = c₁φ₁ + c₂φ₂. That is, the mapping of inputs to outputs is
a linear mapping.
Now let the potential distribution in free space due to a charge located at p = p₁
be φ(p). If the charge is moved from p₁ to a new position, p₂ = p₁ + d, then the
potential distribution due to the charge in its new location will be φ(p + d). That is,
the potential distribution will remain the same except for being translated in space by
the same amount and direction as was the charge. Thus we observe that the mapping
of inputs to outputs is shift-invariant. Note that because the input and output of our
defined system are functions of position only, we have shift invariance instead of
time invariance. Thus our system is a linear shift-invariant (LSI) system.
The expression for the output of an LTI system is the convolution of the input
with the system unit-impulse response. Thus, by replacing position for time we have
that the output potential distribution of our LSI system will be equal to the spatial
convolution of the input charge density distribution with the LSI system unit-
impulse response.

The unit-impulse response of an LTI system is, as we discussed, the system
response to a positive pulse that occupies an infinitesimal region of time about the
origin and for which the value of the integral over the pulse is one. By analogy, the
LSI system unit-impulse response is the potential distribution due to a positive
charge density which occupies an infinitesimal region in space about the origin
and for which the value of the integral over the charge density is one. This is the
unit-point charge, which is a charge with q = 1 and which occupies an infinitesimal
region of space about the origin of the coordinate system.
Thus, from our development in this text, the potential distribution in free space
due to a given charge density, ρ(p), is equal to the spatial convolution of the charge
density with the potential distribution, φ₀(p), due to a unit-point charge. Thus the
potential distribution due to some given charge density distribution, ρ(p), is the
spatial convolution

\phi(p) = \int \rho(p')\,\phi_0(p - p')\,dp'        (3.8-2)

The function φ₀(p) is called a Green's function. In our terminology, a Green's
function is simply the system unit-impulse response.
From basic physics, the potential distribution due to a unit-point charge is

\phi_0(r) = \frac{1}{4\pi\epsilon r}        (3.8-3a)

where ε is the dielectric permittivity of the space and r is the distance from the unit-
point charge. In rectangular coordinates, this equation is

\phi_0(x, y, z) = \frac{1}{4\pi\epsilon\sqrt{x^2 + y^2 + z^2}}        (3.8-3b)

so that the expression for the convolution in rectangular coordinates is

\phi(x, y, z) = \iiint \rho(x', y', z')\,\phi_0(x - x',\, y - y',\, z - z')\,dx'\,dy'\,dz'

where the integration is over all space. Clearly, if the charge density is nonzero in
only some region in space, then the only nonzero contribution of the integration is
over that region in which the charge density is nonzero. The integral is called the
potential integral. Note that we were able to obtain this result without solving the
Poisson equation by using the concepts of LTI we have developed.
Clearly, causality is not a meaningful concept because we are concerned with
position and not with time in our charge-potential system. We will not discuss the

evaluation of this integral or further interpretations of it, such as its extension, by the
use of images, to spatial regions which contain conductors. The essential reason for
deriving the potential integral in this section is to illustrate how, with a slight change
of viewpoint, our theory of LTI systems can be transferred to some other scientific
fields.
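As an informal numerical illustration of the potential integral (a sketch only; the grid, the permittivity ε = 1, and the small cube of unit charge density are assumed for convenience), the spatial convolution of Eq. (3.8-2) can be approximated by direct summation over a discretized region of space:

import numpy as np

eps = 1.0                      # assumed permittivity (illustrative units)
d = 0.1                        # grid spacing (assumed)
ax = np.arange(-1.0, 1.0, d)   # a small cube of space
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")

# Assumed charge density: 1 inside a small cube centered on the origin, 0 outside.
rho = ((np.abs(X) < 0.2) & (np.abs(Y) < 0.2) & (np.abs(Z) < 0.2)).astype(float)

def potential(x, y, z):
    # phi(x, y, z) ~ sum of rho(x', y', z') * phi0(x - x', y - y', z - z') * dV,
    # the discrete counterpart of the potential integral.
    r = np.sqrt((x - X)**2 + (y - Y)**2 + (z - Z)**2)
    phi0 = 1.0 / (4.0 * np.pi * eps * np.maximum(r, d / 2))   # crude guard near r = 0
    return np.sum(rho * phi0) * d**3

# Far from the charge, the result approaches Q/(4 pi eps r), with Q the total charge.
Q = np.sum(rho) * d**3
for pt in [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]:
    r = np.linalg.norm(pt)
    print(f"phi{pt} = {potential(*pt):.5f}   point-charge value = {Q/(4*np.pi*eps*r):.5f}")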

PROBLEMS

3-1 System A is an LTI system with the unit-impulse response h_a(t) =
r(t) − r(t − 1). System B is the tandem connection of two systems A as
shown below.

[Figure: block diagram of system B, formed by two copies of system A connected in tandem.]

Determine h_b(t), the unit-impulse response of system B.

3-2 Use the results of Section 3.1 to show the following LTI system relations.
Also draw the equivalent block diagrams.
(a) [h_a(t) * h_b(t)] * h_c(t) = h_a(t) * [h_b(t) * h_c(t)]
(b) C[h_a(t) * h_b(t)] = [C h_a(t)] * h_b(t) = h_a(t) * [C h_b(t)]
(c) h_a(t) * [h_b(t) + h_c(t)] = [h_a(t) * h_b(t)] + [h_a(t) * h_c(t)]
(d) [h_a(t) + h_b(t)] * h_c(t) = [h_a(t) * h_c(t)] + [h_b(t) * h_c(t)]
(e) h_a(t) * δ(t) = h_a(t)
(f) h_a(t) * h_b(t) = h_b(t) * h_a(t)
In mathematics, these six properties define a commutative algebra with a
unity element. Properties (a) and (b) are called associative properties.
Properties (c) and (d) are called distributive properties. Property (e) is the
statement that there is a unity element, h(t) = δ(t). Property (f) is the
commutative property.
The importance of this result is that the whole mathematical theory of
commutative algebras with a unity element can be directly applied to LTI
system theory. One direct application of the above six properties is block
diagram reduction, which is discussed in Chapter 10.
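As a numerical aside (not part of the problem statement; the three pulses below are arbitrary choices), the commutative and associative properties can be spot-checked with discrete approximations of the convolution integral:

import numpy as np

dt = 1e-3
t = np.arange(0.0, 2.0, dt)
ha = np.where(t < 1.0, 1.0, 0.0)          # a rectangle, in the spirit of r(t) - r(t - 1)
hb = np.exp(-3.0 * t)                     # an exponential
hc = np.sin(2 * np.pi * t) * (t < 0.5)    # a short sinusoidal burst

def conv(f, g):
    # Discrete approximation of the convolution integral (causal part only).
    return np.convolve(f, g)[:len(t)] * dt

print(np.allclose(conv(ha, hb), conv(hb, ha)))                                   # commutative
print(np.allclose(conv(conv(ha, hb), hc), conv(ha, conv(hb, hc)), atol=1e-6))    # associative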

3-3 The unit-step response of a given LTI system is

s(t) = \begin{cases} 1 + t & \text{for } -1 < t \le 0 \\ 1 - t & \text{for } 0 < t \le 1 \\ 0 & \text{elsewhere} \end{cases}

(a) Sketch s(t). Label all important time and amplitude values.
(b) Sketch h(t), the system unit-impulse response.
(c) Is the given system causal? Your reasoning must be given.
(d) Is the given system stable (BIBO)? Your reasoning must be given.

3-4 System A is an LTI system with the unit-impulse response h(t). System A is
connected in tandem with an ideal delay system with a delay equal to T
seconds.
(a) Show that the ideal delay system is an LTI system.
(b) Use the discussion in Section 3.2 to prove for system A that if
x(t) → y(t), then x(t − T) → y(t − T).
(c) Also show that if the unit-impulse response of system A is changed to
h(t − T), then x(t) → y(t − T).

3-5 The unit-impulse response of an LTI system is h(t) = Ar(t/2). Determine the
following:
(a) Its unit-step response, s ( t ) .
(b) Its response to the input x ( t ) = r(t).
(c) Its response to the input x ( t ) = r(t/2).
Note that the solution to this problem can be obtained without using
convolution.

3-6 Let h_a(t) = δ(t − t₀) in Fig. 3.2-2, and thus show
(a) y(t − t₀) = x(t − t₀) * h(t).
(b) y(t − t₀) = x(t) * h(t − t₀).

3-7 (a) Show that δ(t) * δ(t) = δ(t) by evaluating lim_{ε→0} d_ε(t) * d_ε(t).
(b) Show that δ(t) * δ(t) * ··· * δ(t) = δ(t).
Suggestion: Let f_n(t, ε) = d_ε(t) * d_ε(t) * ··· * d_ε(t) and show that:
1. f_n(t, ε) is a positive pulse which is zero outside the interval 0 < t < nε.
2. The area under f_n(t, ε) is equal to one. That is, ∫_{-∞}^{∞} f_n(t, ε) dt = 1 so
that, from the results of Section 3.4, lim_{ε→0} f_n(t, ε) = δ(t).

3-8 The unit-step response of an LTI system is s(t) = [1 − (t + 1)e^{−t}]u(t). Use
the results developed in Sections 3.2 and 3.3 to determine the system unit-
impulse response, h(t).

3-9 The unit-impulse response of an LTI system is


for t < 0
for0 < t < 1
h(t) =
for t > 2
Determine a differential equation relating the system input, x(t), and
corresponding response, y(t).

3-10 The unit-impulse response of an LTI system is h(t) = sin(ω₀t)u(t). Determine


a differential equation relating the system input, x(t), and corresponding
response, y(t).

3-11 The unit-impulse response of an LTI system is h(t) = sin(ω₀t) r(t/T), where
ω₀T = 2πk and k is an integer. Determine a differential equation relating the
system input, x(t), and corresponding response, y(t).

3-12 Let y₁(t) = x(t) * x(t) and y₂(t) = x(−t) * x(t). Determine y₁(t) and y₂(t) for
x(t) = A e^{−at}u(t), where a > 0.

3-13 (a) Determine the unit-impulse response of the LTI feedback system shown
below.

(b) Use the result of part a to determine the system unit-step response, s(t).
(c) Use the result of part b to determine the system response to the input
x(t) = r(t/4).

3-14 Let the unit-impulse response of an LTI system be h(t) = Ar(t) and let the
system input be the triangle, x(t) = (1 − |t|)u(1 − |t|). Determine the system
response, y(t), by
(a) Convolution.
(b) Differentiating x(t).
(c) Differentiating h(t).

3-15 Let the unit-impulse response of an LTI system be h(t) = Ar(t/2), and let the
system input be the exponential, x(t) = B e^{−3t}u(t). Determine the system
response, y(t), by
(a) Convolution.
(b) Differentiating h(t).

3-16 Let the unit-impulse response of an LTI system be h(t) = A r(t/T), and let the
system input be the sinusoid x(t) = B sin(ω₀t)u(t), where ω₀ = 2π/T. Deter-
mine the system response, y(t), by
(a) Convolution.
(b) Differentiating h(t).

3-17 Two LTI systems, A and B, are connected in tandem to form the system C
shown in the diagram below.

[Figure: systems A and B connected in tandem to form system C.]

The unit-impulse response of system A is

h_a(t) = \begin{cases} 1 & \text{for } -1 < t \le 0 \\ -1 & \text{for } 0 < t < 1 \\ 0 & \text{otherwise} \end{cases}

and the unit-impulse response of system B is h_b(t) = r[(t − 1)/2].


(a) Determine the system response, z(t), to the input x(t) = E .
(b) Determine h_c(t), the unit-impulse response of system C.
(c) Is system C causal even though system A is not causal? Explain.

3-18 Consider two causal LTI systems, A and B, with unit-impulse responses h_a(t)
and h_b(t), respectively. The unit-step responses of systems A and B are s_a(t)
and s_b(t), respectively. It is observed that s_a(t) = s_b(t) for 0 < t < 1.
(a) Is it necessary that s_a(t) = s_b(t) for t > 1? Explain.
(b) Is it necessary that the responses of systems A and B to the input
x ( t ) = r(t) be equal? Explain.

3-19 Use convolution to show that, for a causal system, y(t) = 0 for t < t₀ if
x(t) = 0 for t < t₀.

3-20 In this problem, you are asked to prove the inequality, Eq. (3.6-2), used to
prove the necessary and sufficient condition that an LTI system be stable.
(a) Show that

\left| \int_a^b f(t)\,dt \right| = \int_a^b |f(t)|\,dt

if f(t) ≥ 0 or f(t) ≤ 0 over the integration interval.



(b) Show that

\left| \int_a^b f(t)\,dt \right| < \int_a^b |f(t)|\,dt

if f(t) > 0 over some portion of the integration interval and f(t) < 0 over
the other portion of the integration interval.

3-21 (a) The unit-impulse response of an LTI system is h(t). How is the point
t = 0 determined experimentally?
For each statement given below, state whether it is true or false and give a
short statement of your reasoning.
(b) The unit-impulse response of a stable LTI system can be a periodic
function.
(c) The input of a causal and stable LTI system must be zero for t < 0.
(d) If the input, x(t), of an LTI system is periodic, then depending on h(t), the
output, y(t), may or may not be periodic.

3-22 Consider two tandem connected LTI systems as shown in Fig. 3.1-3.
Use the basic definition of causality to show that the tandem-connected
system is causal if systems A and B are causal. Thus show that h(t) = 0
for t < 0 if both h_a(t) = 0 and h_b(t) = 0 for t < 0. From this, we have the
general result that the convolution of two functions that are zero for t < 0
is a function that is zero for t < 0.
Use the basic definition of stability to show that the tandem-connected
system is stable if systems A and B are stable. Thus show that the area
under |h(t)| is finite if the areas under |h_a(t)| and |h_b(t)| are finite. From
this we have the general result that if the area under the magnitude of two
functions is finite, then the area under the magnitude of their convolution is
also finite.

3-23 For each of the systems with the response y(t) to the input x ( t ) described
below, determine whether it is (1) linear, (2) time-invariant, (3) stable, and (4)
causal.

3-24 The response of a given system, y(t), for the input, x(t), is

y(t) = a + b\,x(t) + c\,x(t - T) + d\,x'(t)

Determine the values of the constants a, b, c, d, and T for which the system is
(a) Linear.
(b) Time-invariant.
(c) Causal.
(d) Stable.

3-25 Show that an LTI system with the unit-impulse response h(t) = u(t) is not
stable by determining the system response to the input x(t) = u(t).

3-26 Show that the causal LTI system with the unit-impulse response
h(t) = [1/(1 + t)]u(t) is not BIBO-stable, so that it is necessary but not
sufficient that lim_{t→∞} h(t) = 0 for an LTI system to be BIBO-stable.

3-27 The response, y(t), for the input, x(t), of an LTI system is

(a) Is the given system causal? Your reasoning must be given.


(b) Is the given system BIBO-stable? Your reasoning must be given.
(c) Determine the unit-impulse response, h(t) of the given system.

3-28 The response of a given system for the input, x(t), is

Determine any constraints on the constants a, b, c, and d required for the


system to be
(a) Linear.
(b) Time-invariant.
(c) Causal.
(d) Stable (BIBO).

3-29 The response, y(t), to the input, x(t), of a certain class of systems is

where β(t) ≥ α(t). Determine what restrictions, if any, on α(t) and β(t) are required
in order that the system be
(a) BIBO-stable.
(b) Causal.
(c) Time-invariant.
(d) Linear.

3-30 The response of a given system to the input x(t) is y(t) = x(2 - t ) . Determine
whether the given system is
(a) Linear.
(b) Time-invariant.
(c) Causal.
(d) BIBO-stable.

3-31 A problem in communication is the determination of a stable system for


which the input, x(t), is a signal that has been corrupted and whose output,
y(t), is the signal in which the corruption has been reduced significantly. The
corruption often could be further reduced if the future of the input were
available. Clearly, this requires a noncausal system that, as discussed in
Section 3.5,is not a physical system. For an LTI system, this would require
that the unit-impulse response, h(t), be nonzero for t < 0.
Let's consider such a stable, noncausal LTI system with the input x(t) and
response y(t). In this problem, we shall show that a delayed version of the
system output, y(t), can be obtained with a maximum error that decreases to
zero with increasing delay. For this, we begin with a stable LTI system with
the input, x(t), and output, y(t), for which h(t) # 0 for t < 0. Because the
system is BIBO-stable, we have that s?",Ih(t)l dt < 00.
(a) Show that, for the input, x(t), the output of a system with unit-impulse
response h(t − t₀) is y(t − t₀).
Now let h(t − t₀) = h₊(t) + h₋(t), where

h_+(t) = \begin{cases} h(t - t_0) & \text{for } t \ge 0 \\ 0 & \text{for } t < 0 \end{cases} \qquad h_-(t) = \begin{cases} 0 & \text{for } t \ge 0 \\ h(t - t_0) & \text{for } t < 0 \end{cases}

(b) Show that y(t − t₀) = h₊(t) * x(t) + h₋(t) * x(t) = y₊(t) + y₋(t), where
y₊(t) depends only on present and past values of x(t) and y₋(t) depends
only on future values of x(t).
The system with the unit-impulse response h-(t) is not a physical one
and thus cannot be realized. We thus construct a system with the unit-
impulse response h₊(t) for which the output is y₊(t). The error incurred is

then y₋(t) = y(t − t₀) − y₊(t). A physical input is a bounded waveform,
so that we have |x(t)| ≤ M.
(c) Show that |y₋(t)| ≤ M ∫_{-∞}^{0} |h₋(t)| dt = M ∫_{-∞}^{-t₀} |h(t)| dt.
(d) Show that the magnitude of the error, |y₋(t)|, decreases with increasing
delay and that lim_{t₀→∞} |y₋(t)| = 0.
We thus have shown that the output of a noncausal but BIBO-stable system
can be obtained with a maximum error that decreases to zero with increasing
delay.
CHAPTER 4

THE FREQUENCY DOMAIN VIEWPOINT

In Section 1.1, we described a system as a many-to-one mapping of inputs, x(t), into


outputs, y(t). In the theory of mappings, the mappings of just about any set of objects
are discussed. For example, matrix theory can be viewed as the mapping of vectors x̄
into vectors ȳ. All of matrix theory can be discussed in terms of the mapping of
vectors. This view of matrix theory is part of a mathematical subject called linear
vector space theory. In a linear mapping, there are certain objects that are invariant
under the mapping. For example, in the vector view of matrices, there are certain
vectors, x̄ᵢ, of a given matrix which are mapped into the vectors λᵢx̄ᵢ. These are
special vectors that are mapped into vectors parallel to themselves. These special
vectors, x̄ᵢ, are called the characteristic vectors of the mapping, and the constants,
λᵢ, associated with the characteristic vectors are called characteristic values.¹ Their
importance derives from the fact that the mapping is linear, so that the mapping
satisfies generalized superposition discussed in Section 2.1. From generalized super-
position, the mapping of a linear combination of vectors is the same linear combina-
tion of the mapping of each individual vector. Thus consider a vector, x̄, that can be
expressed as a linear combination of characteristic vectors,

\bar{x} = \sum_{i=1}^{n} c_i \bar{x}_i

'Characteristic vectors also are called eigenvectors, and characteristic values are also called eigenvalues.
Eigen means characteristic in German. The two alternate names are thus a mixture of German and English.
English, after all, is rather eclectic, so that it is not uncommon to see words in the English language which
have been borrowed in whole or part from other languages. For clarity, I am using the wholly English form
of the names.


The mapping of the vector x̄ is

\bar{y} = \sum_{i=1}^{n} c_i \lambda_i \bar{x}_i

Thus the characteristic vectors are a natural coordinate system to use for describing
the mapping of vectors by the given matrix.
In general, the objects that are invariant under a given linear mapping are impor-
tant objects to determine and study because they can be used to characterize the
mapping. In the theory of LTI systems, an input function is a characteristic function
if the corresponding output is equal to a constant times the input. The constant is the
characteristic value associated with the characteristic function. We shall study the
characteristic functions and characteristic values of an LTI system and their signifi-
cance in this chapter.

4.1 THE CHARACTERISTIC FUNCTION OF A STABLE LTI SYSTEM

In accordance with our discussion in Section 3.7, the stability condition is required
so that the output for a given input can be discussed without ambiguity. For our
present discussion, please note that we are not requiring the system to be causal. We
shall show that the characteristic function of a stable LTI system is the phasor

x(t) = e^{j\omega t}        (4.1-1)

Note that the phasor is not zero over any time interval. Also, the magnitude of the
phasor is one. This is seen by noting that the phasor is the polar form of a complex
number with a magnitude of one and an angle equal to ωt. That the magnitude is
one for all values of t also can be obtained by expressing the phasor in trigonometric
form as [see Appendix A, Eq. (A-12)]

e^{j\omega t} = \cos(\omega t) + j\sin(\omega t)        (4.1-2)

The square of the magnitude is then

|e^{j\omega t}|^2 = \cos^2(\omega t) + \sin^2(\omega t) = 1        (4.1-3)

so that the magnitude of the phasor is one for all values of t. We thus note that the
phasor is a bounded waveform. This is a good time to study Appendix A for a review
of complex algebra if you are a bit uncertain about it because we shall have continual
need of complex algebra from this point on.
Because the input phasor is a bounded waveform, the corresponding system
output also must be a bounded waveform because we are considering only BIBO-
stable systems. It is important to keep in mind that the only waveforms that can be

used physically as a system input are real functions. Because the phasor is a complex
function, it cannot be used physically as a system input. However, we certainly can
consider it theoretically. Later, we shall construct real waveforms as linear combina-
tions of phasors to obtain the system response to real waveforms.
We have shown that generally, for a stable LTI system, the output, y(t), for the
input, x(t), is

y(t) = \int_{-\infty}^{\infty} h(\tau)\,x(t - \tau)\,d\tau        (4.1-4)

We show that the phasor is a characteristic function of a stable LTI system by


substituting Eq. (4.1-1) to obtain

y(t) = \int_{-\infty}^{\infty} h(\tau)\,e^{j\omega(t - \tau)}\,d\tau        (4.1-5)

Now, using the identity

e^{j\omega(t - \tau)} = e^{j\omega t}\,e^{-j\omega\tau}        (4.1-6)

we obtain

y(t) = e^{j\omega t} \int_{-\infty}^{\infty} h(\tau)\,e^{-j\omega\tau}\,d\tau        (4.1-7)

The phasor, e^{jωt}, is factored out from under the integral because the integration is
over τ, and so it is a constant during the integration. Now the value of the integral is a
complex number that depends on the value of ω and the function h(t). We thus
express it as

H(j\omega) = \int_{-\infty}^{\infty} h(\tau)\,e^{-j\omega\tau}\,d\tau        (4.1-8)

The capital H indicates its dependence on h(t). With the use of Eq. (4.1-8), we can
express y(t) as

y(t) = H(j\omega)\,e^{j\omega t}        (4.1-9)

We observe that the response of a BIBO-stable LTI system to a phasor is the same
phasor multiplied by the constant H(jω). Thus the phasor, e^{jωt}, is a characteristic
function of the stable LTI system, and the constant H(jω) is the characteristic value
associated with the characteristic function. In LTI system theory, the characteristic
value, H(jω), is called the transfer function for reasons we shall discuss in the next
section.

Now, the magnitude of the system response to the input phasor is

|y(t)| = |H(j\omega)\,e^{j\omega t}| = |H(j\omega)|\,|e^{j\omega t}| = |H(j\omega)|        (4.1-10)

In obtaining Eq. (4.1-10) we have used the result developed in Appendix A that the
magnitude of the product of two complex numbers is equal to the product of their
magnitudes and, as shown above, that the magnitude of a phasor is one. As discussed
above, y(t) is a bounded waveform because we are considering only BIBO-stable LTI
systems. Consequently, from Eqs. (4.1-10) and (4.1-8) we have that

|H(j\omega)| = \left| \int_{-\infty}^{\infty} h(\tau)\,e^{-j\omega\tau}\,d\tau \right| < \infty        (4.1-11)

That is, the value of the integral must be finite for any value of ω because the system
is BIBO-stable. The necessary and sufficient condition for the LTI system to be
BIBO-stable was shown to be

\int_{-\infty}^{\infty} |h(t)|\,dt < \infty        (4.1-12)

Thus we note that a sufficient condition for the integral given by Eq. (4.1-8) to
converge (that is, for the value of the integral to be finite) is that the area under the
magnitude of h(t) be finite. Thus, the value of the transfer function, H(jω), of any
BIBO-stable LTI system is not infinite for any value of ω.
Before proceeding, let us consider a simple specific example to illustrate the
results obtained. We consider an LTI system with the unit-impulse response

h(t) = A\,e^{-at}\,u(t), \qquad a > 0        (4.1-13)

This system is stable because

\int_{-\infty}^{\infty} |h(t)|\,dt = \int_{0}^{\infty} |A|\,e^{-at}\,dt = \frac{|A|}{a} < \infty        (4.1-14)

The lower limit of the second integral is zero because, for our example, h(t) = 0 for
t < 0. Also, for t > 0 we have |h(t)| = |A|\,e^{-at}u(t).
The transfer function is obtained by using Eq. (4.1-8). For that integration, the
function to be integrated is

h(\tau)\,e^{-j\omega\tau} = A\,e^{-(a + j\omega)\tau}\,u(\tau)        (4.1-15)

so that

H(j\omega) = \int_{0}^{\infty} A\,e^{-(a + j\omega)\tau}\,d\tau = \frac{A}{a + j\omega}, \qquad a > 0        (4.1-16)

As expected, the transfer function for our example is a complex function of ω. Its
magnitude and angle are

|H(j\omega)| = \frac{|A|}{\sqrt{a^2 + \omega^2}}, \qquad \angle H(j\omega) = \angle A - \tan^{-1}\!\left(\frac{\omega}{a}\right)        (4.1-17a)

where, as discussed in Appendix A, we have

\angle A = \begin{cases} 0 & \text{if } A > 0 \\ \pi & \text{if } A < 0 \end{cases}        (4.1-17b)

Note that, as expected, H(jω) is not infinite for any value of ω. Thus we have for our
example that the response of the given stable LTI system to the input x(t) = e^{jωt} is
y(t) = [A/(a + jω)]\,e^{jωt}.
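The result can be cross-checked numerically (a sketch only, with assumed values A = 2, a = 3, and ω = 5): the integral of Eq. (4.1-8), evaluated numerically for this h(t), should agree with A/(a + jω):

import numpy as np

A, a, w = 2.0, 3.0, 5.0                  # assumed illustrative values
dt = 1e-4
tau = np.arange(0.0, 20.0, dt)           # e^(-a tau) is negligible well before tau = 20

h = A * np.exp(-a * tau)                 # h(tau) = A e^(-a tau) u(tau)
H_numeric = np.trapz(h * np.exp(-1j * w * tau), tau)   # Eq. (4.1-8)
H_closed = A / (a + 1j * w)                            # Eq. (4.1-16)

print(H_numeric)
print(H_closed)                                        # the two agree closely
print(abs(H_closed), np.angle(H_closed))               # gain |A|/sqrt(a^2 + w^2), phase -arctan(w/a)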

4.2 SINUSOIDAL RESPONSE

We showed in the last section that the response of a BIBO-stable LTI system to the
input x(t) = e^{jωt} is y(t) = H(jω)e^{jωt}, in which the transfer function, H(jω), is given
by Eq. (4.1-8). From generalized superposition, Eq. (2.1-1), the response of a linear
system response to each individual function. Consequently, the LTI system response
to the input

(4.2-1)

is

(4.2-2)

Thus, once the transfer function is known, we can determine, without convolution,
the response of the stable LTI system2 to any input, which can be expressed as a
linear combination of phasors. This is a principal reason why characteristic functions
are important in the theory of stable LTI systems. That is, if the input can be
expressed as a linear combination of characteristic functions, then the output can
be expressed as the same linear combination of the mapping of each individual
characteristic function (which is just the characteristic function times its character-
istic value). This result leads to a different and important view of LTI systems. Our
previous analysis using convolution considers a system in the time domain. The
analysis we shall develop using the above result considers a system in thefiequency
domain. Note from Eq. (4.2-1) that the analysis we shall develop is restricted to
inputs that can be expressed as the linear combination of phasors. However, even
with this restriction, the frequency domain analysis we shall develop results in an
important and useful view of LTI systems.
We begin our development by determining the response, y(t), of a stable LTI
system to the constant input

x(t) = E (4.2-3)

This equation states that the input is a constant equal to E for −∞ < t < ∞. Such
an input is often referred to as a dc input³ with a dc value of E. This input can be
expressed in phasor form as E times a phasor with a frequency ω = 0. That is,

x(t) = E\,e^{j0t}        (4.2-4)

Consequently, from Eq. (4.2-2), the system response to this input is

y(t) = E\,H(j0)\,e^{j0t} = E\,H(0)        (4.2-5)

We thus note that the system response also is a constant with the dc value of EH(0).
For this reason, |H(0)| is called the dc gain of the LTI system. Note from Eq. (4.1-8)
that

H(0) = \int_{-\infty}^{\infty} h(t)\,dt        (4.2-6)

The dc gain is seen to be equal to the magnitude of the area under h(t). In conse-
quence, the dc gain is zero if and only if the area under h(t) is zero.

²For conciseness from now on, stable will mean BIBO-stable because that is the type of stability with
which we are mainly concerned in this text.
³The notation dc stands for direct current. This is a carryover from an abbreviation used in the early days
of electrical engineering. Today, it is used to refer to any constant waveform, not just direct current.

We now determine the response, y(t), of a stable LTI system to the sinusoidal
input

x(t) = E\cos(\omega t + \phi)        (4.2-7)

Note that the input, x(t), is a sinusoid for −∞ < t < ∞. This input can be
expressed as the linear combination of phasors. From Eq. (A-14) in Appendix A,
we can express x(t) as

x(t) = \frac{E}{2}\,e^{j\phi}\,e^{j\omega t} + \frac{E}{2}\,e^{-j\phi}\,e^{-j\omega t}        (4.2-8)

In this equation, x(t) is expressed as the linear combination of two phasors. In
relation to Eq. (4.2-1), the sum has two terms in which c₁ = (E/2)e^{jφ}, c₂ = (E/2)e^{−jφ},
ω₁ = ω, and ω₂ = −ω. Thus, from Eq. (4.2-2) we immediately have

y(t) = \frac{E}{2}\,e^{j\phi}\,H(j\omega)\,e^{j\omega t} + \frac{E}{2}\,e^{-j\phi}\,H(-j\omega)\,e^{-j\omega t}        (4.2-9)
The unit-impulse response, h(t), of a physical system is a real function. For


physical systems then, the output, y(t), must be a real function if the input, x(t), is
a real fhction because the convolution of a real function with a real function must
be a real function. The input in our present example is a real function, and so the
output given by Eq. (4.2-9) must be a real function if h(t) is a real function. This
would be true if, as discussed in Appendix A, the two terms in Eq. (4.2-9) are
conjugates. Observe that the two terms in this equation are conjugates if
H(−jω) = H*(jω). We thus expect this to be true if h(t) is a real function. It is
this expectation that motivates our examining whether it is true.
We begin with Eq. (4.1-8), the general expression for the transfer function,

H(j\omega) = \int_{-\infty}^{\infty} h(t)\,e^{-j\omega t}\,dt        (4.2-10)

By replacing ω with −ω we have

H(-j\omega) = \int_{-\infty}^{\infty} h(t)\,e^{j\omega t}\,dt        (4.2-11)

Now, as discussed in Appendix A, the conjugate of a function is obtained by replacing
every j in the expression with −j. Thus the conjugate of Eq. (4.2-10) is

H^*(j\omega) = \int_{-\infty}^{\infty} h^*(t)\,e^{j\omega t}\,dt        (4.2-12)

Equations (4.2-11) and (4.2-12) are identical if h(t) = h*(t). But, from Appendix A,
this is true if and only if h(t) is real. We thus have shown that H(−jω) = H*(jω) if
and only if h(t) is real. This is the case for physical systems. It is sometimes
convenient for theoretical reasons to define an LTI system (which is not a physical
system) with a unit-impulse response that is a complex function. All results obtained
for the case in which h(t) is a real function must be reexamined when studying such
theoretically contrived systems.
As expected for physical systems, we have with this result that the two terms in
Eq. (4.2-9) are conjugates, so that from Eq. (A-19) in Appendix A we have

y(t) = 2\,\mathrm{Re}\!\left[\frac{E}{2}\,H(j\omega)\,e^{j(\omega t + \phi)}\right]        (4.2-13)

This equation can be put in a much better form by expressing the transfer function in
polar form:

H(j\omega) = |H(j\omega)|\,e^{j\theta(\omega)}        (4.2-14)

In this expression, θ(ω) = ∠H(jω). With this form for the transfer function, we can
express Eq. (4.2-13) as

y(t) = 2\,\mathrm{Re}\!\left[\frac{E}{2}\,|H(j\omega)|\,e^{j(\omega t + \phi + \theta(\omega))}\right]        (4.2-15)

The real part of this expression is easily obtained with the use of Euler's formula [Eq.
(A-12) of Appendix A]. The result is

y(t) = E\,|H(j\omega)|\cos\!\big(\omega t + \phi + \theta(\omega)\big)        (4.2-16)

By use of Eq. (4.2-2) we have been able to obtain the system response to the
sinusoidal input given by Eq. (4.2-7) without convolution.
There are a few important observations to make concerning our result:

1. The first observation is that the response of a stable LTI system to a sinusoid is
a sinusoid. Note from our previous examples in Section 2.5 that this is not
generally true for other waveforms. Also, it is not generally true for nonlinear
or even linear time-varying systems; it is only for LTI systems that it is
generally true.
2. The second observation to make is that the frequency of the output sinusoid is
the same as that of the input sinusoid. This is not generally true for nonlinear
or even linear time-varying systems. It is only for LTI systems that it is
generally true.

3. The third observation to make is that the ratio of the amplitude of the output
sinusoid to that of the input sinusoid is equal to the magnitude of the transfer
function, |H(jω)|. For this reason, |H(jω)| is called the system gain. The
system gain is a function of the frequency, ω. The value that must be used in
Eq. (4.2-16) is the value of the system gain, |H(jω)|, evaluated at the
frequency of the input sinusoid. Note that the output amplitude must be
finite because the system is stable and the input amplitude is finite. In
consequence we conclude from Eq. (4.2-16) that |H(jω)| < ∞. This is just
another way of obtaining the result given by Eq. (4.1-11).
4. The fourth observation to make is that the phase angle of the output sinusoid is
equal to that of the input sinusoid plus θ(ω) = ∠H(jω). That is, the input
sinusoid has been phase-shifted by ∠H(jω). For this reason, ∠H(jω) is
called the system phase shift. Like the system gain, the system phase shift is a
function of the frequency, ω. The value that must be used in Eq. (4.2-16) is the
value of the phase shift, θ(ω) = ∠H(jω), evaluated at the frequency of the
input sinusoid.

Note that the same output, y(t), given by Eq. (4.2-16) would be obtained if the
phase shift were increased or decreased by any integer number of 2π radians (or,
equivalently, an integer number of 360°). For example, one cannot differentiate
between a phase shift of 400° and one of 40°, or between a phase shift of 300°
and one of −60°. For this reason, when reporting the phase shift, an integer number
of 360° (or, equivalently, an integer number of 2π radians) often is added to or
subtracted from a calculated value of phase shift so that the reported value of
θ(ω) lies in the range −180° < θ(ω) ≤ 180° (or, equivalently, in the range
−π < θ(ω) ≤ π radians).
As an illustration, consider an LTI system with the unit-impulse response
h(t) = A e^{-at} u(t), where a > 0. The gain and the phase shift of this system were
obtained in Section 4.1, Eq. (4.1-17). From that result we have that the system
response to the input

x(t) = E\cos(\omega_0 t + \phi_0)        (4.2-17)

is, with the use of Eq. (4.2-16),

y(t) = \frac{E\,|A|}{\sqrt{a^2 + \omega_0^2}}\cos\!\left(\omega_0 t + \phi_0 + \angle A - \tan^{-1}\frac{\omega_0}{a}\right)        (4.2-18)

Observe for this example that, for a given input amplitude, the higher the frequency
ω₀, the smaller the amplitude of the output. This is so because the gain for the given
system is a decreasing function of frequency. In consequence, such a system is called
a low-pass filter because low-frequency sinusoids are "passed" with a larger gain
than are high-frequency ones.
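As a numerical sketch of this low-pass behavior (illustrative values A = 1, a = 2, E = 1, ω₀ = 5, and φ₀ = 0 are assumed), a convolution-simulated response to a sinusoid applied at t = 0 settles onto the sinusoid predicted by Eq. (4.2-18) once the initial transient (discussed later in this section) has died out:

import numpy as np

A, a = 1.0, 2.0                  # assumed system parameters: h(t) = A e^(-a t) u(t)
E, w0, phi0 = 1.0, 5.0, 0.0      # assumed input sinusoid E cos(w0 t + phi0)
dt = 1e-3
t = np.arange(0.0, 10.0, dt)

h = A * np.exp(-a * t)
x = E * np.cos(w0 * t + phi0)                 # input applied starting at t = 0
y = np.convolve(h, x)[:len(t)] * dt           # simulated response

gain = abs(A) / np.sqrt(a**2 + w0**2)         # |H(j w0)|, from Eq. (4.1-17a)
phase = -np.arctan(w0 / a)                    # phase shift (A > 0 here)
y_pred = E * gain * np.cos(w0 * t + phi0 + phase)   # prediction of Eq. (4.2-18)

# Compare over the second half of the record, after the transient has decayed.
k = slice(len(t) // 2, None)
print(np.max(np.abs(y[k] - y_pred[k])), "versus output amplitude", E * gain)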

As a second example, consider an LTI system with the unit-impulse response

h(t) = A\,r\!\left(\frac{t}{T}\right)        (4.2-19)

The system is stable because

\int_{-\infty}^{\infty} |h(t)|\,dt = \int_{0}^{T} |A|\,dt = |A|\,T < \infty        (4.2-20)

The transfer function of this system is

H(j\omega) = \int_{0}^{T} A\,e^{-j\omega t}\,dt = \frac{A}{j\omega}\left[1 - e^{-j\omega T}\right]        (4.2-21)

The gain and the phase shift of this system can be obtained easily by first expressing
H(jω) in polar form. This can be done by noting that

1 - e^{-j\theta} = \left[e^{j\theta/2} - e^{-j\theta/2}\right] e^{-j\theta/2} = \left[2j\sin\!\left(\frac{\theta}{2}\right)\right] e^{-j\theta/2}        (4.2-22)

With this result, we can express Eq. (4.2-21) as

H(j\omega) = A\,T\,\frac{\sin(\omega T/2)}{\omega T/2}\,e^{-j\omega T/2}        (4.2-23)

Thus the system gain is

|H(j\omega)| = |A|\,T\left|\frac{\sin(\omega T/2)}{\omega T/2}\right|        (4.2-24)

and the system phase shift is

\angle H(j\omega) = \angle A + \angle\!\left[\frac{\sin(\omega T/2)}{\omega T/2}\right] - \frac{\omega T}{2}        (4.2-25a)

Because the angle of a positive real number is zero and the angle of a negative real
number is π radians, we have

\angle A = \begin{cases} 0 & \text{if } A > 0 \\ \pi & \text{if } A < 0 \end{cases}        (4.2-25b)

and also

\angle\!\left[\frac{\sin(\omega T/2)}{\omega T/2}\right] = \begin{cases} 0 & \text{if } \dfrac{\sin(\omega T/2)}{\omega T/2} > 0 \\ \pi & \text{if } \dfrac{\sin(\omega T/2)}{\omega T/2} < 0 \end{cases}        (4.2-25c)
Thus, to determine and sketch the gain and the phase shift versus ω, we need to
sketch sin(ωT/2)/(ωT/2) versus ω. For this, we first sketch f(θ) = sin(θ)/θ versus θ,
which is shown in Fig. 4.2-1. Note that f(θ) = f(−θ). A function for which this is
true is called an even function. Because f(θ) is an even function, we really only need
to determine its graph for θ ≥ 0 because the graph for negative values of θ is then
easily obtained by reflecting the curve for positive values of θ about the ordinate.
The sketch is obtained by first obtaining a simple expression for the curve for small
values of θ. This is obtained by using the power series of sin θ about θ = 0, which,
from Appendix A, is

\sin\theta = \theta - \frac{1}{3!}\theta^3 + \cdots        (4.2-26a)

so that the power series expansion of f(θ) about θ = 0 is

f(\theta) = 1 - \frac{\theta^2}{3!} + \cdots        (4.2-26b)

From this approximation, we observe that f(0) = 1 and that the graph of f(θ) about
θ = 0 is a parabola. The rest of the graph of f(θ) shown in Fig. 4.2-1 is obtained by
noting that

f(\theta) = \begin{cases} \dfrac{1}{\theta} & \text{when } \sin\theta = 1 \text{ or } \theta = \dfrac{\pi}{2} + 2n\pi \\[4pt] -\dfrac{1}{\theta} & \text{when } \sin\theta = -1 \text{ or } \theta = \dfrac{3\pi}{2} + 2n\pi \\[4pt] 0 & \text{when } \sin\theta = 0 \text{ or } \theta = n\pi \end{cases}        (4.2-27)

The graph of sin(ωT/2)/(ωT/2) versus ω is now easily obtained by noting that it is
the same graph as f(θ) with θ = ωT/2 or ω = 2θ/T. Thus, with the use of

Eq. (4.2-27), sin(ωT/2)/(ωT/2) = 0 for ω = 2nπ/T rad/s. Because ω = 2πf, this
can be expressed in terms of f as 2πf = 2nπ/T or f = n/T Hz. Thus, H(jω) = 0 at
the frequencies f = n/T Hz for our example. This means that the gain is zero at
those frequencies. To physically understand this result, note from the convolution
integral that the output for this system is

y(t) = \int_{-\infty}^{\infty} x(\tau)\,h(t - \tau)\,d\tau = A\int_{t-T}^{t} x(\tau)\,d\tau        (4.2-28)

The output, y(t), is thus A times the area under the last T seconds of x(t). Now,
the gain and the phase shift are defined only for a sinusoidal input as given by Eq.
(4.2-17). For this input, the system response for our example is

y(t) = A\int_{t-T}^{t} E\cos(\omega_0 \tau + \phi_0)\,d\tau        (4.2-29)
If the frequency of the input sinusoid is f = n/T Hz, then from our discussion of
sinusoids in Section 1.4, Eq. (1.4-14), the output, y(t), is the area under the last n
cycles of the input sinusoid. This area is zero because the area under each sinusoidal
cycle is zero. Thus y(t) = 0 for f = n/T Hz. From Eq. (4.2-16), this means that the
gain is zero at these frequencies. Remember that the gain and the phase shift are
defined only for a sinusoidal input.
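A quick numerical sketch of these nulls (A = 1 and T = 2 s are assumed values) evaluates the gain of Eq. (4.2-24); note that np.sinc(x) = sin(πx)/(πx), so np.sinc(f*T) equals sin(ωT/2)/(ωT/2):

import numpy as np

A, T = 1.0, 2.0                           # assumed illustrative values
f = np.array([0.25, 0.5, 0.75, 1.0, 1.5]) # test frequencies in Hz; here 1/T = 0.5 Hz

# Gain of the rectangular impulse response h(t) = A r(t/T), per Eq. (4.2-24).
gain = np.abs(A) * T * np.abs(np.sinc(f * T))

for fi, gi in zip(f, gain):
    print(f"f = {fi:4.2f} Hz   gain = {gi:.6f}")
# The gain is zero whenever f is an integer multiple of 1/T (0.5 Hz, 1.0 Hz, 1.5 Hz, ...).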
The LTI system response to any input that can be expressed as the sum of
sinusoids now can be immediately determined with the use of generalized super-

1
0.9
0.0
0.7
0.6

0.2
0.1
0
-0.1
-0.2
-0.3I
-5 -4 -3 -2 -1 0 1 2 ' 3 4 5
-e
2n

Fig. 4.2-1 Graph of f ( e ) =


sin(@) e
versus -.
0
~

271
4.2 SINUSOIDAL RESPONSE 117

position and our present result, Eq. (4.2-16). The stable LTI system response to the
input

x(t) = \sum_{n} E_n \cos(\omega_n t + \phi_n)        (4.2-30)

is

y(t) = \sum_{n} E_n\,|H(j\omega_n)|\cos\!\big(\omega_n t + \phi_n + \theta(\omega_n)\big)        (4.2-31)

The transfer function thus allows us to determine, without convolution, the response
of any stable LTI system to any input that can be expressed as the sum of phasors,
Eq. (4.2-1), or, equivalently, as the sum of sinusoids, Eq. (4.2-30).
The input, x(t), given by Eq. (4.2-30) is for −∞ < t < ∞. If, for example, the
input were zero for t < 0, then the system response would not be given by Eq.
(4.2-31). However, as t → ∞, the system output of any stable LTI system will
tend to the output given by Eq. (4.2-31). Consequently, it is called the steady-
state response.
This last result can be shown using our discussion at the end of Section 3.5. For
the example given there it is shown that, as an input value recedes into the past, its
influence on the system output decreases exponentially because the impulse
response, h(t), of the given system decays exponentially. Consequently, in the convo-
lution integral given by Eq. (3.5-8), the influence of the input values for t ≤ 0
decreases exponentially as t₀ increases. This means that the output of the given
system will tend to the same waveform as t₀ increases irrespective of the input
waveform for t < 0. Thus, for an input given by

x(t) = \sum_{n} E_n \cos(\omega_n t + \phi_n)\,u(t)        (4.2-32)
the output will tend to that given by Eq. (4.2-31) as t → ∞. The output given by Eq.
(4.2-31) is thus called the steady-state system response because it is the response to
which the given system output tends as t → ∞. The difference between the actual
response and the steady-state response is called the transient response. Note that the
transient response tends to zero as t → ∞.
We now can generalize our discussion above to any stable system. The necessary
and sufficient condition for stability of an LTI system is given by Eq. (3.6-1). The
requirement, as discussed in Section 3.6, is that the area under |h(t)| be finite. Note
that this requires that h(t) → 0 as t → ∞ because, if not, then the area under |h(t)|
would be infinite. The system discussed above is a special case of a stable LTI
system because its impulse response decreases to zero exponentially as t increases.
In terms of our discussion in Section 3.5, we thus observe that any stable LTI system
has a memory that decays as time increases so that, as an input value recedes into the
past, its influence on the output decreases. Thus, as time increases, the influence of
the input values for t ≤ 0 decreases so that, for an input given by Eq. (4.2-32) above,

the output will tend to the steady-state response given by Eq. (4.2-31). The rate of
this approach, which is the same as the rate at which the transient tends to zero,
depends on the rate at which |h(t)| approaches zero as t increases; equivalently, it
depends on the rate at which the system memory decays as time increases.
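The decay of the transient can be illustrated numerically (a sketch only; the one-pole system h(t) = e^{-at}u(t) with a = 1, driven by cos(ω₀t) with ω₀ = 2 switched on at t = 0, is an assumed example). The difference between the actual response and the steady-state response shrinks in proportion to e^{-at}, the decay of |h(t)| itself:

import numpy as np

a, w0 = 1.0, 2.0        # assumed system and input parameters
dt = 1e-3
t = np.arange(0.0, 4.0, dt)

h = np.exp(-a * t)                                # h(t) = e^(-a t) u(t)
x = np.cos(w0 * t)                                # input switched on at t = 0
y = np.convolve(h, x)[:len(t)] * dt               # actual response

H = 1.0 / (a + 1j * w0)                           # transfer function at w0
y_ss = np.abs(H) * np.cos(w0 * t + np.angle(H))   # steady-state response
transient = y - y_ss

# The transient magnitude decays at the same exponential rate as |h(t)|.
for tc in (0.5, 1.0, 2.0):
    k = int(tc / dt)
    print(f"t = {tc}:  |transient| = {abs(transient[k]):.4f}   e^(-a t) = {np.exp(-a * tc):.4f}")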
These results lead us to the question as to which input waveforms can be
expressed as the sum of phasors, Eq. (4.2-1), or, equivalently, as the sum of sinu-
soids, Eq. (4.2-30), and if so, how to determine the expression. The representation
of a waveform as the linear combination of phasors is called Fourier analysis. Our
results in this section are the reason Fourier analysis is important in the theory of
stable LTI systems. It is a consequence of the result that the phasor is a characteristic
function of a stable LTI system. If some waveform other than the phasor were a
characteristic function of an LTI system, we would be interested in expressing the
input as a linear combination of the other waveform. It is important to appreciate this
because the phasor is not a characteristic function of, for example, linear time-
varying (LTV) systems. Consequently, Fourier analysis is of limited value in the
study of LTV systems.
Before considering Fourier analysis, we shall examine the gain and the phase shift
of tandem connected systems. The results we shall obtain are basic because they
form the basis for our analysis of LTI systems in the frequency domain. An inter-
esting observation to make as you study the following sections is that the results
obtained can be viewed as mathematical results without any regard to their system
theory source. These results are the fundamental ones required for the development
of Fourier theory, which we shall discuss in the next chapter.

4.3 TANDEM-CONNECTED LTI SYSTEMS

In Section 3.1, the tandem connection of two LTI systems as shown in Fig. 4.3-1 was
studied in the time domain. We showed there that the tandem connection of two LTI
systems, systems A and B, is an LTI system. Consequently, it was shown that the
tandem-connected system can be represented in terms of an impulse response, h(t),
given by Eq. (3.1-5). Using the notation established in that section, we showed that

h(t) = h_a(t) * h_b(t)        (4.3-1)

Two other properties of the tandem-connected system of importance are causality


and stability.
First, it is clear that if systems A and B are causal systems, then the tandem
connected system is a causal system. This result follows from the definition of
causality. At any time, the output z(t) does not depend on any future values of
y_a(t) because system B is causal, and also y_a(t) does not depend on any future
values of x(t) because system A is causal. Consequently, the tandem system is
causal because, at any time, the output z(t) does not depend on any future values
of x(t). Note that the type of system was not used in this argument, so that this result

Fig. 4.3-1 Two LTI systems connected in tandem.

is true for the tandem connection of any two systems whether linear or nonlinear,
time-invariant or time-varying.
We showed in Section 3.5 that an LTI system with a unit-impulse response h(t) is
causal if and only if h(t) = 0 for t < 0. We thus conclude the following
If
h,(t) = 0 fort < 0 and hb(t) = 0 fort < 0

then
h(t) = h,(t) * hb(t) = 0 for t < 0
Note that this result can be viewed purely as a mathematical result without any
regard to system theory. That is, if two functions that are zero for t < 0 are
convolved, then the result is a function that is zero for t < 0. The obtaining of
mathematical results from system theory results is an important and useful technique
that we shall use.
For the second property, we show that if systems A and B are stable systems, then
the tandem-connected system is a stable system.⁴ This result follows from the
definition of BIBO-stability. If x(t) is a bounded waveform, then y_a(t) is a bounded
waveform because system A is a stable system. If y_a(t) is a bounded waveform, then
z(t) is a bounded waveform because system B is a stable system. Thus the tandem
system is BIBO-stable because every bounded input, x(t), produces a bounded
output, z(t). Thus, if systems A and B are stable systems, then the tandem-connected
system is a stable system. Again note that the type of system was not used in this
system is a stable system. Again note that the type of system was not used in this
argument, so that this result is true for the tandem connection of any two systems
whether linear or nonlinear, time-invariant or time-varying.
We showed in Section 3.6 that an LTI system with the unit-impulse response h(t)
is BIBO-stable if and only if h(t) is absolutely integrable; that is, if and only if

(4.3-2)

A function that is absolutely integrable is said to be an L , fun~ tion.Using


~ this
terminology, our stability result can be stated as follows:

⁴Remember that, for conciseness, we are using "stability" to refer to "BIBO-stability" because that is the
type of stability with which we are mainly concerned in this text.
⁵The L is used to honor Henri Leon Lebesgue (1875-1941), who made major contributions to the theory
of integration. The sub-1 denotes that it is the integral of the first power of the magnitude of the function
that is finite.

If
h_a(t) is an L₁ function and h_b(t) is an L₁ function
then
h(t) = h_a(t) * h_b(t) is an L₁ function

Note that we have not assumed either system A or system B to be causal in obtaining
this result. Thus this result is valid even if the component systems are not causal.
Again, note that this result can be viewed purely as a mathematical result without
any regard to system theory. That is, if two L₁ functions are convolved, then the
result is an L₁ function. This is another example of obtaining a mathematical result
from a system theory result.
We now determine the transfer function of a tandem-connected system. Because,
from Eq. (3.1-5), h(t) = h_a(t) * h_b(t), we should be able to express H(jω) in terms of
H_a(jω) and H_b(jω). This could be done by substituting the convolution integral,
Eq. (3.1-5), into the equation for the transfer function, Eq. (4.1-8), and manipulating
the resulting equation to obtain the desired result. However, a simpler and more
insightful method is to use the fact that the phasor is a characteristic function of a
stable LTI system.
The method uses the result expressed by Eq. (4.1-9), which is that
e^{jωt} → H(jω)e^{jωt} for a stable LTI system. For this we consider the tandem connec-
tion shown in Fig. 4.3-1 in which systems A and B are stable. The tandem-connected
system then is stable in accordance with our result above. Now let x(t) = e^{jωt}. The
response of the tandem-connected system to the input phasor is z(t) = H(jω)e^{jωt}.
Another expression for z(t) is obtained by following the phasor through the tandem
connection. For this we have that the response of system A to the input phasor is
y_a(t) = H_a(jω)e^{jωt}. Thus y_a(t) is equal to a constant, H_a(jω), times the phasor, e^{jωt}.
Hence, using the homogeneous property of linear systems, Eq. (2.1-3), the response
of system B is z(t) = H_a(jω)H_b(jω)e^{jωt}. From above, z(t) = H(jω)e^{jωt}. Because
the two expressions for z(t) must be equal for all t, we immediately have the result

H(j\omega) = H_a(j\omega)\,H_b(j\omega)        (4.3-3)

Note that we have not assumed either system A or system B to be causal in obtaining
this result. Thus this result is valid even if the component systems are stable but not
causal.
This result also can be viewed purely as a mathematical result without any regard
to system theory. That is, if h,(t) and h b ( t ) are L , functions, then h(t) = h,(t)*hb(t)
is an L , function in accordance with our previous result and also

The gain of the tandem-connected system is thus the product of the gains of the
two component systems because, from Eq. (4.3-3) and with the use of Eq. (A-26) in
Appendix A, we obtain

|H(j\omega)| = |H_a(j\omega)|\,|H_b(j\omega)|        (4.3-5)

Also from Eq. (4.3-3), the phase shift of the tandem-connected system is the sum
of the phase shifts of the two component systems because, with the use of Eq. (A-27)
in Appendix A, we have

\angle H(j\omega) = \angle H_a(j\omega) + \angle H_b(j\omega)        (4.3-6)
The last two relations also can be obtained from physical considerations. For this,
let the input, x(t), be a sinusoid with a magnitude |E| and a frequency ω₀ rad/s.
Because the gain of system A is |H_a(jω₀)|, the output of system A, y_a(t), is a
sinusoid with a magnitude |E||H_a(jω₀)|. The sinusoid y_a(t) is the input of system
B. Because the gain of system B is |H_b(jω₀)|, the output of system B, z(t), is a
sinusoid with a magnitude |E||H_a(jω₀)||H_b(jω₀)|. Thus we see that the gain of the
tandem-connected system is that given by Eq. (4.3-5). The phase shift also can be
obtained from physical considerations. System A shifts the input sinusoid by an
amount equal to ∠H_a(jω₀), and system B shifts the sinusoid y_a(t) by an amount
equal to ∠H_b(jω₀). Consequently, the difference in phase between the output sinu-
soid, z(t), and the input sinusoid, x(t), is the sum of the two phase shifts as given by
Eq. (4.3-6).
If a third stable LTI system with the transfer function H_c(jω) were connected in
tandem with the other two stable LTI systems, it is clear from our previous work that
the tandem-connected system is a stable LTI system with the transfer function

H(j\omega) = H_a(j\omega)\,H_b(j\omega)\,H_c(j\omega)        (4.3-7)

Clearly, if several stable LTI systems are connected in tandem, the transfer function
of the tandem-connected system is simply the product of the individual transfer
functions.
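A short numerical sketch (two assumed one-pole sections, h_a(t) = e^{-2t}u(t) and h_b(t) = 3e^{-5t}u(t), and an arbitrary test frequency ω = 3 rad/s) confirms that the tandem gain is the product of the individual gains and the tandem phase shift is the sum of the individual phase shifts:

import numpy as np

dt = 1e-3
t = np.arange(0.0, 10.0, dt)
ha = np.exp(-2.0 * t)                        # assumed h_a(t)
hb = 3.0 * np.exp(-5.0 * t)                  # assumed h_b(t)
h = np.convolve(ha, hb)[:len(t)] * dt        # tandem impulse response h = h_a * h_b

def freq_resp(hh, w):
    # Numerical evaluation of H(jw) via Eq. (4.1-8).
    return np.trapz(hh * np.exp(-1j * w * t), t)

w = 3.0                                      # arbitrary test frequency (rad/s)
Ha, Hb, H = freq_resp(ha, w), freq_resp(hb, w), freq_resp(h, w)

print(H, Ha * Hb)                            # H(jw) equals the product H_a(jw) H_b(jw)
print(abs(H), abs(Ha) * abs(Hb))             # gains multiply
print(np.angle(H), np.angle(Ha) + np.angle(Hb))   # phase shifts add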
To illustrate the results we've obtained, consider the system formed by connecting
P identical stable LTI systems in tandem, each with the unit-impulse response:

h_a(t) = A\,e^{-at}\,u(t) \quad \text{with } a > 0        (4.3-8)

From the example in Section 4.1, Eq. (4.1-16), we have

H_a(j\omega) = \frac{A}{a + j\omega}        (4.3-9)

Because each component system is stable, the tandem-connected system is stable
with the transfer function

H(j\omega) = \left[\frac{A}{a + j\omega}\right]^P        (4.3-10)

Thus the gain of the tandem-connected system is

|H(j\omega)| = \frac{|A|^P}{(a^2 + \omega^2)^{P/2}}        (4.3-11)

and its phase shift is

\angle H(j\omega) = P\left[\angle A - \tan^{-1}\frac{\omega}{a}\right]        (4.3-12a)

where

\angle A = \begin{cases} 0 & \text{if } A > 0 \\ \pi & \text{if } A < 0 \end{cases}        (4.3-12b)

Thus, if the input, x(t), is the sinusoid

x(t) = E\cos(\omega_0 t + \phi)        (4.3-13)

the response of the P tandem-connected systems is

y(t) = \frac{E\,|A|^P}{(a^2 + \omega_0^2)^{P/2}}\cos\!\left(\omega_0 t + \phi + P\left[\angle A - \tan^{-1}\frac{\omega_0}{a}\right]\right)        (4.3-14a)

With the use of Eq. (4.3-12b), an equivalent expression is

y(t) = \frac{E\,A^P}{(a^2 + \omega_0^2)^{P/2}}\cos\!\left(\omega_0 t + \phi - P\tan^{-1}\frac{\omega_0}{a}\right)        (4.3-14b)

This is a low-pass filter because the higher the sinusoidal frequency, the smaller the
output amplitude. The bandwidth of such a filter is usually defined as the frequency
at which the gain has decreased to 1/√2 of its maximum. For our filter, the
maximum gain is at ω₀ = 0. At this frequency, the gain is |A|^P/a^P. Thus the
bandwidth is the frequency at which the gain is |A|^P/(√2 a^P). To determine this
frequency, ω, we must determine the frequency ω₀ = ω at which

\frac{|A|^P}{(a^2 + \omega^2)^{P/2}} = \frac{|A|^P}{\sqrt{2}\,a^P}        (4.3-15)

This requires

(a^2 + \omega^2)^{P/2} = \sqrt{2}\,a^P        (4.3-16)

Raising both sides of the equation to the power 2/P and solving for ω, we obtain

\omega = a\sqrt{2^{1/P} - 1}        (4.3-17)

Notice that the larger the number of identical systems which are connected in
tandem, the smaller the bandwidth of the tandem-connected system. The bandwidth
for a few values of P are:

P:   1      2        3        4
ω:   a      0.64a    0.51a    0.43a

Note that with two identical systems in tandem, the bandwidth has been reduced to
64% of the bandwidth of a single system. The bandwidth with three identical
systems in tandem is only about half of the bandwidth obtained with a single system.
Of course, the gain of P identical systems in tandem is the gain of a single system
raised to the power P so that, in design, one can trade bandwidth for gain in this
manner. In our discussion of filters later in this text, we shall examine methods of
improving the filter characteristics by not using identical systems in tandem.
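
As a numerical aside (not part of the original text), the following short NumPy sketch checks the bandwidth values tabulated above; the value a = 1 and the grid resolution are assumptions made only for this illustration.

import numpy as np

# Assumed illustration: 3-dB bandwidth of P identical first-order sections
# H_a(jw) = A/(a + jw) connected in tandem, compared with Eq. (4.3-17).
a = 1.0                                             # normalized value, assumed
for P in (1, 2, 3, 4):
    w = np.linspace(0.0, 2.0*a, 200001)
    gain = (1.0/np.sqrt(a**2 + w**2))**P            # |H(jw)| with A = 1
    w_numeric = w[np.argmin(np.abs(gain - gain[0]/np.sqrt(2.0)))]
    w_formula = a*np.sqrt(2.0**(1.0/P) - 1.0)       # Eq. (4.3-17)
    print(P, round(w_numeric, 3), round(w_formula, 3))
# Both columns reproduce the table: a, 0.64a, 0.51a, 0.43a.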

4.4 CONTINUOUS FREQUENCY REPRESENTATION OF A WAVEFORM

We showed in Section 4.2 that if the input, x(t), of a BIBO-stable LTI system can be
expressed as the linear combination of phasors,

x(t) = Σ_n C_n e^{jω_n t}   (4.4-1)

then the output, y(t), can be expressed as the same linear combination of the system
response to each individual phasor:

y(t) = Σ_n C_n H(jω_n) e^{jω_n t}   (4.4-2)

As long as the sum in Eq. (4.4-1) converges, the number of frequencies in the sum
can be arbitrarily large. In fact, instead of summing over a discrete set of frequencies,
the sum can be extended over all frequencies from −∞ to +∞. Now, an integral can
be viewed simply as a sum over a continuous variable so that we also can express the
input given by Eq. (4.4-1) as an integral over all frequencies:

x(t) = ∫_{−∞}^{∞} C(ω) e^{jωt} dω   (4.4-3)

for which the response of the BIBO-stable LTI system is

y(t) = ∫_{−∞}^{∞} C(ω) H(jω) e^{jωt} dω   (4.4-4)

In the representation of the input given by Eq. (4.4-3), the phasor amplitudes are
zero but they have an amplitude density. To understand this concept better, first
consider this textbook. It has a certain mass. However, there is zero mass at any
point within the book because the volume of a point is zero. If the mass at any point
within this book is zero, how then can this book have nonzero mass? For a descrip-
tion of the mass of this text, it is not fruitful to discuss the mass at a point because it
always will be zero. Rather we define a mass density at a given point. This is
obtained by determining the mass in a small sphere centered on the given point.
We then determine the ratio of the mass within the sphere divided by the volume of
the sphere. This ratio tends to some limiting value as the sphere's radius goes to zero.
The limiting value of the ratio is called the mass density at that point. The mass
density generally will vary from point to point. With this definition, the mass of this
book is just the integral of its mass density over all points within the book.
Let us now consider an integral of the form

I = ∫ f(t) dt   (4.4-5)

Note that the area under any point of f(t) is zero. Thus, how can there be area under
the curve if the area under every point is zero? Similar to our discussion in Section
2.3, this problem is circumvented by first forming narrow rectangles as shown in Fig.
4.4-1. The sum of the areas of the rectangles is an approximation of the value of the
integral, I. This sum tends to some limiting value as the widths of the rectangles go
to zero. The value of the integral, I, is this limiting value.⁶ Now consider the shaded
rectangle with its center at t = τ shown in Fig. 4.4-1. Let the width of the rectangle
be dt, an infinitesimal. The area is then da, also an infinitesimal. From the figure, the
infinitesimal area is da = f(τ) dt. Thus, we can express f(τ) as f(τ) = da/dt. Note
that this is the area density at t = τ. That is, from the point of view of Riemann
integration, the graph of f(t) versus t is a graph of its area density versus t! From this

⁶Integration by this procedure is called Riemann integration in honor of the German mathematician Georg
Friedrich Bernhard Riemann (1826-1866), who made major contributions to many areas of mathematics,
including the theory of integration.

Fig. 4.4-1 Illustrating the evaluation of an integral.

viewpoint, note that the impulse, δ(t − τ), which has unit area in the infinitesimal
interval about the point t = τ, plays the same role as the unit point mass in
mechanics or the unit point charge in electromagnetics.
With this understanding of integration, we can view the representation of x(t) in
Eq. (4.4-3) as the sum of phasors with an amplitude density of C(ω). To obtain a
representation in the form of Eq. (4.4-3) as a linear combination of phasors with
nonzero amplitude as in Eq. (4.4-1), C(ω) must contain impulses. For example,
consider the case for which C(ω) is

C(ω) = Σ_n C_n δ(ω − ω_n)   (4.4-6)

Substituting in Eq. (4.4-3), we then have

x(t) = Σ_n C_n ∫_{−∞}^{∞} δ(ω − ω_n) e^{jωt} dω = Σ_n C_n e^{jω_n t}   (4.4-7)

The value of the integral is obtained using the sifting property of the impulse derived
in Section 2.4. Equation (4.4-7) is exactly Eq. (4.4-1), which is the linear combination
of phasors with frequencies ω_n and nonzero amplitudes C_n. Thus we note that if
x(t) can be represented in the form given by Eq. (4.4-3), then the response, y(t), of
the BIBO-stable LTI system is given by Eq. (4.4-4).
The transfer function, H(jω), is obtained from the unit-impulse response, h(t), by
means of Eq. (4.1-8). But can the unit-impulse response be obtained from knowledge
of the transfer function? This is a very important question. We have discussed
the fact that h(t) completely characterizes the input-output mapping of a given stable
LTI system because, once it is known, the output, y(t), due to any input, x(t), can be
determined by convolving x(t) with h(t). If h(t) could be determined from H(jω),
then H(jω) would also completely characterize a stable LTI system because, once
known, h(t) could be determined. Now the system gain is equal to the magnitude of
H(jω) and the phase shift is equal to the angle of H(jω), so that once the gain and
the phase shift are known, Eq. (4.2-14) can be used to determine H(jω). This means
that the gain and the phase shift would also completely characterize a stable LTI
system.
From a mapping viewpoint, we can view Eq. (4.1-8) as a mapping of h(t) into
H(jω). The obtaining of h(t) from H(jω) is then the inverse mapping. As we

discussed in Section 1.1, the inverse of a mapping exists if and only if the mapping is
one-to-one. This means that if h(t) can be determined from H(jω), then the mapping
is one-to-one so that no two different unit-impulse responses can have the same
transfer function. Consequently, the system gain and phase shift would uniquely
determine the system unit-impulse response. Note that this would imply that we
cannot arbitrarily specify a desired system gain and phase shift of a causal LTI
system because the corresponding system unit-impulse response may not be zero
for t < 0. Before examining this and other consequences, we shall show that the
mapping is indeed one-to-one by showing that h(t) can be determined from H(jω).
The results we shall derive are often obtained by reference to mathematics texts
because they are part of the mathematical theory of Fourier transforms. However, to
gain a better appreciation of system concepts, I have developed the following derivation
using only the system theory results we have obtained.
Equations (4.4-3) and (4.4-4) now will be used not only to show that h(t) can be
determined from H(jω), but also to determine an equation by which it can be
determined. For this, consider the case for which

C(ω) = (1/2π) e^{−(ε²/4)ω²}   (4.4-8)

Then

x(t) = (1/2π) ∫_{−∞}^{∞} e^{−(ε²/4)ω²} e^{jωt} dω   (4.4-9)

To evaluate this integral, we first complete the square to express the parenthetical
expression in the exponent in the form

(ε²/4)ω² − jωt = [(ε/2)ω − jt/ε]² + t²/ε²   (4.4-10)

By substituting this expression in the integral, we have

x(t) = (1/2π) e^{−t²/ε²} ∫_{−∞}^{∞} e^{−[(ε/2)ω − jt/ε]²} dω   (4.4-11)

The integral in Eq. (4.4-11) is now in a standard form that is listed in many tables of
integrals. Its value is 2√π/ε. With this value of the integral, we then have

x(t) = (1/(ε√π)) e^{−t²/ε²}   (4.4-12)

Observe from Eq. (3.3-11) that this is δ_ε(t). Thus we have shown that if

C(ω) = (1/2π) e^{−(ε²/4)ω²}  then  x(t) = δ_ε(t)   (4.4-13)

We also showed in Section 3.3 that if the input is x(t) = δ_ε(t), then the LTI system
response is y(t) = h_ε(t). Thus we have from Eq. (4.4-4) that

h_ε(t) = (1/2π) ∫_{−∞}^{∞} e^{−(ε²/4)ω²} H(jω) e^{jωt} dω   (4.4-14)

We now make use of Eq. (3.3-1), which states that h(t) = lim_{ε→0} h_ε(t). For this, first
note that lim_{ε→0} e^{−(ε²/4)ω²} = 1 for all values of ω. Consequently, by taking the limit
as ε → 0 in Eq. (4.4-14), we obtain⁷

h(t) = (1/2π) ∫_{−∞}^{∞} H(jω) e^{jωt} dω   (4.4-15)
Thus we have shown that the unit-impulse response, h(t), of a BIBO-stable LTI
system can be determined from its transfer function by use of Eq. (4.4-15). Accordingly,
as we discussed above, no two different stable LTI systems can have the same
transfer function because the mapping is one-to-one. The transfer function thus
completely characterizes the input-output mapping of a stable LTI system because,
once known, the system output for any input can be determined. One way this can be
done is to first use Eq. (4.4-15) to determine h(t) and then convolve the system input
with h(t) to determine the system response. Later, we shall discuss other methods.
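
A minimal numerical sketch of this procedure is given below (it is not from the text; the system H(jω) = 1/(a + jω), the value a = 2, and the truncated frequency grid are assumptions chosen for illustration). It approximates Eq. (4.4-15) by a Riemann sum and compares the result with the known unit-impulse response e^{−at}u(t).

import numpy as np

# Assumed example: recover h(t) from H(jw) = 1/(a + jw) by approximating
# the integral of Eq. (4.4-15) with a Riemann sum over a truncated w-axis.
a = 2.0
w = np.linspace(-2000.0, 2000.0, 400001)
dw = w[1] - w[0]
H = 1.0/(a + 1j*w)
for t in (0.5, 1.0, 2.0):
    h_numeric = np.sum(H*np.exp(1j*w*t)).real * dw/(2.0*np.pi)
    print(t, round(h_numeric, 4), round(np.exp(-a*t), 4))   # close agreement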
Because the system gain and phase shift are the magnitude and angle of the
transfer function, the gain and phase shift also completely characterize the input-output
mapping of a stable LTI system. The gain and phase shift can easily be
determined experimentally by using a sinusoid as the input of the stable LTI
system. As we discussed, the output is a sinusoid with the same frequency as the
input sinusoid. The ratio of the amplitude of the output sinusoid to that of the input
sinusoid is the system gain at the frequency of the sinusoid, and the phase difference
between the output and input sinusoids is the system phase shift at the frequency of
the sinusoid. The gain and the phase shift can be measured and plotted as a function
of the sinusoidal frequency.

⁷Note from Eq. (4.4-14) that we should integrate first with respect to ω and then take the limit as ε goes to
zero. By writing Eq. (4.4-15), we have really taken the limit as ε goes to zero before integrating with
respect to ω. Mathematically, this requires that H(jω) go to zero as ω becomes arbitrarily large. Now, the
faster a waveform wiggles, the higher the frequency content of the waveform. Thus, in order that H(jω)
go to zero as ω becomes arbitrarily large, we require that h(t) not be infinitely wiggly in any interval. Such
a function is called by mathematicians a function of bounded variation, so that the requirement is that h(t)
be a function of bounded variation. Physically, this means that we require that the system gain must
approach zero as ω tends toward infinity. This condition is satisfied by all physical systems.

The result we have just obtained can be viewed purely as a mathematical result
without any regard to its system theory source. For the statement of the mathematical
result, I'll call the function f(t) instead of h(t). The mathematical statement that we
have obtained in this section is as follows:
If f(t) is an L₁ function of bounded variation so that

∫_{−∞}^{∞} |f(t)| dt < ∞   (4.4-16)

then the integral

F(jω) = ∫_{−∞}^{∞} f(t) e^{−jωt} dt   (4.4-17)

converges for all values of ω and f(t) can be retrieved from F(jω) by evaluating the
integral

f(t) = (1/2π) ∫_{−∞}^{∞} F(jω) e^{jωt} dω   (4.4-18)

In mathematics, the function, F(jω), is called the Fourier transform of f(t).

PROBLEMS

4-1 Let

F( j o ) =
3 +jo
Determine |F(jω)| and ∠F(jω) for ω = 0, 3/4, 3/2, 9/4, and 3.
4-2 Let

Determine the magnitude and angle of F(jω) for ω = 0, 1, 2, 3, and 4.



4-3 Determine the magnitude and angle of

for w = 0, 1, 2, 3, and 4.

4-4 The unit-impulse response of a given LTI system is h(t) = r(t/3). Determine
the system transfer function, H ( j w ) , and determine the system response, y(t),
to the input

x(t)
K 3
=Acos - t + - +Bsm 2nt+-
. [ 3
4-5 The unit-impulse response of a given LTI system is h(t) = δ(t) − 2e^{−2t}u(t).
Determine the system transfer function, H(jω), and determine the system
response to the input x(t) = A + B cos(2t).

4-6 The unit-impulse response of a given LTI system is h(t) = [4e^{−2t} − 2e^{−t}]u(t).
Determine the system transfer function, H(jω). What is an approximate
expression for the gain and phase shift for ω >> 2?

4-7 The transfer function of a given LTI system is H(jω) = 2/(3 + jω).
(a) Determine the system gain.
(b) Determine the system phase shift.
(c) Determine the system response to x(t) = A + B sin(3t + π/4).

4-8 The input of a stable LTI system is x(t) = A + B cos(t + π/3) +
C sin(3t − π/6). Determine the response, y(t), for each stable LTI system
with the transfer function given below.
(a) H_a(jω) = jω/(1 + jω)
(b) H_b(jω) = (jω − 1)/[(1 + jω)(3 + jω)]
(c) H_c(jω) = (jω + 3)/(9 − ω² + j2ω)
(d) [j o / ( l + j ~ ) l e - ~ ~ "

4-9 The response of a stable LTI system is

for the input

x(t) = 2 + 3 cos(4πt) + 4 sin(5πt)



(a) Determine ω₀.
(b) For what values of ω can H(jω) be determined?
(c) Determine H(jω) at the frequencies determined in part (b).

4-10 The response of a given LTI system to the input x ( t ) is y(t) = Ax(t - to).
Determine the system gain and phase shift.

4-11 The input of a stable LTI system is

x ( t )=A+Bc os t - -
( t) +Csm 3t+-
. ( 3
Determine the response, y(t), of the LTI system with the unit-impulse
response

h(t) = [e^{−t} − e^{−3t}]u(t)

4-12 The unit-impulse response of an LTI system is h(t) = e^{−αt} cos(ω₀t + φ)u(t),
where α > 0. Determine the system transfer function H(jω). Hint: Use
Eq. (A-14) to express h(t) as the sum of exponentials so that you can
evaluate the integral. Two special cases are h_a(t) = e^{−αt}u(t) and h_b(t) =
e^{−αt} sin(ω₀t)u(t). Does your result agree with these two special cases?

4-13 Use Eq. (4.2-16) to obtain the result given by Eq. (4.2-5).

4-14 (a) Use convolution to show that the response of a stable LTI system to the
input x(t) = E is y(t) = EH(O), where H(0) is equal to the area under the
system unit-impulse response, h(t).
(b) Show that this same result is obtained from Eq. (4.2-16).

4-15 The unit-impulse response of an LTI system is h(t) = [e^{−at} + Be^{−2at}]u(t),
where a > 0. Determine the value of B required for the following:
(a) The system response to the dc input x(t) = E is zero.
(b) The system response to the dc input x(t) = E is E.

4-16 Each component system in the diagram below is a stable LTI system.

[Diagram: interconnection of the stable LTI component systems forming the overall system with unit-impulse response h(t).]

(a) Show that the overall system with the unit-impulse response h(t) is stable.
(b) Determine h(t) in terms of the unit-impulse responses of the component
systems.
(c) Determine the transfer function of the overall system in terms of the
transfer functions of the component systems. For this determination, use
the technique used to obtain Eq. (4.3-3).

4-17 Show that Eqs. (4.3-13a) and (4.3-13b) are equivalent expressions for y(t).

4-18 The transfer function of a given stable LTI system is

H(jω) = 3a²/[(a + jω)(3a + jω)]

where a > 0. Determine the bandwidth of this low-pass filter.

4-19 A low-pass stable LTI system with the transfer function H_a(jω) = a/(a + jω)
is connected in tandem with a low-pass LTI system with the transfer function
H_b(jω) = b/(b + jω). As discussed in Section 4.3, the bandwidth of the
tandem connection is 0.64a if b = a. What positive value of b is required for
the bandwidth of the tandem connection to be 0.8a?

4-20 To obtain Eq. (4.4-15), we required that lim_{ω→∞} H(jω) = 0. In this problem,
we obtain a sufficient condition for this to be true. We show that if (1)
lim_{t→±∞} h(t) = 0 and also (2) the derivative of h(t), h′(t), exists for which
∫_{−∞}^{∞} |h′(t)| dt = M < ∞, then lim_{ω→∞} H(jω) = 0. In mathematics, this
result is known as the Riemann-Lebesgue lemma.
To show this, begin with the expression for the transfer function,

H(jω) = ∫_{−∞}^{∞} h(t) e^{−jωt} dt

(a) Integrate this expression by parts to obtain

(b) Thus show that if the two conditions stated in the problem are satisfied,
then lim_{ω→∞} H(jω) = 0.

4-21 The Fourier transform of f(t) is F(jω) = A r(|ω|/W) e^{−jωT}. Determine f(t).

4-22 Use Eq. (4.4-18) to determine the time function,f(t), whose Fourier trans-
form is F( j w ) = e + .
CHAPTER 5

THE FOURIER TRANSFORM

It was shown in Chapter 4 that the phasor is a characteristic function of a stable


linear time-invariant (LTI) system. With the use of superposition, we are able to
analyze and interpret the response of a stable LTI system to a large class of wave-
forms in terms of its phasor response. System analysis in terms of the system phasor
response is referred to as system analysis in the frequency domain. This analysis led
us to the question as to which waveforms can be expressed as a linear combination of
phasors and, if so, how to determine the expression. As mentioned in Section 4.2, the
study of waveforms as the sum of phasors is called Fourier analysis.' However,
before beginning a development of Fourier analysis, we examined the tandem
connection of stable LTI systems in the frequency domain. From this study in
Sections 4.3 and 4.4, a number of results of importance in system theory were
obtained. We also were able to express some of the results obtained in those sections
as a mathematical result without any regard to their source in system theory. In this
chapter we shall use those mathematical results to develop Fourier theory.
Fourier theory plays an important role in many branches of science, mainly
because models of many physical processes are stable LTI systems. For example,
Fourier theory plays an extensive role in areas such as optics, acoustics, circuits,
electronics, and electromagnetics. In each of these areas, Fourier analysis is often
given a definite physical interpretation of significance for the area being studied
instead of treating it simply as a mathematical technique. However, the mathematical
study of Fourier theory has also led to important results in many mathematical areas
such as the theory of random processes, prime numbers, and convergence.2

¹Joseph Fourier (1768-1830) was a French mathematician and physicist.

²A major contributor to all these applications of Fourier analysis was the American mathematician
Norbert Wiener (1894-1964).


Later in this text we shall develop and discuss the bilateral Laplace transform in
some detail. The Fourier transform will be shown to be a special case of the bilateral
Laplace transform. Thus, in this chapter we shall only develop and discuss some
aspects of Fourier transform theory that will help enhance our understanding of the
frequency domain view of LTI systems.

5.1 THE FOURIER TRANSFORM

At the end of Section 4.4, we showed that if f(t) is an L₁ function of bounded
variation so that

∫_{−∞}^{∞} |f(t)| dt < ∞   (5.1-1)

then the integral

F(jω) = ∫_{−∞}^{∞} f(t) e^{−jωt} dt   (5.1-2)

converges for all values of ω and also f(t) can be retrieved from F(jω) by evaluating
the integral

f(t) = (1/2π) ∫_{−∞}^{∞} F(jω) e^{jωt} dω   (5.1-3)

The function F(jω) is called the Fourier transform of f(t). We also shall have
occasion to use the notation ℱ{f(t)} to indicate the Fourier transform of f(t); that
is, ℱ{f(t)} = F(jω). Also, f(t) is said to be the inverse Fourier transform of F(jω).
In terms of our present nomenclature, note that the transfer function, H(jω), of a
stable LTI system is the Fourier transform of h(t), the system unit-impulse response.
Equations (5.1-2) and (5.1-3) are called a Fourier transform pair because if one is
true, then so is the other. For example, if a function F₁(jω) is obtained by using an
L₁ function f₁(t) in Eq. (5.1-2), then the result obtained using F₁(jω) in Eq. (5.1-3)
will be the same function, f₁(t). For example, the transfer function, H(jω), given by
Eq. (4.1-16) is observed to be the Fourier transform of the unit-impulse response, h(t),
as given by Eq. (4.1-13). Consequently, from Eq. (5.1-3) or, equivalently, Eq.
(4.4-15), we have that for a > 0

(5.1-4)

Admittedly, this integral is not easy to evaluate without the use of a special integration
technique called contour integration. However, we need not evaluate this integral
because the fact that Eqs. (5.1-2) and (5.1-3) are a Fourier transform pair means

that we know that the value of the integral is as given. Because of the difficulty often
incurred in evaluating the integral in Eq. (5.1-3), tables of Fourier transforms have
been developed by calculating F(jω) using Eq. (5.1-2) and knowing that the value
of the integral in Eq. (5.1-3) will be the same function, f(t).
Note that the requirement that f(t) be an L₁ function is not a necessary condition
but only a sufficient condition. We only showed in Section 4.4 that if f(t) is an L₁
function of bounded variation, then Eqs. (5.1-2) and (5.1-3) definitely are a Fourier
transform pair. However, there are some functions of bounded variation which are
not L₁ functions for which Eqs. (5.1-2) and (5.1-3) are a Fourier transform pair.

5.2 AN EXAMPLE OF A FOURIER TRANSFORM CALCULATION

To illustrate our discussion in the previous section and to present some basic aspects
of the Fourier transform, consider the case for which f(t) is the rectangle:

f(t) = E r(t/2T) = E if −T < t < T, and f(t) = 0 otherwise   (5.2-1)

In this equation, r(·) is the rectangular function defined by Eq. (1.5-4). Using Eq.
(5.1-2), the Fourier transform of this function is

F(jω) = ∫_{−T}^{+T} E e^{−jωt} dt = E (e^{jωT} − e^{−jωT})/(jω)
      = E [2j sin(ωT)]/(jω) = E [2 sin(ωT)]/ω   (5.2-2)

By multiplying both the numerator and denominator by T, Eq. (5.2-2) can be
expressed in the form

F(jω) = 2TE sin(ωT)/(ωT)   (5.2-3)

The equation for F(jω) is similar to that of the transfer function determined in
Section 4.2, Eq. (4.2-23), where a graph of (sin θ)/θ is given and discussed.
Observe from Eq. (5.1-2) that the value of the Fourier transform at ω = 0 is

F(0) = ∫_{−∞}^{∞} f(t) dt   (5.2-4)

In words, the value of F(0) is equal to the area under f(t). This usually provides an easy
check on your calculation of a Fourier transform. That is, your expression for F(jω)
is wrong if Eq. (5.2-4) is not satisfied. Unfortunately, it is possible that your expression
for F(jω) is incorrect and yet Eq. (5.2-4) is satisfied, so you only can say
that your expression is wrong if Eq. (5.2-4) is not satisfied. For our case, from Eq.
(5.2-3), we have F(0) = 2TE, which is equal to the area of the rectangle, f(t), given
by Eq. (5.2-1).
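
The following small NumPy check (not part of the original example; the values E = 1.5 and T = 2 are assumed) evaluates Eq. (5.1-2) for the rectangle numerically and compares it with Eq. (5.2-3), including the F(0) = 2TE area check of Eq. (5.2-4).

import numpy as np

# Assumed values for illustration: f(t) = E on (-T, T), zero elsewhere.
E, T = 1.5, 2.0
t = np.linspace(-T, T, 200001)
dt = t[1] - t[0]
for w in (0.0, 0.7, 1.3):
    F_numeric = (np.sum(E*np.exp(-1j*w*t))*dt).real        # Eq. (5.1-2) as a Riemann sum
    F_formula = 2*T*E if w == 0.0 else 2*T*E*np.sin(w*T)/(w*T)   # Eq. (5.2-3)
    print(w, round(F_numeric, 4), round(F_formula, 4))
# At w = 0 both columns give 2TE = 6, the area under f(t), as Eq. (5.2-4) requires.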
Note for our example that f(t) = δ_ε(t) if E = 1/ε and T = ε/2. Thus, from Eq.
(5.2-3), the Fourier transform of δ_ε(t) is

ℱ{δ_ε(t)} = sin(ωε/2)/(ωε/2)   (5.2-5)

We obtain the unit impulse for the case where ε = 0+ so that, from Eq. (5.2-5), we
obtain

ℱ{δ(t)} = 1   (5.2-6)

The impulse width is infinitesimal, so that the interpretation of Eq. (5.2-6) is really
that it is Eq. (5.2-5) with ε = 0+. Observe that this result also can be obtained
directly by using the sifting property of the impulse developed in Section 2.4 in
Eq. (5.1-2). From our discussion in Section 5.1, δ(t) and 1 are a Fourier transform
pair. This result is consistent with our discussion of δ_ε(t) in Section 4.4.³
In accordance with our discussion in Section 5.1, the inverse Fourier transform of
F ( j w ) given by Eq. (5.1-3) should bef(t) given by Eq. (5.2-1). As an illustration,
we shall verify this in Section 5.4 for our example by substituting the expression for
F ( j o ) given by Eq. (5.2-3) in the expression for the inverse Fourier transform, Eq.
(5.1-3), and evaluating the resulting integral. For this calculation, we shall make use
of some basic properties of even and odd functions. Thus we shall discuss a few of
the basic properties of these functions in the next section.

5.3 EVEN AND ODD FUNCTIONS

An even function, f_e(t), is defined as one for which f_e(−t) = f_e(t). An odd function,
f_o(t), is defined as one for which f_o(−t) = −f_o(t). For example, cos(ωt) is an even
function of t and sin(ωt) is an odd function of t. Other examples of even functions

³To be mathematically precise, Eq. (5.2-6) really cannot be one for all values of ω but must go to zero in
accordance with Eq. (5.2-5) because ε = 0+ ≠ 0. However, in accordance with our discussion in the
footnote in Section 4.4 following Eq. (4.4-14), we require that all transforms tend to zero as ω tends to
infinity as a sufficient condition to obtain Eq. (4.4-15) from (4.4-14). This is equivalent to using Eq.
(5.2-6).

are those depicted in Figs. 1.4-6 and 1.4-7, and other examples of odd functions are
those depicted in Figs. 1.4-1, 1.4-4, 1.4-8, 1.4-9, 1.4-10, and 1.4-11. Note that the
functions depicted in Figs. 1.4-3 and 1.4-5 are neither even nor odd. However, a
function that is neither even nor odd always can be expressed as the sum of an even
and an odd function. First note that, for any function f(t), f(t) + f(−t) is an even
function of t while f(t) − f(−t) is an odd function of t. Thus we can express f(t) as
the sum of an even function, f_e(t), and an odd function, f_o(t):

f(t) = f_e(t) + f_o(t)   (5.3-1)

where

f_e(t) = ½[f(t) + f(−t)]   (5.3-2a)

and

f_o(t) = ½[f(t) − f(−t)]   (5.3-2b)

The decomposition of functions into their even and odd components as above leads
to some important properties of Fourier transforms and LTI systems which we shall
discuss later in this chapter. Even and odd functions also are very useful in integration.
One important property that we shall use in the next section is that the area
under an odd function from −∞ to +∞ is zero. This is easily seen because the area
under an odd function from −∞ to 0 is the negative of the area from 0 to +∞. That
is,

∫_{−∞}^{∞} f_o(t) dt = 0   (5.3-3a)

Also, the area under an even function from −∞ to +∞ is twice the area under it
from 0 to +∞ since the area under an even function from −∞ to 0 is equal to that
from 0 to +∞. That is,

∫_{−∞}^{∞} f_e(t) dt = 2 ∫_{0}^{∞} f_e(t) dt   (5.3-3b)
5.4 AN EXAMPLE OF AN INVERSE FOURIER TRANSFORM CALCULATION

In accordance with our discussion in Section 5.1, the inverse Fourier transform of
F ( j w ) given by Eq. (5.1-3) should bef(t) given by Eq. (5.2-1). We shall verify this
for our example in Section 5.2 by substituting the expression given by Eq. (5.2-3) in
the expression for the inverse Fourier transform, Eq. (5.1-3), and evaluating the
resulting integral. For this calculation, we shall make use of the properties of even
and odd functions given by Eqs. (5.3-3).

Substituting the expression for F(jω), Eq. (5.2-3), in the expression for the
inverse Fourier transform, Eq. (5.1-3), we obtain

f(t) = (1/2π) ∫_{−∞}^{∞} 2TE [sin(ωT)/(ωT)] e^{jωt} dω   (5.4-1)

From Appendix A, e^{jωt} = cos(ωt) + j sin(ωt), so that we can express this integral in
the form

f(t) = (TE/π) ∫_{−∞}^{∞} [sin(ωT)/(ωT)] cos(ωt) dω
       + j (TE/π) ∫_{−∞}^{∞} [sin(ωT)/(ωT)] sin(ωt) dω   (5.4-2a)

     = (TE/π) ∫_{−∞}^{∞} [sin(ωT)/(ωT)] cos(ωt) dω   (5.4-2b)

Equation (5.4-2b) was obtained by noting that the value of the second integral in Eq.
(5.4-2a) is zero. To see this, note that the function being integrated is an odd function
of ω. Thus, with the use of the result given in Eq. (5.3-3a), we have that the value of
the integral is zero. Now, to evaluate the integral in Eq. (5.4-2b), we first note that the
function to be integrated is an even function of ω so that, from the result given in Eq.
(5.3-3b), we have

f(t) = (2TE/π) ∫_{0}^{∞} [sin(ωT)/(ωT)] cos(ωt) dω   (5.4-3)

To evaluate this integral, we first use the trigonometric identity

2 sin(θ) cos(φ) = sin(θ + φ) + sin(θ − φ)   (5.4-4)

to express the integral in the form

f(t) = (E/π) ∫_{0}^{∞} {sin[ω(T + t)] + sin[ω(T − t)]}/ω dω   (5.4-5)

This form of the integral allows us to use a table of integrals in which can be found
the definite integral

∫_{0}^{∞} [sin(αω)/ω] dω = (π/2) sgn(α)   (5.4-6)

where sgn(α) is the signum function, defined in Section 1.4D, which is

sgn(α) = −1 if α < 0, and sgn(α) = 1 if α > 0   (5.4-7)

With the use of Eq. (5.4-6), the value of Eq. (5.4-5) is

f(t) = (E/2)[sgn(T + t) + sgn(T − t)] = E for |t| < T, E/2 for t = ±T, and 0 for |t| > T   (5.4-8)

Note that this is the same as the function f(t) with which we started. Observe that
f(t) is discontinuous at the points t = ±T and the value of the inverse Fourier
transform is equal to the average of the left- and right-hand limits of f(t). In general,
at points of discontinuity of a function f(t), the inverse Fourier transform of its transform
will be equal to the average of the left- and right-hand limits of the function
f(t). This is in keeping with our discussion in Section 2.4 where we discussed left-
and right-hand limits and our definition of functions at a discontinuity following Eq.
(2.4-3).
The computation of the inverse Fourier transform for this example was not
simple. This is generally true of computations of the inverse Fourier transform
using Eq. (5.1-3). However, as discussed in Section 5.1, Eqs. (5.1-2) and (5.1-3) are
a Fourier transform pair if f(t) is an L₁ function. We can consider Eq. (5.1-2) as
mapping f(t) into F(jω) and consider Eq. (5.1-3) as mapping F(jω) into f(t). The
two equations being a Fourier transform pair means that the mapping is a one-to-one
mapping. Thus, once a given F(jω) is computed from a given f(t), we know that the
use of Eq. (5.1-3) will result in the same f(t) with which we started. It is for this
reason that tables of Fourier transforms are so useful. However, a few properties of
Fourier transforms are required to use them effectively. These required properties are
derived in the next section. They then will be illustrated in the following sections by
applying them to the analysis of LTI systems.

5.5 SOME PROPERTIES OF THE FOURIER TRANSFORM

Properties of the Fourier transform greatly assist in their physical interpretation and
use as well as in their determination. Several important properties are derived in this
section. Then, some applications of these properties are illustrated in the next
section. For our discussion in this section, f₁(t) and f₂(t) are L₁ functions of bounded
variation, and their Fourier transforms are F₁(jω) and F₂(jω), respectively. Also, c₁
and c₂ are constants.

1. The Linearity Property If

f(t) = c₁ f₁(t) + c₂ f₂(t)   (5.5-1a)

then

F(jω) = c₁ F₁(jω) + c₂ F₂(jω)   (5.5-1b)

This linearity property is immediate from Eq. (5.1-2) because

∫_{−∞}^{∞} [c₁ f₁(t) + c₂ f₂(t)] e^{−jωt} dt = c₁ ∫_{−∞}^{∞} f₁(t) e^{−jωt} dt + c₂ ∫_{−∞}^{∞} f₂(t) e^{−jωt} dt
2. Symmetry Properties A. The first symmetry property of the Fourier transform
that we need is

ℱ{f*(t)} = F*(−jω)   (5.5-2)

We show this by replacing ω with −ω and taking the conjugate of Eq. (5.1-2) to
obtain

F*(−jω) = [∫_{−∞}^{∞} f(t) e^{+jωt} dt]*   (5.5-3)

As discussed in Appendix A, the conjugate of an expression is obtained by replacing
every j in the expression with a −j. In consequence, we have from Eq. (5.5-3)

F*(−jω) = ∫_{−∞}^{∞} f*(t) e^{−jωt} dt   (5.5-4)

which is Eq. (5.5-2), the desired result.
An important consequence of this result is the case for which f(t) is a real
function of t. As discussed in Appendix A, f(t) = f*(t) if and only if f(t) is real.
Thus we have that ℱ{f*(t)} = ℱ{f(t)} if and only if f(t) is a real function of t.
Consequently, from Eq. (5.5-4) we have

F*(−jω) = F(jω)   (5.5-5a)



if and only if f(t) is a real function of t. Another form of Eq. (5.5-5a) is obtained by
taking its conjugate to obtain

F(−jω) = F*(jω)   (5.5-5b)

Observe from this result that if f(t) is a real function of t, then the magnitude of
F(jω) is an even function of ω and the angle of F(jω) is an odd function of ω.
B. For the case in which f(t) is a real function of t, there is a second important
symmetry property that we need. For this we express f(t) as the sum of its even and
odd parts as discussed in Section 5.3. Then, with the use of Eq. (5.3-1) and the
linearity property, Eq. (5.5-1b), we can write the Fourier transform of f(t) as

ℱ{f(t)} = ℱ{f_e(t)} + ℱ{f_o(t)}   (5.5-6)

where f_e(t) and f_o(t) are real functions of t because f(t) is a real function of t. Now,

ℱ{f_e(t)} = ∫_{−∞}^{∞} f_e(t) e^{−jωt} dt = ∫_{−∞}^{∞} f_e(t)[cos(ωt) − j sin(ωt)] dt
          = ∫_{−∞}^{∞} f_e(t) cos(ωt) dt − j ∫_{−∞}^{∞} f_e(t) sin(ωt) dt   (5.5-7)
          = ∫_{−∞}^{∞} f_e(t) cos(ωt) dt

The imaginary part of ℱ{f_e(t)} in Eq. (5.5-7) is zero in accordance with Eq. (5.3-3a)
because f_e(t) sin(ωt) is an odd function of t. Note from this result that the Fourier
transform of f_e(t), ℱ{f_e(t)}, is a real function of ω and that it is an even function of ω
because cos(ωt) is an even function of ω.
We now examine ℱ{f_o(t)}, the second term of Eq. (5.5-6):

ℱ{f_o(t)} = ∫_{−∞}^{∞} f_o(t) e^{−jωt} dt = ∫_{−∞}^{∞} f_o(t)[cos(ωt) − j sin(ωt)] dt
          = ∫_{−∞}^{∞} f_o(t) cos(ωt) dt − j ∫_{−∞}^{∞} f_o(t) sin(ωt) dt   (5.5-8)
          = −j ∫_{−∞}^{∞} f_o(t) sin(ωt) dt

The real part of ℱ{f_o(t)} in Eq. (5.5-8) is zero in accordance with Eq. (5.3-3a)
because f_o(t) cos(ωt) is an odd function of t. Note from this result that the Fourier
transform of f_o(t), ℱ{f_o(t)}, is an imaginary function of ω and that it is an odd
function of ω because sin(ωt) is an odd function of ω.

We now express ℱ{f(t)}, the Fourier transform of f(t), in rectangular form as

F(jω) = F_r(jω) + jF_i(jω)   (5.5-9)

where F_r(jω) is the real part and F_i(jω) is the imaginary part of F(jω). Then from
Eqs. (5.5-6), (5.5-7), and (5.5-8) we observe that if f(t) is a real function of t, then

F_r(jω) = ℱ{f_e(t)} = ∫_{−∞}^{∞} f_e(t) cos(ωt) dt = 2 ∫_{0}^{∞} f_e(t) cos(ωt) dt   (5.5-10a)

and

F_i(jω) = −jℱ{f_o(t)} = −∫_{−∞}^{∞} f_o(t) sin(ωt) dt = −2 ∫_{0}^{∞} f_o(t) sin(ωt) dt   (5.5-10b)

from which we note that F_r(jω) is an even function of ω and F_i(jω) is an odd
function of ω.

3. The Scaling Property If

f(t) = f₁(ct)   (5.5-11a)

then

F(jω) = (1/|c|) F₁(jω/c)   (5.5-11b)

where |c| is the absolute value of the constant [see Eq. (1.4-25)].
To prove this property, first consider the case for which c > 0. Substituting Eq.
(5.5-11a) in Eq. (5.1-2), we have

F(jω) = ∫_{−∞}^{∞} f₁(ct) e^{−jωt} dt   (5.5-12)

To put this equation in the form of the Fourier transform of f₁(t), we make the change
of variable τ = ct to obtain

F(jω) = (1/c) ∫_{−∞}^{∞} f₁(τ) e^{−j(ω/c)τ} dτ = (1/c) F₁(jω/c)   (5.5-13)

This is Eq. (5.5-11b) for c > 0. For the case in which c < 0, the change of variable
τ = ct in Eq. (5.5-12) results in

F(jω) = (1/c) ∫_{+∞}^{−∞} f₁(τ) e^{−j(ω/c)τ} dτ = −(1/c) ∫_{−∞}^{∞} f₁(τ) e^{−j(ω/c)τ} dτ
      = −(1/c) F₁(jω/c)   (5.5-14)

This is Eq. (5.5-11b) for c < 0.
A special case of interest is that for which c = −1. We obtain for this case that if

f(t) = f₁(−t)   (5.5-15a)

then

F(jω) = F₁(−jω)   (5.5-15b)

4. The Time-Shift Property If

f(t) = f₁(t − t₀)   (5.5-16a)

then

F(jω) = e^{−jωt₀} F₁(jω)   (5.5-16b)

This property is immediate from Eq. (5.1-2) because we have

F(jω) = ∫_{−∞}^{∞} f₁(t − t₀) e^{−jωt} dt   (5.5-17)

This equation can be put in the form of a Fourier transform of f₁(t) with the change
of variable τ = t − t₀ to obtain

F(jω) = ∫_{−∞}^{∞} f₁(τ) e^{−jω(τ + t₀)} dτ = e^{−jωt₀} ∫_{−∞}^{∞} f₁(τ) e^{−jωτ} dτ = e^{−jωt₀} F₁(jω)   (5.5-18)

This is Eq. (5.5-16b), and so we have proven the time-shift property. Note that t₀ can
be positive or negative.
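
As an illustration (not part of the original proof), the short check below evaluates Eq. (5.1-2) numerically for an assumed L₁ test function f₁(t) = e^{−|t|} and an assumed shift t₀ = 1.5, and confirms that ℱ{f₁(t − t₀)} agrees with e^{−jωt₀}F₁(jω).

import numpy as np

# Assumed test function and shift; both transforms are computed as Riemann sums.
t = np.linspace(-20.0, 20.0, 400001)
dt = t[1] - t[0]
t0 = 1.5
f1 = np.exp(-np.abs(t))                    # an L1 test function
f = np.exp(-np.abs(t - t0))                # f(t) = f1(t - t0)
for w in (0.5, 2.0):
    F1 = np.sum(f1*np.exp(-1j*w*t))*dt     # F1(jw) from Eq. (5.1-2)
    F = np.sum(f*np.exp(-1j*w*t))*dt       # F(jw)
    print(w, np.round(F, 4), np.round(np.exp(-1j*w*t0)*F1, 4))   # the two agree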

5. The Frequency-Shift Property If

f(t) = f₁(t) e^{jω₀t}   (5.5-19a)

then

F(jω) = F₁(j(ω − ω₀))   (5.5-19b)

Note that the frequency-shift property is the dual of the time-shift property. This
duality is to be expected since the equation for the Fourier transform, Eq. (5.1-2), and
the equation for the inverse Fourier transform, Eq. (5.1-3), are almost the same. They
differ only in a factor of 2π and in a minus sign in the exponent of e. Consequently,
we expect every Fourier transform theorem to have a dual. This stimulates us to
attempt to determine and prove dual theorems. Looking for and exploiting similarities
of the form of equations is an important way that new theoretical results are
obtained.
The frequency-shift property is easily proven by use of Eq. (5.1-2) because

F(jω) = ∫_{−∞}^{∞} f₁(t) e^{jω₀t} e^{−jωt} dt = ∫_{−∞}^{∞} f₁(t) e^{−j(ω − ω₀)t} dt = F₁(j(ω − ω₀))

The last equation is Eq. (5.5-19b).

6. The Convolution Property If

f(t) = ∫_{−∞}^{∞} f₁(τ) f₂(t − τ) dτ   (5.5-20a)

then

F(jω) = F₁(jω) F₂(jω)   (5.5-20b)

The convolution property was already obtained and proved in Section 4.3, Eq.
(4.3-4), where we examined the transfer function of the tandem connection of two
stable LTI systems. We proved there that the convolution of two L₁ functions is an L₁
function. Thus the Fourier transform of f(t) exists and, in accordance with the result
obtained in Section 4.3, Eq. (4.3-4), is given by Eq. (5.5-20b).

7. The Time-Differentiation Property If

f(t) = f₁′(t) = d f₁(t)/dt   (5.5-21a)

and if f(t) is an L₁ function, then

F(jω) = jω F₁(jω)   (5.5-21b)

To prove this property, we again begin with Eq. (5.1-2) and obtain

F(jω) = ∫_{−∞}^{∞} f₁′(t) e^{−jωt} dt   (5.5-22)

This integral can be expressed in the form of the Fourier transform of f₁(t) by using
integration by parts, which is

∫ u dv = uv − ∫ v du   (5.5-23)

For this, we choose

u = e^{−jωt} and dv = f₁′(t) dt

for which

du = −jω e^{−jωt} dt and v = f₁(t)

We then have for Eq. (5.5-22)

F(jω) = f₁(t) e^{−jωt} |_{−∞}^{+∞} + jω ∫_{−∞}^{∞} f₁(t) e^{−jωt} dt   (5.5-24)

Now the first term is zero because f₁(t) is an L₁ function. To see this, first note that
|f₁(t) e^{−jωt}| = |f₁(t)||e^{−jωt}| = |f₁(t)| because |e^{−jωt}| = 1. Now the area under |f₁(t)|
is finite because it is an L₁ function. Thus lim_{t→±∞} |f₁(t)| = 0; otherwise the area
under |f₁(t)| would be infinite. We then have from Eq. (5.5-24)

F(jω) = jω F₁(jω)

which is Eq. (5.5-21b).



5.6 AN APPLICATION OF THE CONVOLUTION PROPERTY

We showed in Section 4.3 that if two stable LTI systems with the unit-impulse
responses h_a(t) and h_b(t) are connected in tandem as shown in Fig. 4.3-1, then
the tandem-connected system is a stable LTI system with the unit-impulse response

h(t) = ∫_{−∞}^{∞} h_a(τ) h_b(t − τ) dτ   (5.6-1)

A problem that occurs is the following. We are given a stable LTI system with the
unit-impulse response h_a(t). We desire to connect in tandem with it a stable LTI
system with the unit-impulse response h_b(t) to form a stable LTI system with a
certain desired unit-impulse response, h(t). What is the required unit-impulse
response h_b(t)? Here we are given h(t) and h_a(t), and the function to be determined
is h_b(t) in Eq. (5.6-1). Such an equation is called an integral equation because the
function to be determined is part of the function being integrated. Integral equations
often are difficult to solve. However, an integral equation of the convolution type is
not difficult to solve because, by use of the convolution property, Eqs. (5.5-20), we
have

H(jω) = H_a(jω) H_b(jω)   (5.6-2)

Thus, by working in the frequency domain, we have only an algebraic equation to
solve. Thus, for our problem, the required transfer function is

H_b(jω) = H(jω)/H_a(jω)   (5.6-3)

For example, let

h(t) = e^{−βt}u(t), β > 0  and  h_a(t) = e^{−αt}u(t), α > 0   (5.6-4)

Then, from the result in Section 4.1, Eq. (4.1-16), we obtain

H(jω) = 1/(β + jω)  and  H_a(jω) = 1/(α + jω)   (5.6-5)

Substituting in Eq. (5.6-3), we have

H_b(jω) = (α + jω)/(β + jω)   (5.6-6)

The required unit-impulse response, h_b(t), is the inverse Fourier transform of
H_b(jω). This can be obtained with the use of the linearity property by expressing
H_b(jω) as a linear combination of functions whose inverse Fourier transforms we
know. We thus express Eq. (5.6-6) as

H_b(jω) = 1 + (α − β)/(β + jω)   (5.6-7)

Thus, by use of the linearity property, Eqs. (5.5-1), together with the Fourier transforms
given by Eqs. (5.2-6) and (4.1-16), we have

h_b(t) = δ(t) + (α − β) e^{−βt} u(t)   (5.6-8)

You should convince yourself of the correctness of this result by convolving the
given h_a(t) with the h_b(t) we have just determined to show that the result is the given
h(t). We'll discuss this important technique in more detail as part of our discussion
of the bilateral Laplace transform.
of the bilateral Laplace transform.

5.7 AN APPLICATION OF THE TIME- AND FREQUENCY-SHIFT PROPERTIES

Let the unit-impulse response of a desired stable LTI system be h(t) and let its
transfer function be H(jω). Sometimes, h(t) of the desired system is not zero in
the interval −t₀ < t < 0, so that the desired system is not causal. Thus, instead of
constructing the desired system, we construct one with the unit-impulse response
h_d(t) = h(t − t₀), which is zero for t < 0. How are the gain and phase shift affected?
To answer this, we use the time-shift property, Eqs. (5.5-16), from which we have

H_d(jω) = e^{−jωt₀} H(jω)   (5.7-1)

Thus the gain of the constructed system is

|H_d(jω)| = |H(jω)|   (5.7-2)

because |e^{−jωt₀}| = 1, and the phase shift is

∠H_d(jω) = ∠H(jω) − ωt₀   (5.7-3)

Thus we note that the gain is unaffected. However, the difference between the phase
shift of the constructed system and that of the desired system is (−ωt₀). A graph of
this phase difference versus ω is a straight line with a slope of −t₀. Thus we note, in
accordance with our results in Section 1.4, that a phase shift that is proportional to
frequency corresponds to a time shift that is equal to the slope of the straight line. A
negative slope corresponds to a delay, and a positive slope corresponds to an

advance. It is for this reason that, to eliminate phase distortion of a filter, one
attempts to make the filter phase shift proportional to ω within the system pass band.
The dual of the time-shift property is the frequency-shift property given by Eqs.
(5.5-19). Consider an L₁ waveform f(t) with the Fourier transform F(jω). For
example, the waveform could be that of a musical composition for which the spectrum
is in the audio band (which is less than about 25 kHz). In accordance with our
discussion of symmetry properties in Section 5.5, F(jω) will extend from about
−25 to +25 kHz. Such a waveform cannot be transmitted efficiently by radio. One
reason is that, for efficient transmission, the length of the transmitting antenna
should be on the order of a wavelength of the waveform being transmitted (for
example, the length of an efficient short dipole is 1/2 wavelength). The relation
between wavelength, λ, and frequency, f, is λf = c, where c is the velocity of light
(approximately 3 × 10⁸ m/s in the atmosphere). Thus the wavelength corresponding
to a frequency of 100 MHz is 3 m (100 MHz is in the middle of the FM band). To
utilize an FM antenna, we move the spectrum center of f(t) to 100 MHz. For this, we
form the waveform g(t) by multiplying f(t) by a 100-MHz sinusoid as

g(t) = f(t) cos(ω₀t)   (5.7-4)

in which, for our example, ω₀ = (2π) × 10⁸ rad/s. The waveform g(t) then can be
transmitted efficiently with an antenna of reasonable size. This is a form of amplitude
modulation. It is called amplitude modulation because the amplitude of the
sinusoid is being modulated (i.e., altered) by f(t). The Fourier transform of g(t) is
easily obtained with the use of the frequency-shift property. For this, we first express
the cosine in exponential form, which, from Eq. (A-14), is

cos(ω₀t) = ½(e^{jω₀t} + e^{−jω₀t})   (5.7-5)

Equation (5.7-4) can then be expressed in the form

g(t) = ½ f(t) e^{jω₀t} + ½ f(t) e^{−jω₀t}   (5.7-6)

The expression for g(t) is now in the form required to directly apply the frequency-shift
property, Eqs. (5.5-19), from which we obtain

G(jω) = ½ F(j(ω − ω₀)) + ½ F(j(ω + ω₀))   (5.7-7)

The first term of this equation is the spectrum of f(t) centered at ω = ω₀, and the
second term is the spectrum of f(t) centered at ω = −ω₀. The second term is due to
the second term of Eq. (5.7-5), which is required because cos(ω₀t) is a real function
of t. You should show that G(jω) satisfies the symmetry properties, Eq. (5.5-5),
because g(t) is a real function of t. The result given by Eq. (5.7-7) is often called the
modulation theorem because it relates to the Fourier transform of an amplitude-
modulated waveform.
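
An FFT-based illustration of this spectrum shift is sketched below (not part of the text; the sample rate, the Gaussian test pulse, and the 100-Hz carrier are assumed). Multiplying the low-frequency pulse by cos(ω₀t) moves its spectral peak from 0 Hz to ±100 Hz, as Eq. (5.7-7) predicts.

import numpy as np

# Assumed sampled example of the modulation theorem, Eq. (5.7-7).
fs = 2000.0                                  # sample rate in hertz
t = np.arange(-5.0, 5.0, 1.0/fs)
f = np.exp(-20.0*t**2)                       # a smooth low-frequency test pulse
w0 = 2.0*np.pi*100.0                         # carrier at 100 Hz
g = f*np.cos(w0*t)
freqs = np.fft.fftshift(np.fft.fftfreq(t.size, 1.0/fs))
F = np.fft.fftshift(np.fft.fft(f))
G = np.fft.fftshift(np.fft.fft(g))
print(freqs[np.argmax(np.abs(F))])           # peak of |F| sits at 0 Hz
print(freqs[np.argmax(np.abs(G))])           # peak of |G| sits at -100 or +100 Hz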

5.8 AN APPLICATION OF THE TIME-DIFFERENTIATION PROPERTY

The time-differentiation property often can be used to simplify the determination of
a Fourier transform. We also shall see that the use of time differentiation lends a
deeper understanding of the Fourier transform. It is often the case in science and
mathematics that a simplified procedure for obtaining a result lends a deeper understanding
of the result. This is one of the reasons why scientists and mathematicians
search for simplified procedures or proofs of known results.
To illustrate the determination of a Fourier transform by using the time-differentiation
property, consider the function f(t) shown in Fig. 5.8-1. The Fourier transform,
F(jω), of f(t) can be obtained by integration using Eq. (5.1-2). However, it
can be obtained more easily by use of the time-differentiation property. The procedure
is to differentiate the function until the differentiated function contains impulses
whose transforms are particularly simple to obtain. For our example, we first differentiate
f(t) to obtain f′(t), which is shown in Fig. 5.8-2. This function is discontinuous.
However, we can differentiate f′(t) by using the result obtained in Section
3.3, Eq. (3.3-15). With the use of that result, we obtain
3.3, Eq. (3.3-15). With the use of that result, we obtain

E
f ” ( t ) = - (d[t
2a
+ ( T + 2a)] - d [ t + TI - d[t - T ] + d[t - (T + 2~()]) (5.8-1)

Now, from Eq. 5.2-6, g(d(t)}= 1 so that, with the use of the time-shift property, we
have that the Fourier transform of a unit impulse centered at t = to is

Thus we have

- ,jioT - e-joJT + e-jw(T+2a)


1 (5.8-3)

This expression can be put into a nicer form by grouping the terms as follows:

(5.8-4)
E
= - (2jsin(cm)){2jsin[w(T
2a
+ a)])

Fig. 5.8-1 Graph of f(t).

Now, from the time-differentiation property, ℱ{f″(t)} = (jω)² F(jω). Thus we have
from Eq. (5.8-4)

(jω)² F(jω) = −(2E/a) sin(ωa) sin[ω(T + a)]   (5.8-5)

The solution of this algebraic equation for F(jω) is

F(jω) = (2E/a) sin(ωa) sin[ω(T + a)]/ω²   (5.8-6)

The algebraic manipulations have been presented in some detail so that you can
observe how an expression can be manipulated to put it in a nice form. As a check
on our work, note that F(0) = 2E(T + a). This is equal to the area under f(t) in
accordance with Eq. (5.2-4). A second check of our result is obtained by noting that
f(t) is a rectangle for the special case in which a = 0. Note, for this special case, that
our expression is identical with the one we previously obtained in Section 5.2, Eq.
(5.2-3). These two checks do not guarantee the correctness of our expression, but
they do give us good confidence in our result.
As a second example, we shall use the time-differentiation property to determine
the Fourier transform of

g(t) = E e^{−αt} u(t),  α > 0   (5.8-7)

Fig. 5.8-2 Graph of f′(t).



The derivative of g(t) in accordance with Eq. (3.3-15) is

g′(t) = E δ(t) − Eα e^{−αt} u(t)
      = E δ(t) − α g(t)   (5.8-8)

With the use of the time-differentiation property and Eq. (5.8-2), the Fourier transform
of Eq. (5.8-8) is

(jω) G(jω) = E − α G(jω)   (5.8-9)

This is an algebraic equation for G(jω) whose solution is

G(jω) = E/(α + jω)   (5.8-10)

This is the same result we previously obtained in Section 4.1, Eq. (4.1-16), by direct
integration.
The essence of the method is to differentiate until either (a) all impulses are
obtained, as in the first example, or (b) an impulse plus a function are obtained, as
in the second example. If all impulses are obtained, the Fourier transform is obtained
with just a bit of algebra, as in our first example. If an impulse plus a function are
obtained as, for example, Aδ(t) + p(t), then the Fourier transform of p(t) can be
obtained either by direct integration, or by differentiating p(t), or by forming a
differential equation as in our second example, Eq. (5.8-8). The technique of obtaining
the Fourier transform of a function by differentiation is seen to be very useful,
especially because using direct integration to obtain the Fourier transform can be a
bit tedious.
the smoothness of a time function and the asymptotic behavior of its Fourier trans-
form. Consider an L , function for which the first n - 1 derivatives contain no
impulses and the nth derivative contains K impulses. We then can express the nth
derivative off(t) as

where p ( t ) contains no impulses. The Fourier transform of this equation is



so that

F(jω) = P(jω)/(jω)^n + [1/(jω)^n] Σ_{k=1}^{K} A_k e^{−jωt_k}   (5.8-12)

Now, |P(jω)| → 0 as ω → ∞ because p(t) contains no impulses. Thus, as ω → ∞,
the magnitude of the first term of Eq. (5.8-12) approaches zero faster than 1/ω^n.
Consequently, we have the result that, as ω → ∞, |F(jω)| ~ ω^{−n}, where n is the
number of times that f(t) must be differentiated to obtain an impulse. Note that if the
nth derivative of f(t) contains an impulse at t = t_k, then the (n − 1)st derivative of
f(t) must be discontinuous at t = t_k. For example, the rectangle is a discontinuous
function, so the magnitude of its Fourier transform should go to zero as 1/ω in
accordance with our result. This indeed is in accordance with our result given by Eq.
(5.2-3). The first derivative of the function shown in Fig. 5.8-1 is discontinuous, so
that the magnitude of its Fourier transform goes to zero as 1/ω² in accordance with
our result, Eq. (5.8-6). Note that the smoother the time function, the faster its Fourier
transform goes to zero as ω → ∞. Thus we expect that the smoother a time function
is made, the more narrowband is its Fourier transform. This concept is important in
the design of pulses for transmission through a given LTI system.
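
The decay rule just derived can be observed numerically. In the sketch below (an illustration with assumed test pulses and grid), a discontinuous rectangle gives |F(jω)|·ω bounded as ω grows, while a continuous trapezoid gives |F(jω)|·ω² bounded, corresponding to n = 1 and n = 2 respectively.

import numpy as np

# Assumed test pulses: a rectangle (discontinuous, n = 1) and a trapezoid
# (continuous with a discontinuous first derivative, n = 2).
def fourier_mag(f_vals, t, w):
    dt = t[1] - t[0]
    return abs(np.sum(f_vals*np.exp(-1j*w*t))*dt)

t = np.linspace(-4.0, 4.0, 800001)
rect = np.where(np.abs(t) <= 1.0, 1.0, 0.0)
trap = np.clip(2.0 - np.abs(t), 0.0, 1.0)
for w in (50.0, 100.0, 200.0):
    print(w, round(fourier_mag(rect, t, w)*w, 3), round(fourier_mag(trap, t, w)*w**2, 3))
# The first product stays bounded (1/w decay); the second stays bounded (1/w^2 decay).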

5.9 AN APPLICATION OF THE SCALING PROPERTY

By use of the time-differentiation property, we just showed that the smoother a time
function, the faster its Fourier transform goes to zero with increasing ω. By use of
the scaling property, we also note that the width of a time function and the width of
its Fourier transform are inversely related. Because the transfer function of an LTI
system, H(jω), is the Fourier transform of its unit-impulse response, h(t), the result
shown in this section means that the bandwidth of an LTI system is inversely related
to the time width of its unit-impulse response. For example, consider a stable LTI
system with the unit-impulse response h₁(t) = e^{−t}u(t). The system transfer function
is

H₁(jω) = ∫_{−∞}^{∞} h₁(t) e^{−jωt} dt = ∫_{0}^{∞} e^{−t} e^{−jωt} dt = 1/(1 + jω)   (5.9-1)

Let us now vary the time width of h₁(t) by letting h(t) = h₁(at), where a > 0. Then

h(t) = e^{−at}u(at) = e^{−at}u(t)   (5.9-2)



Note that the time width of h(t) is proportional to 1/a. For example, if we define the
time width to be the time at which h(t) drops to 1/√2 of its maximum value, then
the time width is T = ln(2)/2a = 0.34657/a. Now, using the scaling property of
the Fourier transform, we have

H(jω) = (1/a) H₁(jω/a) = 1/(a + jω)   (5.9-3)

This, of course, is the result we have previously obtained by direct integration, Eq.
(4.1-16). The gain of this system is

|H(jω)| = 1/(a² + ω²)^{1/2}   (5.9-4)

The graph of the gain is a bell-shaped curve with a maximum value equal to 1/a at
ω = 0. As discussed in Section 4.3 following Eq. (4.3-13), the gain drops by 3 dB
(that is, to 1/√2 of its maximum value) at ω = a. Thus the system is a low-pass
filter with a 3-dB bandwidth, B, equal to a. Using the time width T = 0.34657/a
defined above, we have that the product of T, the time width of h(t), and B, the
system 3-dB bandwidth, is TB = 0.34657. Thus we observe that the bandwidth is
inversely proportional to the time width of h(t).
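
The constancy of the product TB can be checked directly, as in the assumed sketch below: for several values of a, the 3-dB bandwidth of 1/(a² + ω²)^{1/2} is located numerically and multiplied by the time width T = ln(2)/2a.

import numpy as np

# Assumed values of a; in each case the product of time width and bandwidth
# should equal ln(2)/2 = 0.34657, independent of a.
for a in (0.5, 1.0, 4.0):
    T = np.log(2.0)/(2.0*a)                  # time at which h(t) falls to 1/sqrt(2) of h(0)
    w = np.linspace(0.0, 10.0*a, 200001)
    gain = 1.0/np.sqrt(a**2 + w**2)
    B = w[np.argmin(np.abs(gain - gain[0]/np.sqrt(2.0)))]   # numerical 3-dB bandwidth
    print(a, round(T*B, 5))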
The inverse relation between the width of a time function and the bandwidth of its
transform imposes certain trade-offs in system design. For example, one method of
transmitting data from one computer to another is to send a sequence of pulses along
a transmission line in which a pulse represents one of the binary values. In order for
the receiving computer to determine the pulse sequence that was sent, it is necessary
that the pulses not overlap very much. This means that the time from the start of a
pulse to the start of the next pulse, T_s, must be proportional to the pulse width, T_p.
Now the data rate, which is the number of pulses per second that can be sent, is
N = 1/T_s. Because T_s is proportional to T_p, we have that N is proportional to 1/T_p.
In accordance with our discussion above, the pulse width, T_p, is inversely proportional
to its bandwidth, B, so that B is proportional to 1/T_p. Because 1/T_p is
proportional to N, we observe that the data rate is proportional to the required
transmission line bandwidth, so that the higher the data rate, the larger must be
the transmission line bandwidth. This is one of the reasons why fiber-optic transmission
lines have replaced coaxial transmission lines for communication between
computers.

5.10 A PARSEVAL RELATION AND APPLICATIONS

In this section we shall derive and discuss an important and useful relation called a
Parseval relation.⁴ This relation will be derived using the results we have already
obtained as a further illustration of their application.
For the derivation of the Parseval relation, we consider two complex L₁ functions,
f₁(t) and f₂(t). The convolution of f₁*(−t) with f₂(t) is

g(t) = ∫_{−∞}^{∞} f₁*(τ − t) f₂(τ) dτ   (5.10-1)

The Fourier transform of g(t) is, in accordance with the convolution property, Eq.
(5.5-20),

G(jω) = ℱ{f₁*(−t)} F₂(jω)   (5.10-2)

Now, to obtain the Fourier transform of f₁*(−t), we first use the symmetry property
given by Eq. (5.5-4) to obtain

ℱ{f₁*(t)} = F₁*(−jω)   (5.10-3)

Then, with the use of the scaling property with c = −1, Eq. (5.5-15), we obtain

ℱ{f₁*(−t)} = F₁*(jω)   (5.10-4)

Thus we obtain for Eq. (5.10-2)

G(jω) = F₁*(jω) F₂(jω)   (5.10-5)

Now, the inverse Fourier transform of G(jω) evaluated at t = 0 is, from Eq. (5.1-3),

g(0) = (1/2π) ∫_{−∞}^{∞} G(jω) dω   (5.10-6)

⁴Marc-Antoine Parseval des Chênes, 1755-1836. Many relations of the form discussed in this section are
called Parseval relations, although many only remotely resemble Parseval's original result, which he
considered only as a formula for summing certain types of series. His result was later extended to Fourier
theory and more abstract treatments of analysis. Many of the relations so obtained are called Parseval
relations. The relation obtained in this section is such an example.

However, from Eq. (5.10-1), we have

g(0) = ∫_{−∞}^{∞} f₁*(t) f₂(t) dt   (5.10-7)

Because both Eqs. (5.10-6) and (5.10-7) are equal to g(0), we have (with the use of t
instead of τ as the dummy variable of integration)

∫_{−∞}^{∞} f₁*(t) f₂(t) dt = (1/2π) ∫_{−∞}^{∞} F₁*(jω) F₂(jω) dω   (5.10-8)

This is the desired Parseval relation. An important special case is that for which
f₁(t) = f₂(t) = f(t):

∫_{−∞}^{∞} |f(t)|² dt = (1/2π) ∫_{−∞}^{∞} |F(jω)|² dω   (5.10-9)

This last relation is often called the energy theorem. To understand the reason for
this name, let f(t) be the current through a 1-Ω resistor. Then the total energy
dissipated in the resistor is given by the left-hand side of Eq. (5.10-9). Because
the right-hand side of this equation is the integral over all frequencies of
(1/2π)|F(jω)|², we can interpret it as an energy density spectrum in joules/radian
per second. Equivalently, because one hertz equals 2π radians per second, we can
interpret |F(jω)|² as an energy density spectrum in joules/hertz. For this reason,
|F(jω)|² is often called the energy density spectrum of f(t).
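
A numerical check of the energy theorem for an assumed waveform is sketched below: for f(t) = e^{−t}u(t), the time-domain energy is 1/2, and a truncated Riemann sum of (1/2π)|F(jω)|² = (1/2π)/(1 + ω²) over ω gives nearly the same value.

import numpy as np

# Assumed example of Eq. (5.10-9) with f(t) = e^{-t}u(t) and F(jw) = 1/(1 + jw).
t = np.linspace(0.0, 50.0, 500001)
dt = t[1] - t[0]
energy_time = np.sum(np.exp(-t)**2)*dt                 # integral of |f(t)|^2
w = np.linspace(-2000.0, 2000.0, 2000001)
dw = w[1] - w[0]
energy_freq = np.sum(1.0/(1.0 + w**2))*dw/(2.0*np.pi)  # (1/2pi) integral of |F(jw)|^2
print(round(energy_time, 4), round(energy_freq, 4))    # both are close to 0.5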
To make the interpretation of an energy density spectrum more concrete, consider
an ideal method to measure the energy density spectrum of a waveform, x(t). Ideally,
to measure the energy of x(t) in the band of frequencies 0 < ω < ω₁, we would
apply x(t) to an ideal low-pass filter with the cutoff frequency ω₁. An ideal low-pass
filter is one that has unity gain in the passband and has zero gain for all frequencies
above the cutoff frequency. That is,

H(jω) = 1 for |ω| < ω₁, and H(jω) = 0 otherwise   (5.10-10)

As shown in Fig. 5.10-1, the output of the ideal low-pass filter is a voltage
which is applied to a 1-Ω resistor. In accordance with the convolution property,
Eq. (5.5-20), the Fourier transform of the output waveform, y(t), is

Fig. 5.10-1 The measurement of an energy density spectrum.

Y(jω) = H(jω)X(jω). Thus, with the use of the energy theorem, Eq. (5.10-9), the
total energy dissipated in the 1-Ω resistor is

∫_{−∞}^{∞} |y(t)|² dt = (1/2π) ∫_{−∞}^{∞} |H(jω)|² |X(jω)|² dω   (5.10-11)

Thus, with the use of Eq. (5.10-10), the total energy dissipated in the resistor is

∫_{−∞}^{∞} |y(t)|² dt = (1/2π) ∫_{−ω₁}^{+ω₁} |X(jω)|² dω   (5.10-12)

Similarly, to measure the total energy contained by the waveform x(t) in the
frequency band ω₁ < ω < ω₂, we would use an ideal bandpass filter with unity
gain in the given band and zero gain outside the given band. In accordance with Eq.
(5.10-11), the total energy dissipated by the 1-Ω resistor is then

∫_{−∞}^{∞} |y(t)|² dt = (1/2π) ∫_{−ω₂}^{−ω₁} |X(jω)|² dω + (1/2π) ∫_{ω₁}^{ω₂} |X(jω)|² dω   (5.10-13)

If x(t) is a real function of t, then |X(jω)| is an even function of ω in accordance
with the symmetry property, Eq. (5.5-5). Thus, for the physical case in which x(t) is
a real function of t, both integrals on the right-hand side of Eq. (5.10-13) have the
same value, so that we also can express Eq. (5.10-13) in the form

∫_{−∞}^{∞} |y(t)|² dt = (1/π) ∫_{ω₁}^{ω₂} |X(jω)|² dω   (5.10-14)

Using the relation ω = 2πf, this expression also can be written as

∫_{−∞}^{∞} |y(t)|² dt = 2 ∫_{f₁}^{f₂} |X(j2πf)|² df   (5.10-15)

Observe that the experimental measurement of the energy contained by a real time
function, x(t), in a given frequency band f₁ < f < f₂ hertz is numerically equal to
twice the area under |X(j2πf)|² in the given band. That is, |X(j2πf)|² is the actual
energy density spectrum of x(t) in joules per hertz that we would ideally measure in
the laboratory.
An important application of the energy theorem is to determine the integral
square error of an approximation. To illustrate this application, we assume that we
desire to construct an LTI system with the unit-impulse response

h_d(t) = 1 − t/T for 0 ≤ t ≤ T, and h_d(t) = 0 otherwise   (5.10-16)

Rather than construct an LTI system with the desired unit-impulse response, it is
simpler to approximate the desired system by constructing an LTI system with the
unit-impulse response

h_a(t) = A e^{−at} u(t)   (5.10-17)

The difference between the two transfer functions is

E(jω) = H_d(jω) − H_a(jω)   (5.10-18a)

We use the integral square error as the measure of this difference, which is

𝓔 = ∫_{−∞}^{∞} |E(jω)|² dω   (5.10-18b)

Now

H_d(jω) = ∫_{0}^{T} (1 − t/T) e^{−jωt} dt   (5.10-19)

Rather than evaluating this integral, we shall determine the transfer function by using
the Fourier transform properties derived in Section 5.5 in order to further illustrate
how they can be used to simplify calculations. You should draw diagrams of the
various functions discussed below in order to fully understand the various equations.
We begin by differentiating h_d(t), which is

h_d′(t) = δ(t) − f_d(t)   (5.10-20a)

where

f_d(t) = 1/T for 0 < t < T, and f_d(t) = 0 otherwise   (5.10-20b)

Thus, in accordance with the differentiation property,

jω H_d(jω) = 1 - F_d(jω)    (5.10-21)

To determine F_d(jω), we note that

f_d(t) = (1/T)[u(t) - u(t - T)]    (5.10-22)

Thus, with the use of the differentiation property we have

jω F_d(jω) = (1/T)[1 - e^{-jωT}]    (5.10-23)

so that

F_d(jω) = (1 - e^{-jωT}) / (jωT)    (5.10-24)

Substituting Eq. (5.10-24) into Eq. (5.10-21), we then obtain

H_d(jω) = (1/jω)[1 - (1 - e^{-jωT}) / (jωT)]    (5.10-25)

You should verify this result by actually evaluating the integral in Eq. (5.10-19) and
note how much effort was saved by using the Fourier transform properties. Now,
from Eq. (4.1-16), the Fourier transform of h_a(t) is

H_a(jω) = A / (a + jω)    (5.10-26)

The error, E, can now be evaluated by substituting the expressions for H_d(jω) and
H_a(jω) into Eqs. (5.10-18) and evaluating the resulting integral. This approach
unfortunately leads to integrals that are difficult to evaluate. A much better approach
is to use the energy theorem, from which we have

E = ∫_{-∞}^{∞} |E(jω)|² dω = 2π ∫_{-∞}^{∞} |e(t)|² dt    (5.10-27)

where

e(t) = h_d(t) - h_a(t)    (5.10-28)

Thus we have

E = 2π ∫_{-∞}^{∞} |e(t)|² dt = 2π ∫_{-∞}^{∞} |h_d(t) - h_a(t)|² dt
  = 2π ∫_{-∞}^{∞} h_d²(t) dt + 2π ∫_{-∞}^{∞} h_a²(t) dt - 4π ∫_{-∞}^{∞} h_d(t) h_a(t) dt    (5.10-29)

Now

∫_{-∞}^{∞} h_d²(t) dt = ∫₀^T [1 - t/T]² dt = T/3    (5.10-29a)

∫_{-∞}^{∞} h_a²(t) dt = ∫₀^∞ A² e^{-2at} dt = A²/(2a)    (5.10-29b)

and

∫_{-∞}^{∞} h_d(t) h_a(t) dt = A ∫₀^T (1 - t/T) e^{-at} dt = A[1/a - (1 - e^{-aT})/(a²T)]    (5.10-29c)

Substituting these values, we then obtain

E = 2π{ T/3 + A²/(2a) - 2A[1/a - (1 - e^{-aT})/(a²T)] }    (5.10-30)

A better form of the error is the normalized error, which is the error E divided by the
integral-squared value of H_d(jω). With the use of Eq. (5.10-29a), this value is

D = ∫_{-∞}^{∞} |H_d(jω)|² dω = 2π ∫_{-∞}^{∞} h_d²(t) dt = 2πT/3    (5.10-31)

Thus the normalized error is

E_n = E/D = 1 + (3A²)/(2aT) - (6A/T)[1/a - (1 - e^{-aT})/(a²T)]    (5.10-32)

If we choose a = 1/T, the normalized error is

E_n = 1 + (3/2)A² - 6e^{-1}A    (5.10-33)

The value of E_n in this expression is the smallest for A = 2/e ≈ 0.736, for which it
is E_n = 1 - 6e^{-2} ≈ 0.188. The percentage integral-square error thus is 18.8%. In
the design of a filter, the transfer function is specified. The system impulse response
of the desired filter may be one that is difficult to construct, and so a system with an
impulse response that is close to the desired one is considered. The method used in
this example is a good method for comparing various approximations and to deter-
mine the best values of parameters (such as A and a in the example above) to use in
the approximation.
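A brief numerical sketch of this comparison method follows; it is not part of the text. It evaluates the normalized integral-square error directly from the two impulse responses (with T = 1 and a = 1/T assumed for illustration) and searches over the amplitude A, recovering the values A ≈ 0.736 and E_n ≈ 0.188 quoted above.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

T = 1.0          # assumed pulse length
a = 1.0 / T      # the choice a = 1/T made in the text

def h_d(t):
    # Desired impulse response: 1 - t/T on [0, T], zero elsewhere.
    return np.where((t >= 0) & (t <= T), 1.0 - t / T, 0.0)

def h_a(t, A):
    # Approximating impulse response: A * exp(-a t) for t >= 0.
    return np.where(t >= 0, A * np.exp(-a * t), 0.0)

D = quad(lambda t: h_d(t) ** 2, 0.0, T)[0]   # integral of h_d^2, equal to T/3

def E_n(A):
    # Normalized integral-square error between h_d and h_a.
    err = quad(lambda t: (h_d(t) - h_a(t, A)) ** 2, 0.0, 50.0 * T)[0]
    return err / D

res = minimize_scalar(E_n, bounds=(0.0, 2.0), method="bounded")
print(res.x)       # about 0.736  (= 2/e)
print(res.fun)     # about 0.188  (= 1 - 6*exp(-2))
```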
As a final illustration of an application of the Parseval relation, we illustrate its
use in the evaluation of definite integrals. This application is not directly related to
system theory. However, you may have wondered in the past how many of the
integrals you’ve seen in integral tables were determined. Well, one method is to
use a mathematical result by which the value of the definite integral can be indirectly
determined. One of the mathematical results used is the Parseval relation, Eq.
(5.10-8). To illustrate its use, consider the evaluation of the integral

I = ∫₀^∞ [sin(aω) sin(bω) / ω²] dω,   a > b > 0    (5.10-34)

To use the Parseval relation, Eq. (5.10-8), we first want to extend the lower limit of
the integral to -∞. For this, we note that the function being integrated is an even
function of ω. Thus, from our discussion in Section 5.3 of even and odd functions,
Eq. (5.3-3b), we have

I = (1/2) ∫_{-∞}^{∞} [sin(aω) sin(bω) / ω²] dω    (5.10-35)

Now the Fourier transform of a rectangle was determined in Section 5.2, Eq. (5.2-3).
From that result, we observe that the function under the integral above is similar to
the product of the Fourier transform of two rectangles. Thus we express the functions
being integrated in the form of the product

I = (ab/2) ∫_{-∞}^{∞} [sin(aω)/(aω)] [sin(bω)/(bω)] dω    (5.10-36)

Now the Parseval relation, Eq. (5.10-8), is

∫_{-∞}^{∞} f₁*(t) f₂(t) dt = (1/2π) ∫_{-∞}^{∞} F₁*(jω) F₂(jω) dω    (5.10-37)

Thus we let F₁(jω) = sin(aω)/(aω) and F₂(jω) = sin(bω)/(bω) for which, from our
result, Eq. (5.2-3), f₁(t) and f₂(t) are the rectangles

f₁(t) = (1/2a) r((t + a)/2a)   and   f₂(t) = (1/2b) r((t + b)/2b)    (5.10-38)

Thus, with the use of Eq. (5.10-37),

I = (ab/2) · 2π ∫_{-∞}^{∞} f₁*(t) f₂(t) dt
  = (ab/2) · 2π · (1/(4ab)) ∫_{-∞}^{∞} r((t + a)/2a) r((t + b)/2b) dt    (5.10-39)

Now note that r[(t + a)/2a] r[(t + b)/2b] = r[(t + b)/2b] because a > b > 0. Thus
the value of the integral in Eq. (5.10-39) is 2b because it is just the area of a rectangle
with a height equal to one and a width equal to 2b. We thus have that the value of the
integral, Eq. (5.10-35), is

I = (ab/2) · 2π · (1/(4ab)) · 2b = (π/2) b    (5.10-40)

It is interesting to note that the value of the integral is independent of a as long as
a > b > 0. The technique just illustrated is used to evaluate many definite integrals.
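As a sanity check (not in the text), the short sketch below integrates Eq. (5.10-34) numerically for one assumed pair a > b > 0 and compares the result with (π/2)b; the particular values a = 2 and b = 1 are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

a, b = 2.0, 1.0   # assumed values with a > b > 0

def integrand(w):
    # sin(a*w)*sin(b*w)/w^2 tends to a*b as w -> 0, so guard the origin.
    return a * b if w == 0.0 else np.sin(a * w) * np.sin(b * w) / w**2

# Truncate the upper limit; the integrand decays like 1/w^2, so the
# truncation error is on the order of 1/300 here.
I_num, _ = quad(integrand, 0.0, 300.0, limit=1000)
print(I_num)            # approximately pi*b/2
print(np.pi * b / 2.0)  # 1.5708...
```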

5.11 TRANSFER FUNCTION CONSTRAINTS

The input-output mapping of any BIBO-stable LTI system was shown in Section 3.7
to be completely determined by its unit-impulse response, h(t). In Section 3.5, we
discussed the fact that every physical system is causal and we showed that a necessary
and sufficient condition for an LTI system to be causal is that h(t) = 0 for t < 0.
This condition imposes certain constraints on the transfer function, H(jω). Some of
the limitations causality imposes on H(jω) are discussed in this section. Thus we
consider only physical LTI systems for which h(t) = 0 for t < 0 and h(t) is a real
function of t for our development in this section.

5.11A The Hilbert Transform5


One constraint that causality imposes on the transfer function is that its real and
imaginary parts cannot be independently specified. In fact, the real part of the
transfer function can be determined from its imaginary part and vice versa. The

⁵David Hilbert (1862-1943) was one of the leading mathematicians of the twentieth century who made
major contributions to many fields of mathematics.

relation between the real and imaginary parts of the transfer function is called a
Hilbert transform, which we develop and discuss in this section.
For our development, first express h(t) in terms of its even and odd parts as

h(t) = h_e(t) + h_o(t)    (5.11-1)

where, from our discussion in Section 5.3, the even part of h(t) is

h_e(t) = (1/2)[h(t) + h(-t)]    (5.11-2a)

and the odd part of h(t) is

h_o(t) = (1/2)[h(t) - h(-t)]    (5.11-2b)

We assume that neither h_e(t) nor h_o(t) is zero for all t. The resolution of the unit
impulse, δ(t), into two nonzero components as in Eq. (5.11-1) is impossible because,
as discussed in Section 3.3, the unit impulse, δ(t), is an even function. Consequently,
unit-impulse responses, h(t), that contain an impulse at t = 0 are excluded from our
present discussion but will be included at the end of this discussion.
Because h(t) = 0 for t < 0, we have that h(-t) = 0 for t > 0. Consequently,
from Eqs. (5.11-2) we obtain

h_e(t) = (1/2) h(t)  for t > 0    (5.11-3a)

and

h_o(t) = (1/2) h(t)  for t > 0    (5.11-3b)

Note that h_e(t) = h_o(t) for t > 0 and consequently h_e(t) = -h_o(t) for t < 0. This is
logical because, in order that h(t) = 0 for t < 0, the sum of the even and the odd
parts of h(t) must equal zero for t < 0. This immediately implies that the even and
the odd parts of h(t) must be equal for t > 0, from which Eqs. (5.11-3) follow. Thus
we observe that h_e(t) can be determined from h_o(t) and also that h_o(t) can be
determined from h_e(t) of a causal LTI system.
Now, we showed in Section 4.1, Eq. (4.1-8), that the transfer function of a stable
LTI system is given by the relation

H(jω) = ∫_{-∞}^{∞} h(t) e^{-jωt} dt    (5.11-4)

This is recognized as the Fourier transform of h(t) in accordance with our discussion
in Section 5.1. Again, it should be noted that the transfer function only arose in

Section 4.1 in connection with our determination of the characteristic function of a


stable LTI system. It turned out that the phasor is a characteristic function of a stable
LTI system and its characteristic value is what we call the transfer function. As
shown in Section 4.1, the transfer function turns out to be the Fourier transform of
h(t) only as a consequence of the fact that the phasor is a characteristic function. If
some function other than the phasor were a characteristic function, then the transfer
function would not be the Fourier transform of h(t). Because the phasor generally is
not a characteristic function of a linear time-varying (LTV) system, the use of
Fourier transform theory in the study of LTV systems is limited.
For our present discussion, express the transfer function in rectangular form as

H(jω) = H_r(jω) + j H_i(jω)    (5.11-5)

where H_r(jω) is the real part and H_i(jω) is the imaginary part of H(jω). We now
make use of one of the symmetry properties of the Fourier transform we obtained in
Section 5.5, Eqs. (5.5-10). With the substitution of h(t) for f(t), these equations are

H_r(jω) = ∫_{-∞}^{∞} h_e(t) cos(ωt) dt    (5.11-6a)

and

H_i(jω) = -∫_{-∞}^{∞} h_o(t) sin(ωt) dt    (5.11-6b)

Because h_e(t) cos(ωt) and h_o(t) sin(ωt) are even functions of t, the integral of these
functions from -∞ to 0 is equal to their integral from 0 to ∞. Consequently, as in
Section 5.3, Eq. (5.3-3b), we can express Eqs. (5.11-6) as twice the integral from 0
to ∞ as

H_r(jω) = 2 ∫₀^∞ h_e(t) cos(ωt) dt    (5.11-7a)

and

H_i(jω) = -2 ∫₀^∞ h_o(t) sin(ωt) dt    (5.11-7b)

Because these integrals are only over positive values of t and because from
Eqs. (5.11-3) we have h_e(t) = h_o(t) for t > 0, we can replace h_e(t) by h_o(t) in Eq.
(5.11-7a) and we can replace h_o(t) by h_e(t) in Eq. (5.11-7b) to obtain

H_r(jω) = 2 ∫₀^∞ h_o(t) cos(ωt) dt    (5.11-8a)

and

H_i(jω) = -2 ∫₀^∞ h_e(t) sin(ωt) dt    (5.11-8b)

From Eq. (5.1-3) [also Eq. (4.4-15)], the inverse Fourier transform of H(jω) is

h(t) = (1/2π) ∫_{-∞}^{∞} H(jω) e^{jωt} dω    (5.11-9)

Now express the product of H(jω) and the phasor in rectangular form:

H(jω) e^{jωt} = [H_r(jω) + j H_i(jω)][cos(ωt) + j sin(ωt)]
             = [H_r(jω) cos(ωt) - H_i(jω) sin(ωt)]
               + j[H_r(jω) sin(ωt) + H_i(jω) cos(ωt)]    (5.11-10)

Substituting in Eq. (5.11-9), we have

h(t) = (1/2π) ∫_{-∞}^{∞} [H_r(jω) cos(ωt) - H_i(jω) sin(ωt)] dω
       + j(1/2π) ∫_{-∞}^{∞} [H_r(jω) sin(ωt) + H_i(jω) cos(ωt)] dω    (5.11-11)
Because h(t) is real, we expect the value of the second integral in Eq. (5.11-11) to be
zero. To show this, we first note from Eq. (5.11-8a) that H_r(jω) is an even function
of ω because cos(ωt) is an even function of ω and also from Eq. (5.11-8b) that
H_i(jω) is an odd function of ω because sin(ωt) is an odd function of ω. Because the
product of an even function and an odd function is an odd function, we have that
H_r(jω) sin(ωt) and H_i(jω) cos(ωt) are odd functions of ω. Thus the second integral
in Eq. (5.11-11) is the integral of an odd function of ω from -∞ to ∞, which is zero
in accordance with our discussion in Section 5.3, Eq. (5.3-3a). Thus we have for Eq.
(5.11-11)

h(t) = (1/2π) ∫_{-∞}^{∞} [H_r(jω) cos(ωt) - H_i(jω) sin(ωt)] dω    (5.11-12)

Note that H_r(jω) cos(ωt) is an even function of ω because both H_r(jω) and cos(ωt)
are even functions of ω. Also note that H_i(jω) sin(ωt) is an even function of ω
because both H_i(jω) and sin(ωt) are odd functions of ω and the product of two odd
functions is an even function. Using Eq. (5.3-3b), Eq. (5.11-12) also can be
expressed as twice the integral from 0 to ∞:

h(t) = (1/π) ∫₀^∞ [H_r(jω) cos(ωt) - H_i(jω) sin(ωt)] dω
     = (1/π) ∫₀^∞ H_r(jω) cos(ωt) dω - (1/π) ∫₀^∞ H_i(jω) sin(ωt) dω    (5.11-13)

Note that the first integral in Eq. (5.11-13) is an even function of t because cos(ωt) is
an even function of t and the second integral is an odd function of t because sin(ωt)
is an odd function of t. Thus, with Eq. (5.11-1), we have

h_e(t) = (1/π) ∫₀^∞ H_r(jω) cos(ωt) dω    (5.11-14a)

and

h_o(t) = -(1/π) ∫₀^∞ H_i(jω) sin(ωt) dω    (5.11-14b)

Notice that Eqs. (5.11-7a) and (5.11-14a) are a transform pair; also Eqs. (5.11-7b) and
(5.11-14b) are a transform pair. This observation, along with the fact from Eq.
(5.11-3) that h_e(t) = h_o(t) for t > 0, means that H_r(jω) and H_i(jω) are related.
To determine this relation explicitly, we substitute Eq. (5.11-14b) in Eq. (5.11-8a).
In order not to confuse the ω in Eq. (5.11-8a) with the integration variable, ω, in Eq.
(5.11-14b), we first use u for the integration variable instead of ω in Eq. (5.11-14b)
to express it as

h_o(t) = -(1/π) ∫₀^∞ H_i(ju) sin(ut) du    (5.11-15)

We now substitute Eq. (5.11-15) in Eq. (5.11-8a) to obtain

H_r(jω) = -(2/π) ∫₀^∞ ∫₀^∞ H_i(ju) sin(ut) cos(ωt) du dt    (5.11-16a)

Similarly, by substituting Eq. (5.11-14a) in Eq. (5.11-8b), we obtain

H_i(jω) = -(2/π) ∫₀^∞ ∫₀^∞ H_r(ju) cos(ut) sin(ωt) du dt    (5.11-16b)

These are important equations. Equation (5.11-16a) shows that the real part of the
transfer function can be determined from the imaginary part of the transfer function.
Note that Eq. (5.11-16b) is the inverse of Eq. (5.11-16a), which states that the
imaginary part of the transfer function can be determined from its real part. Transforms
of this type are known as Hilbert transforms, and Eqs. (5.11-16) are called a
Hilbert transform pair. Thus we have shown that the real and the imaginary parts of
the transfer function are related by a Hilbert transform. Again note that, to this point,
we have excluded from our development unit-impulse responses that contain an
impulse at t = 0.
As a simple example illustrating the Hilbert transforms, let the real part of a
transfer function be given as

H_r(jω) = 1/(1 + ω²)    (5.11-17)

We then can determine the imaginary part of the transfer function required for the
system to be causal from the Hilbert transform, Eq. (5.11-16b), by substituting Eq.
(5.11-17) to obtain

H_i(jω) = -(2/π) ∫₀^∞ ∫₀^∞ [1/(1 + u²)] cos(ut) sin(ωt) du dt    (5.11-18)

To evaluate this double integral, we first integrate with respect to the variable u. This
integral, whose value can be obtained from a standard table of definite integrals, is

∫₀^∞ [cos(ut)/(1 + u²)] du = (π/2) e^{-t},   t > 0    (5.11-19)

Substituting this result in Eq. (5.11-18) we obtain, again with the use of a standard
table of definite integrals,

H_i(jω) = -∫₀^∞ e^{-t} sin(ωt) dt = -ω/(1 + ω²)    (5.11-20)

By combining Eqs. (5.11-17) and (5.11-20), the transfer function obtained is

H(jω) = 1/(1 + ω²) - jω/(1 + ω²) = 1/(1 + jω)    (5.11-21)

In Section 4.1, this was shown to be the transfer function of a causal and stable LTI
system with the unit-impulse response h(t) = e^{-t} u(t).
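The consistency of this example can be checked numerically. The sketch below (an illustration, not from the text) computes the real and imaginary parts of the Fourier transform of the causal impulse response h(t) = e^{-t}u(t) by direct integration and compares them with Eqs. (5.11-17) and (5.11-20); the sample frequencies are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

def H_parts(w):
    # Real and imaginary parts of H(jw) for h(t) = exp(-t) u(t),
    # computed as integrals of h(t)cos(wt) and -h(t)sin(wt) over t >= 0.
    Hr, _ = quad(lambda t: np.exp(-t) * np.cos(w * t), 0.0, np.inf)
    Hi, _ = quad(lambda t: -np.exp(-t) * np.sin(w * t), 0.0, np.inf)
    return Hr, Hi

for w in (0.5, 1.0, 3.0):
    Hr, Hi = H_parts(w)
    print(w, Hr, 1.0 / (1.0 + w**2),    # matches Eq. (5.11-17)
             Hi, -w / (1.0 + w**2))     # matches Eq. (5.11-20)
```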
Even though the integrals in Eqs. (5.11-16) are generally not easy to evaluate, the
importance of the Hilbert transform relations for us is that they show that the real
and the imaginary components of the transfer function of a causal and stable LTI
system are related so that they cannot be independently specified. Equations are
important not just for calculation purposes, but also for their theoretical statements
as in our present instance.

If an impulse response, h(t), of a stable and causal system contains an impulse
with an area equal to A at t = 0, it can be expressed in the form h(t) = Aδ(t) + g(t),
where g(t) does not contain an impulse at t = 0. Thus the system transfer function is
H(jω) = A + G(jω). Because g(t) does not contain an impulse at t = 0, the real
and the imaginary parts of G(jω) are related by the Hilbert transform. Thus, for
cases in which h(t) contains an impulse at t = 0, the real part of the transfer function
is determined only to within an additive constant from the imaginary part of the
transfer function.

5.11B The Paley-Wiener Criterion


Causality not only imposes a relation between the real and the imaginary parts of the
system transfer function as discussed above, but it also imposes a constraint on the gain
that can be obtained. This constraint is called the Paley-Wiener criterion. The criterion is
a system interpretation of a mathematical theorem obtained by Raymond E. A. C.
Paley and Norbert Wiener.⁶
The criterion applies only to LTI systems for which

∫_{-∞}^{∞} |H(jω)|² dω < ∞    (5.11-22)

where H(jω) is the system transfer function. For such systems, the theorem states
that the LTI system is not causal if

I = ∫_{-∞}^{∞} [ |ln |H(jω)|| / (1 + ω²) ] dω = ∞    (5.11-23)

Furthermore, if the value of the integral, I, in Eq. (5.11-23) is finite, then there exists
a phase function θ(ω) such that

H(jω) = |H(jω)| e^{jθ(ω)}    (5.11-24)

is the transfer function of a causal LTI system.


We shall not prove this result because its proof requires mathematics that would
take us too far afield.⁷ The importance of the Paley-Wiener criterion is the fundamental
restriction on the gain of causal LTI systems that it specifies. Before examining
this restriction, we consider the class of LTI systems delineated by Eq.
(5.11-22). This equation can be expressed in terms of h(t) by means of the energy

⁶The Paley-Wiener criterion was first published as Theorem XII in Paley, R. E. A. C., and Wiener, N., The
Fourier Transforms in the Complex Domain, American Mathematical Society Colloquium Publication,
Vol. 12, 1934, Chapter I, Quasi-Analytic Functions.
⁷Although a proof of this result is contained in the work cited in the reference given in footnote 6, an
easier-to-follow proof is contained in Zadeh, L. A., and Desoer, C. A., Linear System Theory, the State-
Space Approach, McGraw-Hill, 1963, pp. 423-428.

theorem, Eq. (5.10-9), from which we have that the class of LTI systems to which the
criterion applies are those for which

E = ∫_{-∞}^{∞} |h(t)|² dt < ∞    (5.11-25)

It can be shown that Eq. (5.11-25) is satisfied if h(t) is a bounded L₁ function. This
means that the condition of Eq. (5.11-22) is satisfied by LTI systems which are stable so
that h(t) is an L₁ function and for which h(t) contains no impulses at all (so that h(t)
is bounded). The theorem states that such systems cannot be causal if I given by Eq.
(5.11-23) is infinite.
To examine the restriction imposed by Eq. (5.11-23), first note that it only
involves the system gain, |H(jω)|. Thus the criterion involves a constraint only
on the system gain. First consider the ideal low-pass filter for which the gain is
given by Eq. (5.10-10). Such a system cannot be causal because |H(jω)| = 0 for
|ω| > ω₁, so that |ln |H(jω)|| is infinite for |ω| > ω₁ and consequently I = ∞ in
Eq. (5.11-23). From this example we see that the gain of a causal filter cannot be
zero over any frequency interval. However, the gain of a causal system can possibly
be zero at discrete frequencies.⁸
Next, consider an LTI system for which the gain is

|H(jω)| = e^{-a|ω|^p},   where a > 0 and p > 0    (5.11-26)

Such a system also cannot be causal for p ≥ 1 because we then obtain from Eq.
(5.11-23)

I = ∫_{-∞}^{∞} [ a|ω|^p / (1 + ω²) ] dω = ∞    (5.11-27)

From this example we observe that, as ω → ∞, the gain of any causal LTI system
must go to zero more slowly than an exponential in ω, that is, more slowly than e^{-aω}.
In practical applications, the Paley-Wiener criterion is not as restrictive as it first
appears. For example, even though an ideal low-pass filter cannot be causal, we can
make the gain very small for |ω| > ω₁. For example, consider an LTI system with
the gain

|H(jω)| = 1 for |ω| ≤ ω₁ and |H(jω)| = ε for |ω| > ω₁    (5.11-28)

in which ε is very small (but not zero!). For this example, the value of I given by Eq.
(5.11-23) is finite so that a causal LTI system with this gain function does exist.

⁸The gain can even be zero at an infinite set of discrete frequencies, ω = ωₙ for n = 1, 2, 3, . . .

We note from Eq. (5.11-23) that, for the ideal gain functions in our examples, I is
infinite due to the behavior of the system gain over frequency intervals where the
gain is very small. However, as in the example above, even though a causal LTI
system does not exist for such ideal gain functions, one does exist for a system with a
gain function that differs slightly from the ideal gain function only in frequency
intervals where the ideal gain function is very small. This often is an acceptable
approximation.
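The following sketch (an illustration, not from the text) compares the growth of the Paley-Wiener integral of Eq. (5.11-23) for two assumed gain functions: the Gaussian-type gain |H(jω)| = e^{-ω²}, for which the criterion fails, and the ε-modified ideal low-pass gain of Eq. (5.11-28) with ω₁ = 1 and ε = 10⁻⁶, for which the integral stays finite.

```python
import numpy as np
from scipy.integrate import quad

def pw_partial(abs_log_gain, W):
    # Partial Paley-Wiener integral of Eq. (5.11-23) over -W < w < W.
    val, _ = quad(lambda w: abs_log_gain(w) / (1.0 + w**2), -W, W, limit=500)
    return val

# |ln|H(jw)|| for the Gaussian-type gain |H| = exp(-w^2): simply w^2.
gauss_log = lambda w: w**2

# |ln|H(jw)|| for the modified ideal low-pass gain of Eq. (5.11-28):
# zero in the passband, |ln(eps)| outside it.
eps, w1 = 1e-6, 1.0
lpf_log = lambda w: 0.0 if abs(w) <= w1 else abs(np.log(eps))

for W in (10.0, 100.0, 1000.0):
    # The Gaussian case grows roughly linearly with W (the integral diverges),
    # while the modified low-pass case approaches a finite limit (about 21.7).
    print(W, pw_partial(gauss_log, W), pw_partial(lpf_log, W))
```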
The relation between causality and prediction was discussed in Section 3.5. Thus
it should not be surprising that there is a close connection between this criterion and
one for the prediction of a waveform. Let the mean-square value of a waveform, f(t),
be finite and let its power density spectrum be Φ(ω). Then it can be shown that the
future of f(t) can be completely determined from its own past with arbitrarily small
error if⁹

∫_{-∞}^{∞} [ |ln Φ(ω)| / (1 + ω²) ] dω = ∞    (5.11-29)

Note that this is Eq. (5.11-23) with |H(jω)| replaced by Φ(ω). From our discussion
above, we note that the future of any waveform for which its power density spectrum
is zero in any frequency interval or for which its power density spectrum goes to zero
faster than an exponential in ω as ω → ∞ can be predicted with arbitrarily small
error.¹⁰ Thus, the power density spectrum of your speech waveform cannot be
nonzero just in the audio band but must be nonzero even in the microwave band
and for all frequencies above.
and for all frequencies above. The power density spectrum will be rather small at
very high frequencies but not zero because it is that small amount of power in the
very high frequencies that makes the prediction error grow with increasing time into
the future at which the prediction is made. If the future of your speech waveform
were predictable with arbitrarily small error, then all that you will say in the future is
predetermined and you would not be able to change it. Thus your free will would
definitely be limited. Ethics and morality then become questionable concepts
because without free will how can we hold a person responsible for what he or
she says or does?
For an LTI system to be causal, we require h(t) = 0 for t < 0; and for the LTI
system to be stable, we require h(t) to be an L₁ function. These are easy constraints
to impose in the design of a causal and stable LTI system in the time domain.
However, the design in the frequency domain is more difficult because the specified
transfer function must satisfy the constraints discussed in this section, which are not
simple to apply. Even though these constraints are not simple to apply, they are
important to understand because they identify fundamental limitations on the trans-
⁹Wiener, N., Extrapolation, Interpolation, and Smoothing of Stationary Time Series, The Technology Press
of MIT and John Wiley & Sons, New York, 1949.
¹⁰Schetzen, M., and Al-Shalchi, A. A., Prediction of Singular Time Functions, M.I.T. Quarterly Progress
Report 67, Oct. 15, 1962, pp. 126-137.

fer function of a causal LTI system. Design and analysis in the frequency domain,
however, can be greatly simplified by working in a complex frequency plane. For
this we develop the bilateral Laplace transform in the next chapter. We shall see that
the Fourier transform is a special case of the bilateral Laplace transform. The use of
the complex frequency plane associated with the bilateral Laplace transform also
will enable us to gain insight into many frequency domain operations.

PROBLEMS

5-1 In Chapter 3 it was shown that any positive pulse with an infinitesimal width
can be used as a unit impulse. This was illustrated with the rectangular pulse
in Section 5.2. As another example, consider the function f(t) = A e^{-a|t|},
where a > 0.
(a) Determine F(jω), the Fourier transform of f(t).
(b) Show that F(0) = ∫_{-∞}^{∞} f(t) dt and determine A so that F(0) = 1.
(c) With the value of A determined in part b, show that the width of f(t)
decreases as a increases and that lim_{a→∞} f(t) = δ(t).
(d) For a given value of a, for what range of ω will 1 ≥ F(jω) ≥ 0.99, so
that f(t) will be a very good approximation of the unit impulse in this
frequency range?

5-2 Determine and sketch f_e(t) and f_o(t), the even and odd parts respectively of the
following functions.
(a) f₁(t) = r(t/T)
(b) f₂(t) = (1 - t/T) r(t/T)
(c) f₃(t) = e^{-t} u(t)
(d) f₄(t) = cos(ωt) u(t)
(e) f₅(t) = sin(ωt) u(t)

5-3 Show that
(a) The product of an even function f_{e1}(t) and an even function f_{e2}(t) is an
even function f_e(t).
(b) The product of an odd function f_{o1}(t) and an odd function f_{o2}(t) is an
even function f_e(t).
(c) The product of an odd function f_{o1}(t) and an even function f_{e1}(t) is an
odd function f_o(t).
Now let f(t) = f_e(t) + f_o(t) and g(t) = g_e(t) + g_o(t). Use the results obtained
above to determine:
(a) The even and odd parts of h(t) = f(t)g(t).
(b) The even and odd parts of h(t) = f(t)/g(t). Suggestion: Multiply the
numerator and denominator by g_e(t) - g_o(t).

5-4 Show that the value of the second term of Eq. (5.4-2a) is zero.

5-5 Let f(t) = e^{-αt} u(t) in which α > 0.
(a) Determine the even part, f_e(t), of f(t).
(b) Determine the odd part, f_o(t), of f(t).
(c) Determine the Fourier transform of f_e(t) and show that it is equal to the
real part of F(jω).
(d) Determine the Fourier transform of f_o(t) and show that it is equal to j
times the imaginary part of F(jω).

5-6 (a) Show that if f(t) is a real function (i.e., its imaginary part is zero), then
ℱ{f(-t)} = F*(jω).
(b) Use the result of part a to obtain the Fourier transform of f(t) = e^{at} u(-t)
and verify your result by direct integration.
(c) Use the result of part a to show that the Fourier transform of the even part
of a real function is a real function of ω and that the Fourier transform of
the odd part of a real function is an imaginary function of ω.

5-7 (a) Obtain the Fourier transform of r(t/2a).
(b) Determine g(t) = r(t/2a) * r(t/2a).
(c) Now determine G(jω) by using the convolution property.
(d) Verify your result of part c by obtaining the Fourier transform of the
triangle as a special case of f(t) shown in Figure 5.8-1 with T = 0 and
using the time-shift property.

5-8 Determine the Fourier transform of the triangle,

For this determination, use the convolution property together with the result
given by Eq. (4.2-23).

5-9 (a) Let f(t) = e^{at} r(t/T). Use the time-differentiation property to determine
F(jω), the Fourier transform of f(t). Verify your result by determining
F(jω) using direct integration, Eq. (5.1-2).
(b) Check your result by letting a = 0 to obtain the Fourier transform of the
rectangle, Eq. (5.2-3).
(c) The transform of e^{at} u(t) is obtained by letting T → ∞. Show that we
obtain the correct result if a < 0 but that the limit doesn't exist if a ≥ 0.

5-10 In Section 4.4, we obtained the Fourier transform pair given by Eqs. (4.4-8)
and (4.4-12). This was obtained by evaluating the integral, which was not
simple. The same result will be obtained in this problem in a simple manner
by using properties of the Fourier transform. For this let f(t) = e^{-a²t²}.
(a) Show that f(t) satisfies the differential equation f'(t) + 2a²t f(t) = 0.
(b) Use the time- and the frequency-differentiation properties to show that
the Fourier transform of f(t), F(jω), satisfies the differential equation

F'(jω) + (1/(2a²)) ω F(jω) = 0

(c) Because the differential equations in parts a and b have the same form,
their solutions must have the same form. Use this observation to show
that F(jω) = C e^{-ω²/4a²}.
(d) Determine the constant, C, by using Eq. (5.1-2) at ω = 0 and Eq. (5.1-3)
at t = 0.

5-11 Use the time-differentiation property to obtain the Fourier transform of
f(t) = sin(ω₀t + φ) r(t/T). Check your result with the special case for
which ω₀ = 0.

5-12 The unit-impulse response of an LTI system is

h(t) = cos(πt) r(2t) = cos(πt) for 0 ≤ t ≤ 1/2 and h(t) = 0 otherwise

The system transfer function, H(jω), is

H(jω) = ∫_{-∞}^{∞} h(t) e^{-jωt} dt

Rather than evaluating this integral, determine the transfer function by using
the Fourier transform properties derived in Section 5.5 in order to further
illustrate how they can be used to simplify calculations.

5-13 As discussed in Section 5.5, we expect every Fourier transform theorem to
have a dual. One property derived in that section is the time-differentiation
property: ℱ{f'(t)} = jωF(jω). Show that the dual of this property, called the
frequency-differentiation property, is ℱ{-jt f(t)} = F'(jω).

5-14 Verify the result given by Eq. (5.6-8) by performing the convolution, Eq.
(5.6-1).

5-15 Let
(a) Determine F₁(jω) by using the time-differentiation property.
(b) Use the convolution property to determine G(jω).
(c) Obtain F₂(jω) by using the result of part b and the time-shift property.
(d) Obtain F₃(jω) by using the result of part c together with the time-shift
property.
(e) Obtain F₃(jω) by differentiating f₃(t) twice and compare your result with
that of part d.

5-16 Express g(t) = e^{-at} sin(ω₀t + φ) r(t/T) in the form

Use the result obtained in Problem 5-9 together with the frequency-shift
theorem to obtain G(jω).

5-17 Adapt the frequency-shift property and use the result of Problem 5-11 to
obtain the Fourier transform of g(t) = e^{-at} sin(ω₀t + φ) r(t/T). (Note that the
Fourier transform of g(t) exists for any value of a because g(t) is nonzero
only over a finite interval and so g(t) is L₁.)

5-18 Obtain the Fourier transform of the function f(t) shown in Fig. 5.8-1 by direct
integration using Eq. (5.1-2) and so verify the result given by Eq. (5.8-6).

5-19 A given pulse with a width of T seconds is transmitted along a transmission


line to a receiver. In order that the pulse be received with an acceptable
distortion, the transmission line must have a minimum bandwidth of W hertz.
(a) What would be the minimum required bandwidth if the pulse width were
reduced to 0.75T seconds?
(b) What minimum pulse width can be transmitted with acceptable distortion
if a transmission line with a bandwidth of only 0.75 W were used?

5-20 The transfer function of a given stable LTI system is

H(jω) = cos(ωT) / (3 + jω)

(a) Determine the system unit-impulse response.
(b) Is the given system causal? Your reasoning must be given.

5-21 For each gain function given below, determine whether it can be the gain of a
causal LTI system.
(a) |H_a(jω)| = e^{-3ω²}
(b) |H_b(jω)| =
(c) |H_c(jω)| = r(|ω|/W)
(d) |H_d(jω)| = 0.1 + r(|ω|/W)

5-22 Use the Parseval relation to determine the value of the integral,
I = ∫_{-∞}^{∞} [1/(a² + ω²)] dω.
CHAPTER 6

THE BILATERAL LAPLACE TRANSFORM

In the last two chapters, we observed some of the advantages of analyzing LTI
systems in the frequency domain. It is the convolution property of the Fourier
transform, Eqs. (5.5-20), that is the basis for many of these advantages because
equations that involve convolution in the time domain become algebraic equations
in the frequency domain. However, a difficulty is that the Fourier transform of
functions that are not L₁ may not exist. For example, if x(t) = u(t), the unit step,
then the Fourier transform integral, Eq. (5.1-2), diverges for ω = 0. Thus we could
not work with such functions in the frequency domain using our development of the
Fourier transform. Also, a problem with which we shall be concerned is the stabilization
of unstable LTI systems. We could not analyze such problems using the
Fourier transform because the transfer function of an unstable system may not exist.

6.1 THE BILATERAL LAPLACE TRANSFORM

To extend the class of functions with which we can work in the frequency domain,
the Fourier transform is generalized. This generalization is called the bilateral
Laplace transform.’ As we shall see, the bilateral Laplace transform is just an
extension of the Fourier transform to a complex frequency plane. This extension
into a complex frequency plane will enable us to analyze the stabilization of unstable
systems. Also the bilateral Laplace transform enables one to develop a great deal of
insight and intuitiveness concerning LTI systems.

¹Pierre Simon de Laplace (1749-1827) was a protégé of D'Alembert. Laplace made notable contributions
to cosmology, propagation of sound, and probability.


Because the bilateral Laplace transform is a generalization of the Fourier transform,
we begin by restating the fundamental relations of Fourier transforms that we
have developed. We have from Section 5.1 that if a function f(t) is absolutely
integrable (i.e., an L₁ function) so that

∫_{-∞}^{∞} |f(t)| dt < ∞    (6.1-1a)

then the Fourier transform of the function f(t),

F(jω) = ∫_{-∞}^{∞} f(t) e^{-jωt} dt    (6.1-1b)

converges so that |F(jω)| < ∞ for all values of ω and also f(t) can be retrieved
from F(jω) using the inverse Fourier transform,

f(t) = (1/2π) ∫_{-∞}^{∞} F(jω) e^{jωt} dω    (6.1-1c)

As we discussed in Section 5.1, Eqs. (6.1-1b) and (6.1-1c) are called a Fourier
transform pair because if one equation is true, then so is the other. That is, if
F(jω) is obtained by Eq. (6.1-1b), then f(t) can be retrieved by Eq. (6.1-1c); and
if f(t) is obtained by Eq. (6.1-1c), then F(jω) can be retrieved by Eq. (6.1-1b). From
the viewpoint of the Fourier transform being a mapping of functions f(t) into functions
F(jω), the result that Eqs. (6.1-1) are a Fourier transform pair is the same as
stating that the mapping is one-to-one.
To extend the class of functions for which a transform exists, we must modify the
condition given by Eq. (6.1-1a), which requires that f(t) be an L₁ function. For this,
we define a function g(t) as

g(t) = f(t) e^{-σt}    (6.1-2a)

in which σ is a real constant which we choose so that g(t) is an L₁ function. That is,
we choose σ so that

I = ∫_{-∞}^{∞} |g(t)| dt = ∫_{-∞}^{∞} |f(t) e^{-σt}| dt < ∞    (6.1-2b)

The exponential, e^{-σt}, is called a weighting function because the values of f(t) are
"weighted" by it to make the integral, Eq. (6.1-2b), converge. We then have in
accordance with Eqs. (6.1-1) that the Fourier transform of g(t) converges so that
|G(jω)| < ∞ for all values of ω. Also, g(t) and G(jω) are a Fourier transform pair
for values of σ for which I in Eq. (6.1-2b) is finite.

In accordance with Eq. (6.1-1b), the Fourier transform of g(t) is

G(jω) = ∫_{-∞}^{∞} g(t) e^{-jωt} dt    (6.1-3a)

Substituting Eq. (6.1-2a), we have for values of σ that satisfy Eq. (6.1-2b)

G(jω) = ∫_{-∞}^{∞} f(t) e^{-σt} e^{-jωt} dt

The reason e^{-σt} was chosen as the weighting function is that e^{-σt} e^{-jωt} = e^{-(σ+jω)t},
so that this equation can be written as

G(jω) = ∫_{-∞}^{∞} f(t) e^{-(σ+jω)t} dt    (6.1-3b)

Now compare this last integral with that in Eq. (6.1-1b). Note that the only difference
is that (jω) in Eq. (6.1-1b) has been replaced by (σ + jω). In Eq. (6.1-1b), the value
of the integral is F(jω). Thus, in accordance with the notation of Eq. (6.1-1b), the
value of the integral in Eq. (6.1-3b) is F(σ + jω). That is,

G(jω) = F(σ + jω)    (6.1-3c)

Now define the complex variable s as

s = σ + jω    (6.1-4a)

We then can express Eq. (6.1-3b) in the form

F(s) = ∫_{-∞}^{∞} f(t) e^{-st} dt    (6.1-4b)

The function, F(s), is called the bilateral Laplace transform of f(t). The adjective
bilateral is used because the integration with respect to t is from -∞ to +∞ so that
it is over both (positive and negative) sides of the t axis. If the time function in Eq.
(6.1-4b) were the impulse response of an LTI system, h(t), then its transform, H(s),
is called the system function of the LTI system. That is,

H(s) = ∫_{-∞}^{∞} h(t) e^{-st} dt    (6.1-4c)

It is important to note that the only values of σ that can be used in Eqs. (6.1-4b) or
(6.1-4c) are those values for which the integral, I, in Eq. (6.1-2b) is finite. We shall
discuss this restriction in the next section. For the moment, however, observe in Eq.
(6.1-2b) that if I < ∞ for σ = 0, we then can let s = 0 + jω in Eq. (6.1-4c) to obtain
the transfer function

H(jω) = ∫_{-∞}^{∞} h(t) e^{-jωt} dt    (6.1-4d)

so that H(jω) = H(s)|_{s=jω}. We thus observe that the transfer function is a special
case of the system function. The system function and its use in system analysis will
be discussed in later chapters. Before continuing, we shall do some illustrative
examples to fix the ideas developed to this point.

Example 1 As our first example, we determine the bilateral Laplace transform of

f_a(t) = A e^{-at} u(t)    (6.1-5)

In this expression, a is a real number that can be either positive or negative. Note that
if a = 0, then f_a(t) = A u(t), so that the step function is a special case of this example.
The first step in determining the bilateral Laplace transform is to determine the
values of σ for which Eq. (6.1-2b) is satisfied. For this we have that

|f_a(t) e^{-σt}| = |A| e^{-(a+σ)t} u(t)    (6.1-6)

In Eq. (6.1-2b), I is the area under |f_a(t)e^{-σt}|. This area is finite only if the exponent,
(a + σ), is greater than zero because, from Eq. (6.1-6), |f_a(t)e^{-σt}| is zero for t < 0
and, for (a + σ) > 0, it is a decaying exponential for t > 0. Observe that the area is
not finite if (a + σ) ≤ 0. We thus have that, in Eq. (6.1-2b), I < ∞ only for those
values of σ for which (a + σ) > 0 or, equivalently, for σ > -a.
The range of values of σ for which Eq. (6.1-2b) is satisfied is called the RAC of
f(t). RAC is an abbreviation for the range of absolute convergence. That is, it is the
range of values of σ for which the integral of the absolute value of f_a(t)e^{-σt},
|f_a(t)e^{-σt}|, converges. This requires the value of the integral, I, to be finite. Note
that it is not necessary to determine the actual value of I in Eq. (6.1-2b) because we
are only interested in determining whether I is finite. Thus the RAC of f_a(t) is
σ > -a.
We now can determine the expression for F_a(s) from Eq. (6.1-4b). For σ > -a,

F_a(s) = ∫_{-∞}^{∞} f_a(t) e^{-st} dt = ∫₀^∞ A e^{-at} e^{-st} dt = A/(s + a),   σ > -a    (6.1-7)

The RAC must always be included in the expression for the Laplace transform of a
function.
Let us go through the details of evaluating the last integral in order to really
observe why we require σ > -a for this example. For this, first note that ∞ in an
integral just denotes a limit. That is, ∫₀^∞ really stands for lim_{T→∞} ∫₀^T. Thus the
integral in Eq. (6.1-7) is, in reality,

lim_{T→∞} ∫₀^T A e^{-(a+s)t} dt    (6.1-8a)

For a limit to exist, the value of the function must approach a definite finite value.
For example, lim_{t→∞} sin(ωt) does not exist because as t increases, sin(ωt) keeps
varying between +1 and -1 and thus does not approach a definite value. Also
lim_{t→∞} e^{-at} sin(ωt) does not exist if a ≤ 0. However, the limit does exist if a > 0
and the value of the limit is zero.
For our case, Eq. (6.1-8a), first note that the limit does not exist if σ = -a
because then a + s = jω and so e^{-(a+s)t} = e^{-jωt}. For this case, the integral in Eq.
(6.1-8a) is

lim_{T→∞} ∫₀^T A e^{-jωt} dt = lim_{T→∞} (A/jω)[1 - e^{-jωT}]    (6.1-8b)

Because e^{-jωT} does not approach a definite value as T → ∞, we have that the limit in
Eq. (6.1-8a) does not exist for the case in which σ = -a. We now examine Eq.
(6.1-8a) for σ ≠ -a. For this case, we obtain from Eq. (6.1-8a)

lim_{T→∞} ∫₀^T A e^{-(a+s)t} dt = lim_{T→∞} [A/(s + a)][1 - e^{-(a+s)T}]    (6.1-8c)

To determine this limit we use the rectangular form of s as given by Eq. (6.1-4a) to
note that

e^{-(a+s)T} = e^{-(a+σ)T} e^{-jωT}

Thus

|e^{-(a+s)T}| = e^{-(a+σ)T} |e^{-jωT}| = e^{-(a+σ)T}

In obtaining this last result, we have used from Appendix A that |e^{-jωT}| = 1. Also,
the magnitude bars about e^{-(a+σ)T} were removed because it is not negative. Thus we
note that the magnitude of e^{-(a+s)T} grows without bound as T → ∞ if (a + σ) < 0,
so that the limit in Eq. (6.1-8c) does not exist for this case. However, if (a + σ) > 0,
then e^{-(a+σ)T} goes to zero as T → ∞. Thus we have the result that the limit in Eq.
(6.1-8c) exists only if (a + σ) > 0 or, equivalently, for σ > -a. This is the RAC of
f_a(t) which we determined above. For σ > -a, we have lim_{T→∞} e^{-(a+σ)T} = 0 so that
the limit in Eq. (6.1-8c) is F_a(s) as given by Eq. (6.1-7). The convergence of the
integral for the Laplace transform of a function is ensured if σ is in the RAC of the
function.
We have obtained the result that the bilateral Laplace transform of f_a(t) given by
Eq. (6.1-5) is F_a(s) given by Eq. (6.1-7). This transform can be represented in the
complex s plane as shown in Fig. 6.1-1 for the case in which a < 0.
F_a(s) in Eq. (6.1-7) is infinite for s = -a. Values of s at which F_a(s) is infinite are
called poles of the Laplace transform and are denoted by an × as shown in Fig. 6.1-1.
The RAC (region of absolute convergence in the s plane) is σ > -a, which is
indicated in Fig. 6.1-1 by the shaded region. Note that the constraint σ > -a is
independent of ω so that, for example, σ = -a is the vertical line s = -a + jω for
all values of ω in the s plane. Thus, in the s plane, the RAC is all of the s plane to the
right of the vertical line s = -a + jω.
Note that the pole at s = -a does not lie in the RAC. In fact, the RAC of any
function, f(t), cannot include any poles of its transform, F(s), because F(s) is infinite
at a pole and F(s) cannot be infinite within the RAC of f(t). The reason is that, for
values of σ within the RAC of f(t), we have from Eq. (6.1-3c) that F(s) = G(jω)
and |G(jω)| < ∞ because g(t) is an L₁ function in accordance with Eq. (6.1-2b).
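A numerical spot-check of this example is sketched below (not part of the text): the bilateral Laplace transform integral is evaluated directly for assumed values A = 2, a = 1 at one point s inside the RAC σ > -1 and compared with A/(s + a).

```python
import numpy as np
from scipy.integrate import quad

A, a = 2.0, 1.0              # assumed values; the pole is at s = -1
s = 0.5 + 2.0j               # a point with sigma = 0.5, inside the RAC sigma > -1

def laplace_numeric(f, s, t_max=200.0):
    # Bilateral Laplace transform by direct integration of f(t)*exp(-s*t);
    # real and imaginary parts are integrated separately because quad is real-valued.
    re, _ = quad(lambda t: np.real(f(t) * np.exp(-s * t)), -t_max, t_max, limit=400)
    im, _ = quad(lambda t: np.imag(f(t) * np.exp(-s * t)), -t_max, t_max, limit=400)
    return re + 1j * im

f_a = lambda t: A * np.exp(-a * t) if t >= 0 else 0.0

print(laplace_numeric(f_a, s))   # numerically close to ...
print(A / (s + a))               # ... the closed form A/(s + a) from Eq. (6.1-7)
```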

Example 2 As our second example, we determine the bilateral Laplace transform
of

f_b(t) = B e^{-bt} u(-t)    (6.1-10)

In this expression, b is a real number that can be either positive or negative. Note that
this function is zero for t > 0 because, in accordance with our definition of the unit-step
function given by Eq. (1.4-6), its value is one when the argument is positive and
zero when the argument is negative. Thus,

u(-t) = 1 for t < 0 and u(-t) = 0 for t > 0    (6.1-11)

Consequently,

f_b(t) = B e^{-bt} for t < 0 and f_b(t) = 0 for t > 0    (6.1-12)

Note that if b = 0, then f_b(t) = B u(-t).

Fig. 6.1-1 The s-plane representation of Eq. (6.1-7) for a < 0.



The first step in determining the bilateral Laplace transform is to determine the
values of σ for which Eq. (6.1-2b) is satisfied. For this we have that
|f_b(t)e^{-σt}| = |B| e^{-(σ+b)t} u(-t). In Eq. (6.1-2b), I is the area under |f_b(t)e^{-σt}|. This
area is finite only if the exponent σ + b < 0 because, from Eq. (6.1-12), |f_b(t)e^{-σt}| is
zero for t > 0 and, for t < 0, |f_b(t)e^{-σt}| decays exponentially to zero as t → -∞ if
σ + b < 0. Observe that the area is not finite if σ + b ≥ 0. Thus we have that, in Eq.
(6.1-2b), I < ∞ only for those values of σ for which σ + b < 0 or, equivalently, for
σ < -b. Thus the RAC (the region of absolute convergence) of f_b(t) is σ < -b. As
in Example 1, note that it is not necessary to determine the actual value of I in Eq.
(6.1-2b) because we are only interested in determining whether I is finite.
We now can determine the expression for F_b(s) from Eq. (6.1-4b). For σ < -b,

F_b(s) = ∫_{-∞}^{∞} f_b(t) e^{-st} dt = ∫_{-∞}^{0} B e^{-bt} e^{-st} dt = -B/(s + b),   σ < -b    (6.1-13)

To fully understand the evaluation of the integral in Eq. (6.1-13) and to really
understand why the RAC of f_b(t) is σ < -b, you should go through the details of
evaluating the last integral in the same manner as in Example 1. This transform can
be represented in the complex s plane as shown in Fig. 6.1-2 for the case in which
b < 0. As in the first example, note that the pole at s = -b does not lie in the RAC,
which is indicated by the shading.

Example 3 To further illustrate the determination of the bilateral Laplace transform
of a function and to illustrate some techniques, we shall determine the

Fig. 6.1-2 The s-plane representation of Eq. (6.1-13) for b < 0.

transform of

f_c(t) = B e^{-αt} cos(ω₀t + φ) u(t)    (6.1-14)

where α is a real number that can be positive or negative. Figure 6.1-3 is a graph of
this function for B = 1, α = 0.4/s, ω₀ = 8 rad/s, and φ = 0. In order to emphasize
that f_c(t) = 0 for t < 0, the graph is plotted starting at t = -2.
As in our previous two examples, we first must determine the RAC of f_c(t). For
this, in accordance with Eq. (6.1-2b), we must determine those values of σ for which

I = ∫_{-∞}^{∞} |f_c(t) e^{-σt}| dt < ∞    (6.1-15)

Now,

|f_c(t) e^{-σt}| = |B e^{-(α+σ)t} cos(ω₀t + φ)| u(t)    (6.1-16)

A graph of this function for our example is shown in Fig. 6.1-4. The maxima of the
humps have been connected by a line. Clearly, the line is an exponential with the
equation B e^{-(α+σ)t} u(t) because the maximum value of the cosine is one. The value of
the integral, I, in Eq. (6.1-15) is the sum of the areas of the humps. However, this
area clearly is less than the area under the exponential curve, which is finite if
(α + σ) > 0. Consequently, I is finite if σ > -α. Now, if (α + σ) = 0, then the
amplitude of each hump is the same so that I is infinite because there are an infinite
number of humps, each with the same area. Consequently, I is infinite for σ = -α.
Finally, if (α + σ) < 0, then the hump amplitudes increase exponentially so that I is
clearly infinite for σ < -α. In summary, we have shown that the integral, I, in Eq.
(6.1-15) is finite only if σ > -α. Thus the RAC for f_c(t) is σ > -α. Note that it was
not necessary to evaluate the integral, I, in Eq. (6.1-15), because our only interest is
whether it is finite. This determination often can be made by choosing an appropriate
upper or lower bound as we did in this example. Note that the RAC for our example is
independent of the frequency ω₀ and also does not include σ = -α.
We now can use Eq. (6.1-4b) to determine the expression for F_c(s) for σ > -α.

F_c(s) = ∫_{-∞}^{∞} f_c(t) e^{-st} dt = ∫₀^∞ B e^{-αt} cos(ω₀t + φ) e^{-st} dt    (6.1-17)

This integral is not easy to evaluate in its present form. The exponential, however, is
one of the functions that is easy to integrate. The integrand of our present integral

Fig. 6.1-3 Graph of Eq. (6.1-14).

can be put into exponential form by expressing the cosine as the sum of exponentials
[see Eq. (A-14)]:

cos(ω₀t + φ) = (1/2)[e^{j(ω₀t+φ)} + e^{-j(ω₀t+φ)}]

With this and using the exponential property, e^a e^b = e^{(a+b)}, the integrand in Eq.
(6.1-17) can be expressed in the exponential form

B e^{-αt} cos(ω₀t + φ) e^{-st} = (B/2) e^{jφ} e^{-(s+α-jω₀)t} + (B/2) e^{-jφ} e^{-(s+α+jω₀)t}    (6.1-18)

Fig. 6.1-4 Graph of Eq. (6.1-16).

so that Eq. (6.1-17) is

F_c(s) = (B/2) e^{jφ} ∫₀^∞ e^{-(s+α-jω₀)t} dt + (B/2) e^{-jφ} ∫₀^∞ e^{-(s+α+jω₀)t} dt    (6.1-19)

To fully understand the evaluation of the integrals in Eq. (6.1-19) and to really
understand why the RAC of f_c(t) is σ > -α, you should go through the details of
evaluating the integrals in the same manner as in Example 1.
The expression for F_c(s) in Eq. (6.1-19) can be put into a better form. For this,
first note that if f(t) is a real function, then, for s = σ + j0, the integrand in Eq.
(6.1-4b) is f(t)e^{-σt}, which is a real function of t, so that the value of the integral,
which is F(s) with s = σ + j0, must then be real. That is, if f(t) is a real function of
t, then F(σ) must be a real function of σ. Because f_c(t) is a real function of t in our
present problem, F_c(s) with s = σ + j0 must be a real function so that we must be
able to eliminate the j's in Eq. (6.1-19). To obtain the desired expression, we add the
two terms and use the exponential expressions for the sine and cosine functions
[Eqs. (A-14) and (A-15)]:

F_c(s) = (B/2) [(s + α + jω₀) e^{jφ} + (s + α - jω₀) e^{-jφ}] / [(s + α)² + ω₀²]
       = (B/2) [(s + α)(e^{jφ} + e^{-jφ}) + jω₀(e^{jφ} - e^{-jφ})] / [(s + α)² + ω₀²]    (6.1-20)
       = B [(s + α) cos(φ) - ω₀ sin(φ)] / [(s + α)² + ω₀²],   σ > -α

As special cases of our result, note that for φ = 0 we have

f_{c1}(t) = B e^{-αt} cos(ω₀t) u(t)    (6.1-21a)

for which

F_{c1}(s) = B (s + α) / [(s + α)² + ω₀²],   σ > -α    (6.1-21b)

and for φ = -π/2 rad we have

f_{c2}(t) = B e^{-αt} sin(ω₀t) u(t)    (6.1-22a)

for which

F_{c2}(s) = B ω₀ / [(s + α)² + ω₀²],   σ > -α    (6.1-22b)

For the general case given by Eq. (6.1-20), there are two poles: One is at
s = -α + jω₀ and the other is at s = -α - jω₀. Also there is one zero that, for
cos(φ) ≠ 0, is located at s = -α + ω₀ tan(φ). Note that the pole locations are
conjugates of each other. This is a consequence of the fact that F(σ) is a real function
if f(t) is a real function. In fact, generally, all poles and zeros that are not located on
the σ axis must occur in conjugate pairs if f(t) is a real function of t. The reason for
this will become clearer when we discuss the s plane in more detail in Chapter 9.
The transform, F_c(s), can be represented in the complex s plane. The case in
which α > 0, φ = π/4 rad, and ω₀ > α is shown in Fig. 6.1-5. As in our previous
examples, note that the poles do not lie in the RAC, which is indicated by the
shading. For the case illustrated, also note that the zero lies in the RAC. A zero
can lie anywhere; there is no restriction on its location. The only restriction is that a
pole cannot lie in the RAC.
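As an informal check of Eq. (6.1-20) (not part of the text), the sketch below evaluates the defining integral numerically for the assumed parameter values B = 1, α = 0.4, ω₀ = 8, φ = 0 used in the plotted example, at one point s inside the RAC σ > -α.

```python
import numpy as np
from scipy.integrate import quad

B, alpha, w0, phi = 1.0, 0.4, 8.0, 0.0   # values used in the text's plotted example
s = 1.0 + 3.0j                           # sigma = 1 > -alpha, so s lies in the RAC

def f_c(t):
    return B * np.exp(-alpha * t) * np.cos(w0 * t + phi) if t >= 0 else 0.0

# Direct numerical evaluation of Eq. (6.1-17): integrate real and imaginary parts.
re, _ = quad(lambda t: np.real(f_c(t) * np.exp(-s * t)), 0.0, 100.0, limit=800)
im, _ = quad(lambda t: np.imag(f_c(t) * np.exp(-s * t)), 0.0, 100.0, limit=800)
F_numeric = re + 1j * im

# Closed form from Eq. (6.1-20).
F_closed = B * ((s + alpha) * np.cos(phi) - w0 * np.sin(phi)) / ((s + alpha)**2 + w0**2)

print(F_numeric)
print(F_closed)   # the two values should agree to several decimal places
```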

Fig. 6.1-5 The s-plane representation of F_c(s) for the case α > 0, φ = π/4, and ω₀ > α.

Example 4 As our last example, we shall determine the bilateral Laplace transform
of

f_d(t) = e^{-bt} for t ≤ 0 and f_d(t) = e^{-at} for t ≥ 0    (6.1-23)

where a and b are real numbers. Note that we can express f_d(t) in terms of the
functions f_a(t) and f_b(t) of the first two examples in this section as

f_d(t) = f_a(t) + f_b(t)    (6.1-24)

for the case in which A = 1 and B = 1. In accordance with our previous examples,
we must first determine the RAC of f_d(t), which is the set of values of σ for which
I < ∞, in which

I = ∫_{-∞}^{∞} |f_d(t) e^{-σt}| dt    (6.1-25a)

By substituting we have

I = ∫_{-∞}^{0} |e^{-bt} e^{-σt}| dt + ∫_{0}^{∞} |e^{-at} e^{-σt}| dt    (6.1-25b)

From our discussion of the RAC of f_a(t) and of f_b(t), the value of the first integral in
Eq. (6.1-25b) is finite only for σ < -b and the value of the second integral is finite
only for σ > -a. For I to be finite, we require both integrals to be finite so that we
require σ < -b and also σ > -a. Combining these two inequalities, we require

-a < σ < -b    (6.1-26)

This is the RAC of f_d(t). Note that this requires b < a. This means that the bilateral
Laplace transform of f_d(t) does not exist if b ≥ a. You should make some drawings
of f_d(t) and |f_d(t)e^{-σt}| for cases in which b < a and in which b ≥ a and note that the
area under |f_d(t)e^{-σt}| can be made finite only for the case in which b < a by
choosing a value of σ between -a and -b.
For the case in which Eq. (6.1-26) is satisfied, we then have

F_d(s) = ∫_{-∞}^{∞} f_d(t) e^{-st} dt = ∫_{-∞}^{0} e^{-bt} e^{-st} dt + ∫_{0}^{∞} e^{-at} e^{-st} dt
       = -1/(s + b) + 1/(s + a) = (b - a)/[(s + a)(s + b)],   -a < σ < -b    (6.1-27)

Figure 6.1-6 is the s-plane representation of F_d(s) for the case in which a > 0 and
b < 0 with the RAC indicated by the shaded region. Note that the RAC is the region
between the two poles but does not contain the poles.
The examples given in this section illustrate the basic direct techniques for
determining the bilateral Laplace transform of a function. The basic techniques
shown, however, can be somewhat tedious and lend no real insightful understanding
of the transform. For this, we shall first examine some properties of the RAC and
then some properties of the transform. These results then will be used to gain a better
understanding of the bilateral Laplace transform and to simplify its determination.

Fig. 6.1-6 The s-plane representation of F_d(s) for the case in which a > 0 and b < 0.

6.2 SOME PROPERTIES OF THE RAC

In the last section, the RAC of any function, f(t), was defined to be those values of σ
for which

I = ∫_{-∞}^{∞} |f(t) e^{-σt}| dt < ∞    (6.2-1)

and the determination of the RAC of some functions was illustrated. Their determination
was seen to require some effort. However, there are some properties of the
RAC which simplify its determination. Furthermore, some important properties of
the function, f(t), can be determined directly from its RAC. We shall discuss some
of these properties in this section because they will be important in our later
discussions.

6.2A The Time-Shifting Property of the RAC


The time-shifting property of the RAC is that the RAC of a function is unaffected by
a time shift of the function. That is, for any value of t₀, the RACs of f(t) and f(t - t₀)
are the same. To show this, we have that the RAC of f(t - t₀) is those values of σ
for which

I = ∫_{-∞}^{∞} |f(t - t₀) e^{-σt}| dt < ∞    (6.2-2a)

We now let τ = t - t₀. With this change of variable in the integral, we obtain

I = ∫_{-∞}^{∞} |f(τ) e^{-σ(τ + t₀)}| dτ    (6.2-2b)

Now e^{-σ(τ+t₀)} = e^{-στ} e^{-σt₀}. Observe that 0 < e^{-σt₀} < ∞. Also, because the integration
is with respect to τ, the factor e^{-σt₀} is a constant during the integration. Thus we
can express the integral in Eq. (6.2-2b) as

I = e^{-σt₀} ∫_{-∞}^{∞} |f(τ) e^{-στ}| dτ    (6.2-2c)

for any value of σ in the RAC of f(t). Because e^{-σt₀} < ∞, we conclude that

∫_{-∞}^{∞} |f(τ) e^{-στ}| dτ < ∞    (6.2-3)

for any value of σ in the RAC of f(t - t₀). But values of σ for which Eq. (6.2-3) is
satisfied are values of σ in the RAC of f(t). Thus we have shown that the RACs of
f(t) and f(t - t₀) are identical. Physically, this property states that the RAC of a
function does not depend on the point we call t = 0; it depends only on the shape
of the function.

6.2B The Interval Property of the RAC


The interval property is that the RAC of any function always will be of the form
σ₁ < σ < σ₂. That is, the RAC of any function always will be an interval of the σ axis
and not a disjointed set of intervals. For example, it is not possible for the RAC of some
function to be -2 < σ < 1 and also 3 < σ < 8 without 1 ≤ σ ≤ 3 also being in
the RAC of the function. To show this, we show that if σ₁ and σ₂ are values of σ
which lie in the RAC of a function f(t), then any value of σ which lies between σ₁
and σ₂ also lies in the RAC of f(t). For this, let σ₀ be a value of σ which lies between
σ₁ and σ₂ so that σ₁ < σ₀ < σ₂. To show that σ₀ lies in the RAC, we now must
show, in accordance with Eq. (6.1-2b), that

∫_{-∞}^{∞} |f(t) e^{-σ₀t}| dt < ∞    (6.2-4a)

First express this integral as one over negative values of t plus one over positive
values of t as

∫_{-∞}^{∞} |f(t) e^{-σ₀t}| dt = ∫_{-∞}^{0} |f(t) e^{-σ₀t}| dt + ∫_{0}^{∞} |f(t) e^{-σ₀t}| dt = I₁ + I₂    (6.2-4b)

Now, because σ₀ < σ₂, note that e^{-σ₀t} < e^{-σ₂t} for negative values of t so that
|f(t)e^{-σ₀t}| < |f(t)e^{-σ₂t}| for negative values of t. Thus,

I₁ = ∫_{-∞}^{0} |f(t) e^{-σ₀t}| dt < ∫_{-∞}^{0} |f(t) e^{-σ₂t}| dt < ∞    (6.2-5a)

I₁ < ∞ because

∫_{-∞}^{0} |f(t) e^{-σ₂t}| dt ≤ ∫_{-∞}^{∞} |f(t) e^{-σ₂t}| dt < ∞    (6.2-5b)

We obtain the first inequality in Eq. (6.2-5b) by noting that the second integral
equals the first integral plus the area under the curve for positive values of t. The
second integral is finite because σ₂ lies in the RAC of f(t). Thus we have shown that
I₁ < ∞. We now show that I₂ < ∞ by first noting that e^{-σ₀t} < e^{-σ₁t} for positive
values of t because σ₀ > σ₁. Consequently, |f(t)e^{-σ₀t}| < |f(t)e^{-σ₁t}| for positive
values of t. Thus,

I₂ = ∫_{0}^{∞} |f(t) e^{-σ₀t}| dt < ∫_{0}^{∞} |f(t) e^{-σ₁t}| dt < ∞    (6.2-6a)

I₂ < ∞ because

∫_{0}^{∞} |f(t) e^{-σ₁t}| dt ≤ ∫_{-∞}^{∞} |f(t) e^{-σ₁t}| dt < ∞    (6.2-6b)

We obtain the first inequality in Eq. (6.2-6b) by noting that the second integral
equals the first integral plus the area under the curve for negative values of t. The
second integral is finite because σ₁ lies in the RAC of f(t). Thus we have shown that
I₂ < ∞. With the use of Eq. (6.2-4b), we have shown the correctness of Eq. (6.2-4a)
because we have shown that I₁ < ∞ and I₂ < ∞. Thus σ = σ₀ also lies in the RAC
of f(t).
Because a pole cannot lie in the RAC, an immediate consequence of the interval
property is that the RAC of a function cannot be on both sides of a pole. For
example, consider the RAC of f_d(t) in Section 6.1. Its RAC is between the two
poles at -a and -b. Its RAC could not be, say, -a < σ < -b and also σ < -a because
there is a pole at σ = -a which, as we discussed, cannot lie in the RAC. Thus we
note that the RAC of a function, f(t), is an interval between the poles of its transform,
F(s).

6.2C The RAC of Functions Which Are Zero for t < 0²


We have shown that an LTI system is causal if and only if its unit-impulse response,
h(t), is zero for t < 0. For this reason, properties of functions that are zero for t < 0
are very important in system theory. One important property of such functions is that
if σ₀ is a value of σ which lies in the RAC of f(t), then all values of σ greater than σ₀
also lie in the RAC of f(t). Because we have just shown that the RAC of any
function is an interval between the poles of its transform, this property implies
that the RAC of a function, f(t), which is zero for t < 0 is to the right of the
most right-hand pole of its transform, F(s). That is, all the poles of F(s) lie to the
left of the RAC of f(t). The transforms of f_a(t) and f_c(t) determined in Section 6.1 are
examples of this property. To show this property, we have that for σ = σ₀

∫_{-∞}^{∞} |f(t) e^{-σ₀t}| dt = ∫_{0}^{∞} |f(t) e^{-σ₀t}| dt < ∞    (6.2-7a)

The two integrals in Eq. (6.2-7a) are equal because f(t) = 0 for t < 0 so that the
integral over negative values of t is zero. The value of the integral is finite because σ₀
lies in the RAC of f(t). Now let σ₁ > σ₀. We then note that, for positive values of t,
e^{-σ₁t} < e^{-σ₀t} so that |f(t)e^{-σ₁t}| < |f(t)e^{-σ₀t}|. Consequently,

∫_{-∞}^{∞} |f(t) e^{-σ₁t}| dt = ∫_{0}^{∞} |f(t) e^{-σ₁t}| dt < ∫_{0}^{∞} |f(t) e^{-σ₀t}| dt < ∞

so that σ₁ lies in the RAC. Because σ₁ was any value of σ larger than σ₀, we
conclude that the RAC must include σ ≥ σ₀.
Note that we have only shown that the RAC is to the right of all the poles if
f(t) = 0 for t < 0. We cannot conclude that the function, f(t), is necessarily zero for
t < 0 if the RAC is to the right of all the poles. As an example, consider f_a(t + 2).
This is the function f_a(t) discussed in Section 6.1 advanced by 2 s. We then have that
f_a(t + 2) ≠ 0 for -2 ≤ t ≤ 0. However, by the time-shift property of the RAC, the
RAC of f_a(t + 2) is the same as the RAC of f_a(t), which is σ > -a.

6.2D The L₁ Property of the RAC


One property of f(t) that can be determined directly from its RAC is whether f(t) is
an L₁ function. This property is of importance because we have shown in Section 3.6
that it is a necessary and sufficient condition for f(t) to be able to be the unit-impulse

²A function that is zero for t < 0 is sometimes called a causal function in the literature because such a
function can be the impulse response of a causal LTI system. To avoid confusion, I do not use that
terminology because causality means that there is a causal relation as we discussed in Section 3.5. A
function, however, is just a mapping as discussed in Section 1.4 and so there is no causality concept
involved.

response of a BIBO-stable LTI system. From Eq. (6.2-1), the RAC of f(t) is those
values of σ for which

∫_{-∞}^{∞} |f(t)e^{-σt}| dt < ∞     (6.2-8)

If f(t) is an L₁ function so that

∫_{-∞}^{∞} |f(t)| dt < ∞     (6.2-9)

then the RAC includes σ = 0 because Eq. (6.2-9) is Eq. (6.2-8) with σ = 0. From
Eq. (6.2-8), we further note that if σ = 0 lies in the RAC, then f(t) is an L₁ function.
That is, f(t) is an L₁ function if and only if σ = 0 lies in the RAC of f(t). In the s
plane, σ = 0 is the ω axis because it is the vertical line s = 0 + jω. We thus have the
result that f(t) is an L₁ function if and only if, in the s plane, the ω axis lies in the
RAC of f(t). Consequently, because an LTI system is stable if and only if h(t) is an L₁
function, we have the important result that an LTI system is stable if and only if the ω
axis lies in the RAC of the system function, H(s), given by Eq. (6.1-4c).
From Eq. (6.1-5), it can be seen that the function f_a(t) is not an L₁ function for
a ≤ 0. This same result can be obtained directly from Eq. (6.1-7), from which we
have that the RAC of f_a(t) is σ > −a, so that the RAC of f_a(t) does not include σ = 0
if a ≤ 0. Note from the s-plane representation of F_a(s) shown in Fig. 6.1-1 that the
RAC does not include the ω axis for the case a < 0. If a > 0, then f_a(t) is an L₁
function, and we note from the s-plane representation shown in Fig. 6.1-1 that the ω
axis is included in the RAC in accordance with the L₁ property of the RAC.
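A quick numerical illustration of this L₁ test is sketched below. The sketch is not part of the text; it assumes the sympy library is available and uses f_a(t) with A = 1.

    # A minimal sketch of the L1 test, assuming sympy is available.
    # For f(t) = exp(-a t) u(t) with a > 0, f(t) = 0 for t < 0, so the area under |f(t)|
    # reduces to an integral from 0 to infinity; it equals 1/a, which is finite.
    # Hence sigma = 0 (the omega axis) lies in the RAC, and an LTI system with this
    # unit-impulse response is BIBO stable.
    import sympy as sp

    t = sp.symbols('t', nonnegative=True)
    a = sp.symbols('a', positive=True)

    area = sp.integrate(sp.exp(-a*t), (t, 0, sp.oo))
    print(area)   # 1/a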

6.3 SOME PROPERTIES OF THE BILATERAL LAPLACE TRANSFORM

Similar to the properties of the Fourier transform discussed in Chapter 5, the various
properties of the Laplace transform not only enable us to obtain the Laplace transform
of a function more easily, but also enable us to obtain a deeper understanding
of the Laplace transform and its applications. We expect the Laplace transform
properties to be similar in form to their Fourier transform counterparts because, as
discussed in Section 6.1, the Fourier transform is a special case of the Laplace
transform.

6.3A The Linearity Property


Let the Laplace transform of f₁(t) be F₁(s) with the RAC σ_a < σ < σ_b, let the
Laplace transform of f₂(t) be F₂(s) with the RAC σ_c < σ < σ_d, and let
f(t) = C₁f₁(t) + C₂f₂(t). Then, if the RACs of f₁(t) and f₂(t) overlap, the Laplace
transform of f(t) is F(s) = C₁F₁(s) + C₂F₂(s), with the overlap lying in the RAC of
f(t).

Before proving this property, consider f_d(t), whose transform we determined in
Section 6.1. We were able to express f_d(t) as the sum of f_a(t) and f_b(t) as given by Eq.
(6.1-24). The RAC of f_a(t) is σ > −a as shown in Fig. 6.3-1a, and the RAC of f_b(t)
is σ < −b as shown in Fig. 6.3-1b. The case illustrated is for b < a. Note that the
overlap is −a < σ < −b, which is the RAC of f_d(t) we determined. We obtained
F_d(s) = F₁(s) + F₂(s) in accordance with the linearity property stated above. Also
note that there is no overlap of the two RACs if b ≥ a, and the Laplace transform of
f_d(t) does not then exist.
To show the linearity property, we use the inequality obtained in Appendix A:

|z₁ + z₂| ≤ |z₁| + |z₂|     (6.3-1)

where z₁ and z₂ are complex quantities. With the use of this inequality, we have

|f(t)e^{-σt}| = |C₁f₁(t)e^{-σt} + C₂f₂(t)e^{-σt}| ≤ |C₁| |f₁(t)e^{-σt}| + |C₂| |f₂(t)e^{-σt}|     (6.3-2)

so that

I = ∫_{-∞}^{∞} |f(t)e^{-σt}| dt ≤ |C₁| ∫_{-∞}^{∞} |f₁(t)e^{-σt}| dt + |C₂| ∫_{-∞}^{∞} |f₂(t)e^{-σt}| dt     (6.3-3)

The RAC consists of those values of σ for which I < ∞. Now if the RACs of f₁(t) and f₂(t)
overlap, then for values of σ in the overlap, both integrals in Eq. (6.3-3) are finite, so
that I < ∞. The overlap is thus contained in the RAC of f(t). For values of σ in the
overlap, we then have
F(s) = ∫_{-∞}^{∞} f(t)e^{-st} dt = ∫_{-∞}^{∞} [C₁f₁(t) + C₂f₂(t)]e^{-st} dt

     = C₁ ∫_{-∞}^{∞} f₁(t)e^{-st} dt + C₂ ∫_{-∞}^{∞} f₂(t)e^{-st} dt     (6.3-4)

     = C₁F₁(s) + C₂F₂(s)   for σ in the overlap of the RACs of f₁(t) and f₂(t)

Note that the condition for the linearity property to hold is that the RACs of the
two functions being summed overlap. This is physically reasonable because the only
values of s that can be used in F₁(s) are those in the RAC of f₁(t) and, similarly,

Fig. 6.3-1 (a) RAC of f_a(t); (b) RAC of f_b(t).



the only values of s that can be used in F₂(s) are those in the RAC of f₂(t). Because
the same value of s must be used in each of the three functions in Eq. (6.3-4), the
value of s used must lie in the RAC of each function, which means that it must lie in
the overlap of the RACs. If the RACs of f₁(t) and f₂(t) do not overlap, then Eq.
(6.3-4) is not valid. However, in such cases, it still is theoretically possible for the
Laplace transform of f(t) to exist. For example, let f₁(t) = 1 = u(t) + u(−t) and
f₂(t) = −u(−t). The Laplace transform of f₁(t) for this example does not exist
because it has no RAC (observe from Section 6.1 that f₁(t) = f_d(t) with a = 0 and
b = 0). The Laplace transform of f₂(t) does exist, and its RAC is σ < 0 (note from
Section 6.1 that f₂(t) = f_b(t) with b = 0). However, f(t) = f₁(t) + f₂(t) = u(t). Its
Laplace transform does exist (note from Section 6.1 that u(t) = f_a(t) with a = 0),
and its RAC is σ > 0. The linearity property only concerns functions that have
overlapping RACs.
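A small symbolic check of the linearity property is sketched below. It is not part of the text; sympy is assumed to be available, and sympy's laplace_transform computes the one-sided transform, which agrees with the bilateral transform here because both terms are zero for t < 0.

    # Linearity check with two right-sided exponentials.
    import sympy as sp

    t, s = sp.symbols('t s', positive=True)
    f1 = sp.exp(-t)        # f1(t) = e^{-t} u(t)
    f2 = sp.exp(-2*t)      # f2(t) = e^{-2t} u(t)

    F1 = sp.laplace_transform(f1, t, s, noconds=True)            # 1/(s + 1)
    F2 = sp.laplace_transform(f2, t, s, noconds=True)            # 1/(s + 2)
    F  = sp.laplace_transform(3*f1 + 5*f2, t, s, noconds=True)   # transform of the sum

    print(sp.simplify(F - (3*F1 + 5*F2)))   # 0, as the linearity property requires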

6.3B The Scaling Property


Let the Laplace transform of f₁(t) be F₁(s) with the RAC σ_a < σ < σ_b. Then the
Laplace transform of

f(t) = f₁(ct)     (6.3-5a)

is

F(s) = (1/|c|)F₁(s/c),   σ_a < σ/c < σ_b     (6.3-5b)

To obtain this property, we first must determine the RAC of f(t). In accordance with
Eq. (6.1-2b), we must determine the values of σ for which I < ∞, in which

I = ∫_{-∞}^{∞} |f(t)e^{-σt}| dt = ∫_{-∞}^{∞} |f₁(ct)e^{-σt}| dt     (6.3-6a)

To express the second integral in the form of the RAC of f₁(t), we make the change
of variable τ = ct in the second integral. Then, in a manner similar to that in Section
5.5, we obtain

I = (1/|c|) ∫_{-∞}^{∞} |f₁(τ)e^{-(σ/c)τ}| dτ     (6.3-6b)

Except for the constant 1/|c|, we recognize the integral as that for the RAC of f₁(t)
but with σ replaced with σ/c. Because the RAC of f₁(t) is σ_a < σ < σ_b, we conclude
that I in Eq. (6.3-6b) will be finite for σ_a < σ/c < σ_b. This then is the RAC of f(t) as
given in Eq. (6.3-5b). We now can determine F(s) for σ in this range.
F(s) = ∫_{-∞}^{∞} f(t)e^{-st} dt = ∫_{-∞}^{∞} f₁(ct)e^{-st} dt     (6.3-7)

To express the second integral in the form of the Laplace transform of f₁(t), we make
the change of variable τ = ct in the second integral. Then, in a manner similar to the
proof of the Fourier transform scaling property in Section 5.5, we obtain

F(s) = (1/|c|) ∫_{-∞}^{∞} f₁(τ)e^{-(s/c)τ} dτ     (6.3-8)

The integral is recognized as the Laplace transform of f₁(t) with s replaced with s/c,
so that we have

F(s) = (1/|c|)F₁(s/c)     (6.3-9)

which is Eq. (6.3-5b).


A special case of interest is that for which c = −1. For this special case, we
obtain that the Laplace transform of

f(t) = f₁(−t)     (6.3-10a)

is

F(s) = F₁(−s)     (6.3-10b)

with the RAC σ_a < −σ < σ_b or, equivalently, −σ_b < σ < −σ_a.
As an example of this result, let f₁(t) = f_a(t), where f_a(t) is defined in Section 6.1.
Then

f(t) = f_a(−t) = e^{at}u(−t)     (6.3-11a)

With the use of the scaling property, we then have

F(s) = 1/(−s + a) = −1/(s − a),   σ < a     (6.3-11b)

Observe from Section 6.1 that f(t) = f_b(t) with b = −a. With this substitution in Eq.
(6.3-11b), note that we do obtain the transform for f_b(t).
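The scaling property can also be checked numerically for a positive scale factor, as sketched below. This sketch is not from the text; it assumes sympy is available (the c = −1 case needs the bilateral transform, which sympy's one-sided routine does not compute).

    # With f1(t) = e^{-t} u(t) and c = 3, f(t) = f1(3t) = e^{-3t} u(t), and Eq. (6.3-5b)
    # predicts F(s) = (1/3) F1(s/3) = 1/(s + 3).
    import sympy as sp

    t, s = sp.symbols('t s', positive=True)
    F1 = sp.laplace_transform(sp.exp(-t), t, s, noconds=True)      # 1/(s + 1)
    F  = sp.laplace_transform(sp.exp(-3*t), t, s, noconds=True)    # transform of f1(3t)

    print(sp.simplify(F - sp.Rational(1, 3)*F1.subs(s, s/3)))      # 0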

6.3C The Time-Shift Property


Let the Laplace transform of f₁(t) be F₁(s) with the RAC σ_a < σ < σ_b. Then the
Laplace transform of

f(t) = f₁(t − t₀)     (6.3-12a)

is

F(s) = F₁(s)e^{-st₀},   σ_a < σ < σ_b     (6.3-12b)

The RAC of f(t) is the same as that of f₁(t) as a consequence of the time-shift
property of the RAC discussed in Section 6.2. To obtain the expression for F(s), we
have

F(s) = ∫_{-∞}^{∞} f(t)e^{-st} dt = ∫_{-∞}^{∞} f₁(t − t₀)e^{-st} dt     (6.3-13a)

This can be put in the form of the Laplace transform of f₁(t) by the change of
variable τ = t − t₀, with which we obtain

F(s) = e^{-st₀} ∫_{-∞}^{∞} f₁(τ)e^{-sτ} dτ = F₁(s)e^{-st₀}     (6.3-13b)

This is Eq. (6.3-12b) so that the desired property is proved.


As a simple example of this result, let f₁(t) = f_a(t), in which f_a(t) is defined in
Section 6.1, and let t₀ = −2 so that

f(t) = f_a(t + 2) = Ae^{-a(t+2)}u(t + 2)     (6.3-14a)

In accordance with the time-shift property, we then have

F(s) = F_a(s)e^{2s} = Ae^{2s}/(s + a),   σ > −a     (6.3-14b)

Note that the RAC is to the right of the pole of F(s) but f(t) ≠ 0 for t < 0. Later,
we'll determine conditions on F(s) from which we can determine whether f(t) = 0
for t < 0. As stated previously, such a function is important in our study because it
can be the impulse response of a causal LTI system.
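Equation (6.3-14b) can be confirmed by evaluating the bilateral integral directly, as sketched below. The sketch is not from the text and assumes sympy is available; since f(t) is nonzero only for t ≥ −2, the integral runs from −2 to infinity, and declaring a and s positive keeps it convergent.

    import sympy as sp

    t = sp.symbols('t', real=True)
    a, s, A = sp.symbols('a s A', positive=True)

    F = sp.integrate(A*sp.exp(-a*(t + 2))*sp.exp(-s*t), (t, -2, sp.oo))
    print(sp.simplify(F))    # A*exp(2*s)/(a + s), i.e. F_a(s) e^{2s}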

6.3D The Frequency-Shift Property


Let the Laplace transform of f₁(t) be F₁(s) with the RAC σ_a < σ < σ_b. Then the
Laplace transform of

f(t) = e^{s₀t}f₁(t)     (6.3-15a)

in which s₀ = σ₀ + jω₀, is

F(s) = F₁(s − s₀),   σ_a + σ₀ < σ < σ_b + σ₀     (6.3-15b)



This property is called a frequency-shift property because the Laplace transform of
f₁(t) has been shifted by an amount s₀. To obtain this property, our first task is to
determine the RAC of f(t). In accordance with Eq. (6.1-2b), we must determine the
values of σ for which I < ∞, where

I = ∫_{-∞}^{∞} |f(t)e^{-σt}| dt = ∫_{-∞}^{∞} |f₁(t)e^{s₀t}e^{-σt}| dt     (6.3-16a)

  = ∫_{-∞}^{∞} |f₁(t)e^{-(σ-σ₀)t}| dt     (6.3-16b)

Equation (6.3-16b) was obtained with the use of the relation

|e^{s₀t}| = |e^{σ₀t}e^{jω₀t}| = e^{σ₀t}

which is obtained using results from Appendix A. Observe that the integral in Eq.
(6.3-16b) is that for the RAC of f₁(t) but with σ replaced with σ − σ₀. We thus
conclude that the RAC of f(t) is σ_a < σ − σ₀ < σ_b, which is equivalent to that given
in Eq. (6.3-15b). With σ in this interval, we then have that the Laplace transform of
f(t) is

F(s) = ∫_{-∞}^{∞} f₁(t)e^{s₀t}e^{-st} dt = ∫_{-∞}^{∞} f₁(t)e^{-(s-s₀)t} dt = F₁(s − s₀)

This is Eq. (6.3-15b), as was to be shown.


As an example, let f₁(t) = u(t), for which F₁(s) = 1/s with σ > 0. We then can
obtain the Laplace transform of f(t) = e^{-at}u(t) because this is Eq. (6.3-15a) with
s₀ = −a. With the use of the frequency-shift property, Eq. (6.3-15b), we have that
F(s) = F₁(s + a) = 1/(s + a) with σ > −a.

6.3E The Convolution Property


Some of the important uses of the convolution property of Fourier transforms were
discussed in Chapter 5. We expect a similar, but more useful, Laplace transform
property because the Fourier transform is a special case of the Laplace transform.
Let

f(t) = f₁(t) * f₂(t) = ∫_{-∞}^{∞} f₁(τ)f₂(t − τ) dτ     (6.3-19a)

Let the Laplace transform of f₁(t) be F₁(s) with the RAC σ_a < σ < σ_b, and let the
Laplace transform of f₂(t) be F₂(s) with the RAC σ_c < σ < σ_d. We shall show that if
the RACs of f₁(t) and f₂(t) overlap, then the Laplace transform of f(t) is

F(s) = F₁(s)F₂(s)     (6.3-19b)

with σ in the overlap of the RACs of f₁(t) and f₂(t). Before proving this property,
note that it is what we intuitively would expect because if σ = 0 lies in the RAC of
f₁(t) and of f₂(t), then we can let σ = 0 in Eq. (6.3-19b) to obtain Eq. (5.5-20b), the
convolution property of Fourier transforms. Further, since Eq. (6.3-19b) involves
both F₁(s) and F₂(s), the equation can be valid only in the overlap of the RACs
of both functions, for the same reason given following Eq. (6.3-4).
We shall prove this property by a somewhat different procedure than that used
previously; we shall simply obtain the expressions for F(s) and note the values of σ
for which the expressions are valid. To begin, the integral expression for F(s) is

F(s) = ∫_{-∞}^{∞} f(t)e^{-st} dt     (6.3-20a)

Substituting Eq. (6.3-19a), we obtain

F(s) = ∫_{-∞}^{∞} [∫_{-∞}^{∞} f₁(τ)f₂(t − τ) dτ] e^{-st} dt     (6.3-20b)

The integration in this double integral is first with respect to τ and then with respect
to t. Let us interchange the order of integration by first integrating with respect to t
and then with respect to τ. The double integral then is

F(s) = ∫_{-∞}^{∞} f₁(τ) [∫_{-∞}^{∞} f₂(t − τ)e^{-st} dt] dτ     (6.3-20c)

From the time-shift theorem, we note that the value of the integral in the brackets is
F₂(s)e^{-sτ} for σ in the RAC of f₂(t). Thus Eq. (6.3-20c) is

F(s) = F₂(s) ∫_{-∞}^{∞} f₁(τ)e^{-sτ} dτ     (6.3-20d)

The function F₂(s) has been factored out of the integral because it is a constant in the
integration with respect to τ. The value of the integral is recognized to be F₁(s) for σ

For σ in the overlap, the double integral converges absolutely, which is sufficient to ensure that the
interchange of the integration order used to obtain Eq. (6.3-20c) is valid.

in the RAC of f₁(t). Thus, for σ in the RAC of f₁(t) and f₂(t), which is the overlap of
the RACs, we obtain

F(s) = F₁(s)F₂(s)     (6.3-20e)

which is Eq. (6.3-19b). We shall use and illustrate this property extensively in
subsequent chapters.
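The convolution property can be checked on a concrete pair of functions, as sketched below. The sketch is not from the text; it assumes sympy is available and uses f₁(t) = e^{-t}u(t) and f₂(t) = e^{-2t}u(t).

    # The convolution of the two exponentials is e^{-t} - e^{-2t} for t >= 0,
    # and its transform should equal F1(s) F2(s) = 1/((s + 1)(s + 2)).
    import sympy as sp

    t, tau, s = sp.symbols('t tau s', positive=True)

    conv = sp.integrate(sp.exp(-tau)*sp.exp(-2*(t - tau)), (tau, 0, t))   # e^{-t} - e^{-2t}
    F_conv = sp.laplace_transform(conv, t, s, noconds=True)
    product = 1/((s + 1)*(s + 2))

    print(sp.simplify(F_conv - product))   # 0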

6.3F The Time-Differentiation Property


Let the Laplace transform of f₁(t) be F₁(s) with the RAC σ_a < σ < σ_b. Then the
Laplace transform of

f(t) = f₁′(t) = df₁(t)/dt     (6.3-21a)

is

F(s) = sF₁(s),   σ_a < σ < σ_b     (6.3-21b)

We shall have important applications for this property in subsequent sections.


As in the proof of the convolution property, we prove this property by obtaining
the expression for F(s) and noting the values of σ for which the expressions required to
obtain F(s) are valid. We begin with the integral expression for F(s):

F(s) = ∫_{-∞}^{∞} f(t)e^{-st} dt,   σ_a < σ < σ_b
     = ∫_{-∞}^{∞} f₁′(t)e^{-st} dt     (6.3-22)

To express this integral in the form of the Laplace transform of f₁(t), we integrate by
parts as we did in obtaining the Fourier transform time-differentiation property in
Section 5.5. From Eq. (5.5-23), the integration-by-parts formula is

∫ u dv = uv − ∫ v du     (6.3-23)

To integrate the integral in Eq. (6.3-22) by parts, we choose

u = e^{-st}  and  dv = f₁′(t) dt

for which

du = −se^{-st} dt  and  v = f₁(t)


We then have for Eq. (6.3-22)

F(s) = f₁(t)e^{-st} |_{-∞}^{∞} − ∫_{-∞}^{∞} (−s)f₁(t)e^{-st} dt     (6.3-24)

The value of the first term is zero for σ in the RAC of f₁(t). To see this, note that

|f₁(t)e^{-st}| = |f₁(t)e^{-σt}e^{-jωt}| = |f₁(t)e^{-σt}|     (6.3-25)

because |e^{-jωt}| = 1. Now, for σ in the RAC of f₁(t),

∫_{-∞}^{∞} |f₁(t)e^{-σt}| dt < ∞     (6.3-26)

in accordance with Eq. (6.1-2b). Consequently, lim_{|t|→∞} |f₁(t)e^{-σt}| = 0 because
otherwise the area under |f₁(t)e^{-σt}| would be infinite. Thus, from Eq. (6.3-25),
lim_{|t|→∞} |f₁(t)e^{-st}| = 0, so that the first term in Eq. (6.3-24) is zero. For σ in the
RAC of f₁(t), the integral in the second term of Eq. (6.3-24) is recognized to be equal
to −sF₁(s). Substituting this value in Eq. (6.3-24) results in Eq. (6.3-21b), which
was to be shown.
To illustrate one application of this property, we shall determine the Laplace
transform of the exponential,

f_a(t) = Ae^{-at}u(t)     (6.3-27a)

which we determined as Example 1 in Section 6.1. The derivative of this function,
obtained in Section 5.8, Eq. (5.8-8), is

f_a′(t) = Aδ(t) − Aae^{-at}u(t) = Aδ(t) − af_a(t)     (6.3-27b)

In accordance with the time-differentiation property, the bilateral Laplace transform
of this equation is

sF_a(s) = A − aF_a(s)     (6.3-27c)

This is an algebraic equation from which we obtain the algebraic expression for
F_a(s):

F_a(s) = A/(s + a)     (6.3-27d)

The RAC of f_a(t) cannot be determined by this procedure; it must be determined
separately. However, for the present case, we can use the result obtained in Section
6.2 concerning the RAC of functions that are zero for t < 0. From that result, the
RAC of f_a(t) must be to the right of all the poles of F_a(s). Because there is only one

pole of F_a(s), which is at s = −a, we conclude that the RAC of f_a(t) must be σ > −a.
This is the result obtained previously, Eq. (6.1-7).

6.3G The Frequency-Differentiation Property


The last important property we discuss is a dual of the time-differentiation property.
Let the Laplace transform of f₀(t) be F₀(s) with the RAC σ_a < σ < σ_b. Then the
frequency-differentiation property states that the Laplace transform of

f(t) = tf₀(t)     (6.3-28a)

is

F(s) = −dF₀(s)/ds,   σ_a < σ < σ_b     (6.3-28b)
We show this by starting with the integral expression for F₀(s):

F₀(s) = ∫_{-∞}^{∞} f₀(t)e^{-st} dt,   σ_a < σ < σ_b     (6.3-29a)

We assume that the derivative of F₀(s) exists. Then, differentiating, we obtain

F(s) = −(d/ds)F₀(s) = −(d/ds) ∫_{-∞}^{∞} f₀(t)e^{-st} dt,   σ_a < σ < σ_b     (6.3-29b)

The order of differentiation and integration can be interchanged if F(s) is continuous.
With this condition, we then obtain

F(s) = −∫_{-∞}^{∞} f₀(t)(d/ds)e^{-st} dt = −∫_{-∞}^{∞} (−t)f₀(t)e^{-st} dt = ∫_{-∞}^{∞} tf₀(t)e^{-st} dt,   σ_a < σ < σ_b     (6.3-29c)

The last integral is recognized as the Laplace transform of tf₀(t), which was to be
shown.
As an application of this property, let f₀(t) = Ae^{-at}u(t). From Eq. (6.1-7), the
Laplace transform of this function is F₀(s) = A/(s + a) with the RAC σ > −a. Thus
we have, with the use of the frequency-differentiation property, that the Laplace
transform of

f₁(t) = tf₀(t) = Ate^{-at}u(t)     (6.3-30a)

is

F₁(s) = −(d/ds)F₀(s) = −(d/ds)[A/(s + a)] = A/(s + a)²,   σ > −a     (6.3-30b)


and the Laplace transform of

f₂(t) = tf₁(t) = At²e^{-at}u(t)     (6.3-31a)

is

F₂(s) = −(d/ds)F₁(s) = −(d/ds)[A/(s + a)²] = 2A/(s + a)³,   σ > −a     (6.3-31b)

By continuing in this manner, it is easily shown that the Laplace transform of

f_n(t) = At^n e^{-at}u(t),   n = 0, 1, 2, 3, . . .     (6.3-32a)

is

F_n(s) = An!/(s + a)^{n+1},   σ > −a,   n = 0, 1, 2, 3, . . .     (6.3-32b)
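A spot check of Eq. (6.3-32) is sketched below for n = 3 and A = 1. It is not part of the text and assumes sympy is available.

    import sympy as sp

    t, s, a = sp.symbols('t s a', positive=True)
    n = 3

    F = sp.laplace_transform(t**n * sp.exp(-a*t), t, s, noconds=True)
    print(sp.simplify(F - sp.factorial(n)/(s + a)**(n + 1)))   # 0, i.e. F(s) = n!/(s + a)^{n+1}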

6.3H Concluding Example


The various properties developed in this chapter can be used together to greatly
simplify the determination of the Laplace transform of a function. For example,
consider the determination of the Laplace transform of the triangular waveform
shown in Fig. 6.3-2. We could obtain the Laplace transform of this function by
direct integration. However, we note that this waveform is composed of a sequence
of straight lines. Knowing that the sum of straight lines is a straight line, we instead
begin by defining the function

g(t) = tu(t)     (6.3-33)

This function is a ramp with a slope equal to one. In terms of the ramp, we then can
express f(t) as

f(t) = (A/T)[g(t) − 2g(t − T) + g(t − 2T)]     (6.3-34)

Fig. 6.3-2 Graph of the triangular waveform, f(t).



You should verify the correctness of this expression by drawing a graph of each term
of this expression and also the sum of the graphs. For this, show that the sum of two
straight lines is a straight line with a slope equal to the sum of the slopes of the two
lines added.
Now, with the use of the time-shift property and the linearity property, we have
that

F(s) = (A/T)[G(s) − 2G(s)e^{-Ts} + G(s)e^{-2Ts}]

     = (A/T)G(s)[1 − 2e^{-Ts} + e^{-2Ts}]     (6.3-35)

     = (A/T)G(s)[1 − e^{-Ts}]²

To obtain G(s), we have shown that the Laplace transform of u(t) is 1/s with the
RAC σ > 0. Thus, from the frequency-differentiation property, we have

G(s) = −(d/ds)(1/s) = 1/s²,   σ > 0     (6.3-36)

Note that G(s) also can be obtained from Eq. (6.3-32) for the case n = 1, a = 0, and
A = 1. Substituting the expression for G(s) into Eq. (6.3-35), we obtain

F(s) = (A/T)(1/s²)[1 − e^{-Ts}]²     (6.3-37)

This is the expression for F(s). But what is the RAC? The RAC of Eq. (6.3-35) is
σ > 0 because that is the RAC of G(s). This can be seen by noting that the expression
involves G(s), so that the only allowed values of σ are those that lie in the RAC
of g(t). However, once the expression for G(s) is substituted, the RAC could be
larger. We could determine the RAC by actually determining the values of σ for
which I [in Eq. (6.2-1)] is finite. However, the RAC of f(t) also can be determined
for our example by making use of some properties of the RAC determined in Section
6.2.
First, because f(t) = 0 for t < 0, we use the property that the RAC must be to the
right of all the poles of F(s). Now it would appear from Eq. (6.3-36) that there is a
pole of F(s) at s = 0. However, things are not always as they appear at first glance.
Using the power series expansion of the exponential, Eq. (A-10), we have

e^{-Ts} = 1 − Ts + (Ts)²/2! − (Ts)³/3! + ···     (6.3-38)

By substituting this expansion into Eq. (6.3-37), we then obtain

F(s) = (A/T)[T − (T²/2!)s + (T³/3!)s² − (T⁴/4!)s³ + ···]²     (6.3-39)

Note that lim_{s→0} F(s) = AT, so that there is no pole at s = 0. The reason is that
[1 − e^{-Ts}] has a zero at s = 0 which cancels the pole there. In fact, there are no poles
in the finite s plane, so that the RAC must be the whole s plane. That is, the RAC of
f(t) is −∞ < σ < ∞. Thus, the Laplace transform of f(t) is

F(s) = (A/T)(1/s²)[1 − e^{-Ts}]²,   −∞ < σ < ∞     (6.3-40)

Another method by which the RAC can be determined is to note from Fig. 6.3-2
that

∫_{-∞}^{∞} |f(t)| dt < ∞

Thus, from the L₁ property of the RAC, we have that σ = 0 must lie in the RAC. We
immediately conclude from this that there is no pole of F(s) at σ = 0 because, as we've
discussed, a pole cannot lie in the RAC. Thus we observe from Eq. (6.3-40) that F(s)
has no poles. Thus the RAC must be −∞ < σ < ∞ because, from our discussion of
the interval property of the RAC, the RAC is an interval between adjacent poles.
Note how use of the properties of the RAC and the Laplace transform greatly
simplified the determination of the transform of a function. Because σ = 0 lies in the
RAC of f(t), we can let s = 0 in the expression for F(s), Eq. (6.1-4b), to obtain

F(0) = ∫_{-∞}^{∞} f(t) dt     (6.3-41)

Thus, if σ = 0 lies in the RAC of f(t), then F(0) equals the area under f(t). You
should verify this result for our example.
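A numerical confirmation of these two results is sketched below. It is not part of the text and assumes sympy is available; the area of the triangle of height A and base 2T is (1/2)(2T)(A) = AT.

    import sympy as sp

    s, A, T = sp.symbols('s A T', positive=True)
    F = (A/T)*(1 - sp.exp(-T*s))**2/s**2

    print(sp.limit(F, s, 0))   # A*T: the limit is finite, so there is no pole at s = 0,
                               # and F(0) equals the area under f(t)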

PROBLEMS

6-1 The RAC for the function f_d(t) of Example 4 in Section 6.1 was shown to be
−a < σ < −b.
(a) Verify this result by sketching f_d(t) for a = 2 and b = 0 and then showing
that the area under |f_d(t)e^{-σt}| is finite only if −2 < σ < 0.
(b) Show that for the case a = 0 and b = 2 there is no value of s for which
the area under |f_d(t)e^{-σt}| is finite, so that the Laplace transform of f_d(t) for
this case does not exist.

6-2 Show that the Laplace transform of f(t) = Aδ(t) is F(s) = A with the RAC
−∞ < σ < ∞.

6-3 The unit impulse response of a given LTI system is h(t) = δ(t − t₀) +
e^{-at}u(t).
(a) Determine H(s), the Laplace transform of h(t). Do not forget to specify
the RAC.
(b) If the value of a is such that the given system is stable, what would be the
system transfer function?

6-4 (a) Determine F(s), the Laplace transform of f(t) = cos(ω₀t)u(t). Do not
forget to determine the RAC.
(b) Sketch the s-plane pole-zero diagram of the function F(s) and specify the
location of each pole and zero.

6-5 The unit impulse response of a given LTI system is h(t) = [1 + e^{-2t}]u(t).
(a) Determine H(s), the Laplace transform of h(t). Do not forget to specify
the RAC.
(b) Is the given system stable? Give a short time-domain and an s-plane
statement of your reason.
(c) Is the given system causal? Give a short statement of your reason.

6-6 (a) Determine the bilateral Laplace transform of f(t) = e^{-3|t|}. Do not forget
to specify the RAC.
to specify the RAC.
(b) Could f (t) be the unit-impulse response of a stable LTI system? Give a
short time domain and an s-plane statement of your reason.
(c) Could f(t) be the unit-impulse response of a causal LTI system? Give a
short statement of your reason.

6-7 Let f₁(t) = u(t) and f₂(t) = −u(−t). Because the two functions are not equal,
their transforms should not be equal. How do they differ?

6-8 Let f(t) = 0 for t > 0. Show that all the poles of F(s) lie to the right of the
RAC of f(t).

6-9 The RAC of the function shown in Fig. 6.3-2 was determined to be
−∞ < σ < ∞. This is a special case of a general result, which is: The
RAC of any bounded function, f(t), which is nonzero only over a finite range
of t, t₁ < t < t₂, is −∞ < σ < ∞. This means that the RAC of such a
function includes the whole s plane. Prove this general result.

6-10 Go through the details of the derivation of Eq. (6.3-9) to show that the
constant after the equal sign is 1/|c|.

6-11 Use the result given by Eq. (6.1-20) and the scaling property to
determine the bilateral Laplace transform of f(t) = Be^{at} cos(ω₀t − θ)u(−t).

6-12 Obtain the result given by Eq. (6.3-14b) by direct integration.

6-13 Determine the Laplace transform of f(t) = (1 − t)r(t) by using the differentiation
property.

6-14 Use the differentiation property to determine the Laplace transform of
f(t) = cos(ω₀t)u(t).

6-15 We have shown that the Laplace transform of f₁(t) = u(t) is F₁(s) = 1/s with
the RAC σ > 0. Use this result and the frequency-shift property to obtain the
Laplace transform of f(t) = Be^{-at} cos(ω₀t + φ)u(t).
6-16 Use the frequency-differentiation property and the transforms obtained in the
text to determine the bilateral Laplace transform of the following functions:
(a) f_a(t) = tu(t)
(b) f_b(t) = t cos(ω₀t)u(t)
(c) f_c(t) = t r(t/T)

6-17 Let g(t) = r(t/T).
(a) Use the convolution integral to determine f(t) = g(t)*g(t).
(b) Use the convolution property of the Laplace transform to determine F(s).

6-18 A technique that can be used to determine the Laplace transform of some
functions is to obtain a differential equation of which the transform is a
solution and then solve the differential equation for the transform. Some of
the examples in Section 6.3 used this technique. In this problem, we'll further
illustrate this technique by determining the Laplace transform of
f(t) = e^{-(α/2)t²}, α > 0.
(a) First show that the RAC of f(t) is −∞ < σ < ∞.
(b) Show that f(t) satisfies the differential equation f′(t) + αtf(t) = 0.
(c) Use the properties in Table 7.4-2 to show that the Laplace transform of
f(t) satisfies the differential equation F′(s) − (1/α)sF(s) = 0.
(d) Note that the differential equation in part (c) is similar to that in part (b). The
essential difference is that α has been replaced with −1/α. From this
observation, conclude that the solution of the differential equation in part
(c) must be F(s) = Ke^{(1/2α)s²}, −∞ < σ < ∞, in which K is a constant.
(e) To determine the constant K, show that K = ∫_{-∞}^{∞} e^{-(α/2)t²} dt.

The value of the integral in part (e) can be shown to be √(2π/α), so that
F(s) = √(2π/α) e^{(1/2α)s²}, −∞ < σ < ∞.
(f) Show that the Fourier transform of f(t) exists and determine it.

6-19 The desired unit-impulse response of an LTI system is h_d(t) = e^{-a|t|} with
a > 0.
(a) Show that the desired system is stable but not causal.
The desired system cannot be constructed because it is not causal.
Consequently, one technique considered is to construct an LTI system
with the unit-impulse response

h(t) = h_d(t − t₀)u(t) = e^{-a|t-t₀|}u(t)

This system would be stable and causal. Note that the difference is

h_e(t) = h_d(t − t₀) − h(t) = e^{-a|t-t₀|}u(−t)

(b) For a given input, x(t), show that the difference between the output of the
desired system delayed by t₀ seconds and the output of the constructed
system is y_e(t) = h_e(t)*x(t).
(c) Show that for a bounded input for which |x(t)| < M_x, we have

|y_e(t)| < (M_x/a)e^{-at₀}

so that the difference decreases for increasing delay and goes to zero as
t₀ → ∞. Observe that, by this technique, the response of any stable but
noncausal LTI system can be obtained with arbitrarily small error by
accepting arbitrarily large delay.
We now examine how the transfer function of the constructed system
compares with that of the desired system. For this, we first determine the
transfer function of the constructed system. For this, we'll determine the
system function, H(s), and then let s = jω to obtain the transfer function,
H(jω). This can be done because the constructed system is stable, so that
the ω axis lies in the RAC. A nice way to determine the system function,
H(s), is to use the differentiation property of the Laplace transform.

(d) Show that h′(t) = e^{-at₀}δ(t) + g(t), where g(t) = ae^{-a|t-t₀|}[u(t) − 2u(t − t₀)].
(e) Show that g′(t) = ae^{-at₀}δ(t) − 2aδ(t − t₀) + a²h(t).
(f) Use the differentiation property together with the results of parts (d) and (e)
to show that H(s) is

H(s) = [(s + a)e^{-at₀} − 2ae^{-st₀}]/(s² − a²)

(g) Because the constructed system is stable and causal, we have from
Section 6.2 that all the poles of H(s) must lie in the left half of the s
plane. Thus there should not be a pole of H(s) at s = a. Show that this is
so.
(h) Now obtain the transfer function of the constructed system and show that,
for large enough delay, the gain of the constructed system is approximately
that of the desired system and the phase shift of the constructed
system is approximately that of the desired system minus ωt₀ radians.
The difference, ωt₀, is the phase shift due to the delay as discussed in
Section 5.7.
CHAPTER 7

THE INVERSE BILATERAL LAPLACE TRANSFORM

The use of the Laplace transform simplifies and also lends insight into many time
domain operations. However, for it to be useful for our purposes there must be a one-
to-one mapping from the time domain to the s domain. That is, it is useful only if
there is only one transform, F(s), for every time function, f(t), and vice versa. If the
mapping is one-to-one, then there must be a method by which the time function, f(t),
can be obtained from its Laplace transform, F(s). The time function so obtained is
called the inverse Laplace transform of F(s). We shall develop the formula for the
inverse Laplace transform and methods for evaluating it in this chapter.

7.1 THE INVERSE LAPLACE TRANSFORM

We begin by recapitulating our theoretical development of the Laplace transform.
We began by first defining the function

g(t) = f(t)e^{-σt}     (7.1-1a)

[see Eq. (6.1-2a)] and choosing σ so that g(t) is an L₁ function, that is, so that

I = ∫_{-∞}^{∞} |g(t)| dt = ∫_{-∞}^{∞} |f(t)e^{-σt}| dt < ∞     (7.1-1b)

[see Eq. (6.1-2b)]. We found that the values of σ for which I < ∞ lie in an interval,
σ_a < σ < σ_b, which we call the RAC of f(t). This is the interval property of the
RAC obtained in Section 6.2. Thus, in accordance with our Fourier transform result,
Eqs. (5.1-1), (5.1-2), and (5.1-3), the Fourier transform of g(t) exists for σ in the

RAC of f(t). With the use of the definition of g(t), the Fourier transform of g(t) was
expressed as

G(jω) = ∫_{-∞}^{∞} g(t)e^{-jωt} dt = ∫_{-∞}^{∞} f(t)e^{-(σ+jω)t} dt     (7.1-1c)

[see Eq. (6.1-3b)]. In accordance with Eqs. (6.1-3c) and (6.1-4),

F(s) = F(σ + jω) = G(jω)     (7.1-1d)

where F(s) is the Laplace transform of f(t). That is, the Laplace transform of f(t) is
simply the Fourier transform of g(t).
To develop the inverse Laplace transform, we first note that because g(t) is an L₁
function for σ in the RAC of f(t), σ_a < σ < σ_b, we have, in accordance with
Eqs. (5.1-3), that the inverse Fourier transform of g(t) exists (so that the mapping
of g(t) to G(jω) is one-to-one) and is

g(t) = (1/2π) ∫_{-∞}^{∞} G(jω)e^{jωt} dω     (7.1-2a)

We now substitute Eqs. (7.1-1a) and (7.1-1d) to obtain

f(t)e^{-σt} = (1/2π) ∫_{-∞}^{∞} F(σ + jω)e^{jωt} dω     (7.1-2b)

Because the exponential, e^{-σt}, is not equal to zero for any value of t, we can divide
both sides of this equation by e^{-σt} to obtain

f(t) = (1/2π) ∫_{-∞}^{∞} F(σ + jω)e^{(σ+jω)t} dω     (7.1-2c)

The exponential was put under the integral because the integration is with respect to
ω and the exponential is not a function of ω. This equation is the desired inverse
transform. However, because all our expressions for the Laplace transform are in
terms of s, a nicer form of this expression is in terms of s and not in terms of its
component parts, σ and ω. To obtain the desired expression, we substitute
s = σ + jω to obtain

f(t) = (1/2π) ∫_{-∞}^{∞} F(s)e^{st} dω     (7.1-2d)

To complete our substitution, we must express the integral in terms of s. For this, we
note that the integral is with respect to ω, so that σ is a constant with a value σ = σ₀,

which is within the RAC of f(t). Thus ds = d(σ₀ + jω) = j dω or, equivalently,
dω = (1/j) ds. Thus the integral in Eq. (7.1-2d) in terms of s is

f(t) = (1/2πj) ∫_{σ₀-j∞}^{σ₀+j∞} F(s)e^{st} ds,   σ_a < σ₀ < σ_b     (7.1-3a)

This is the desired expression for the inverse Laplace transform in terms of s.
Let us first note that we only used the inverse Fourier transform of G(jω) to
obtain Eq. (7.1-3a). Thus, because the mapping of g(t) to G(jω) is one-to-one,
we conclude that the mapping of f(t) to F(s) for σ in the RAC of f(t) also is
one-to-one. This means that the Laplace transform of f(t) given by

F(s) = ∫_{-∞}^{∞} f(t)e^{-st} dt,   σ_a < σ < σ_b     (7.1-3b)

and the inverse Laplace transform given by Eq. (7.1-3a) are a transform pair. Thus, if
Eq. (7.1-3b) is evaluated with f_a(t) to obtain F_a(s), then the evaluation of Eq. (7.1-3a)
with F_a(s) and σ₀ in the RAC of f_a(t) results in the same function, f_a(t), with which
we started. For example, we determined in Section 6.1 that the Laplace transform of

f_a(t) = Ae^{-at}u(t)     (7.1-4a)

is

F_a(s) = A/(s + a),   σ > −a     (7.1-4b)
Thus, from Eq. (7.1-3a), we have

AePu(t) =- e"' ds, 6, > -a (7.1-5)

Thus, if the integral in Eq. (7.1-5) were evaluated with a negative value o f t , the
value of the integral would be zero. Also, if a positive value o f t were used, the value
of the right side of Eq. (7.1-5) would be Ae-"'. To better understand the integration
in Eq. (7.1-5), consider an s-plane view of it as shown in Fig. 7.1-1. The figure is
drawn for a > 0. As shown, the integration is along a vertical line in the s plane
because cr, is a constant in the integral. The line is to the right of the pole at s = -a
because go > -a. The value of the integral will be the same no matter what value of
o0 is used as long as it is greater than -a. To actually perform this integration
requires results from an area of mathematics called complex variable theory. For our
study of LTI system theory, we won't need those results because we can make use of
our result that Eqs. (7.1-3) are a Laplace transform pair as we did to evaluate the
integral in Eq. (7.1-5).

I"" Q

Fig. 7.1-1 Depiction of the integral in Eq. (7.1-5) (drawn for the case a > 0).
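Rather than carrying out the line integral, we can rely on the transform-pair result, and symbolic software does the same. The sketch below is not part of the text; it assumes sympy is available, whose inverse_laplace_transform returns the time function whose RAC lies to the right of the poles, which is the case drawn in Fig. 7.1-1.

    import sympy as sp

    t = sp.symbols('t', real=True)
    s = sp.symbols('s')
    a, A = sp.symbols('a A', positive=True)

    f = sp.inverse_laplace_transform(A/(s + a), s, t)
    print(f)   # A*exp(-a*t)*Heaviside(t), i.e. f_a(t) as in Eq. (7.1-4a)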

What would be the value of the integral in Eq. (7.1-5) if the integration were
performed with a value of σ₀ which is less than −a? For this, first consider the result
obtained in Section 6.1 that the Laplace transform of

f_b(t) = Be^{-bt}u(−t)     (7.1-6a)

is

F_b(s) = −B/(s + b),   σ < −b     (7.1-6b)

Thus we have from Eq. (7.1-3a) that

Be^{-bt}u(−t) = (1/2πj) ∫_{σ₀-j∞}^{σ₀+j∞} [−B/(s + b)]e^{st} ds,   σ₀ < −b     (7.1-7)

Thus, if the integral were evaluated with a negative value of t, the value of the right-
hand side of Eq. (7.1-7) would be Be^{-bt}. Also, if a positive value of t were used, the
value of the integral would be zero. The s-plane view of this integral is shown in
Fig. 7.1-2.
The integration is along a vertical line as shown in the s plane because σ₀ is a
constant in the integral. The line is to the left of the pole at s = −b because
σ₀ < −b. The value of the integral will be the same no matter what value of σ₀ is
used as long as it is less than −b.


Fig. 7.1-2 Depiction of the integral in Eq. (7.1-7) (drawn for the case b < 0).

We can now determine the value of the integral in Eq. (7.1-5) if the integration
were performed with a value of σ₀ which is less than −a. For this, choose B = −A
and b = a in Eq. (7.1-7). We then have

−Ae^{-at}u(−t) = (1/2πj) ∫_{σ₀-j∞}^{σ₀+j∞} [A/(s + a)]e^{st} ds,   σ₀ < −a     (7.1-8)

The integrand in Eq. (7.1-8) is the same as that in Eq. (7.1-5). The only difference is
that the integration in Eq. (7.1-5) is along a line to the right of the pole, while the
integration in Eq. (7.1-8) is along a line to the left of the pole. Note that the
integration along a line to the right of the pole results in a time function that is
zero for t < 0, while integration along a line to the left of the pole results in a
function that is zero for t > 0.
We now have evaluated the inverse Laplace transform integral for two cases. One
case is given by Eq. (7.1-5) and depicted in Fig. 7.1- 1. The second case is given by
Eq. (7.1-7) and depicted in Fig. 7.1-2. To continue our development of the inverse
Laplace transform, we now need a linearity property.

7.2 THE LINEARITY PROPERTY OF THE INVERSE LAPLACE TRANSFORM

Let the Laplace transform of f(t) be F(s) with the RAC σ_a < σ < σ_b. Now express
F(s) as the sum of two functions as

F(s) = F₁(s) + F₂(s)     (7.2-1a)

Then, with the use of the inverse Laplace transform equation, Eq. (7.1-3a), we obtain

f(t) = (1/2πj) ∫_{σ₀-j∞}^{σ₀+j∞} F₁(s)e^{st} ds + (1/2πj) ∫_{σ₀-j∞}^{σ₀+j∞} F₂(s)e^{st} ds = f₁(t) + f₂(t)     (7.2-1b)

where σ₀ lies in the RACs of both f₁(t) and f₂(t). This property states that if we
express F(s) as the sum of two functions as given by Eq. (7.2-1a) and obtain the
inverse Laplace transform of each function individually, then f(t) will be given as the

sum of two functions as given by Eq. (7.2-1b), in which the RAC of f₁(t) and the
RAC of f₂(t) overlap, with σ₀ lying in the overlap.
As an illustration of this important property, consider the function

F(s) = (b − a)/[(s + a)(s + b)],   −a < σ < −b     (7.2-2a)

This is the Laplace transform of f_d(t) determined in Section 6.1. First express F(s) as
the sum of two functions as

F(s) = −1/(s + b) + 1/(s + a),   −a < σ < −b     (7.2-2b)

In accordance with the linearity property, the inverse Laplace transform is

f(t) = f₁(t) + f₂(t)     (7.2-3a)

where

f₁(t) = (1/2πj) ∫_{σ₀-j∞}^{σ₀+j∞} [−1/(s + b)]e^{st} ds     (7.2-3b)

and

f₂(t) = (1/2πj) ∫_{σ₀-j∞}^{σ₀+j∞} [1/(s + a)]e^{st} ds     (7.2-3c)

We note in the integral for f₁(t) that σ₀ < −b, so that the integral is along a vertical
line in the s plane which is to the left of the pole at s = −b. Thus this integral is
exactly the same as that in Eq. (7.1-7) with B = 1, so that

f₁(t) = e^{-bt}u(−t)     (7.2-4a)

Also, we note in the integral for f₂(t) that σ₀ > −a, so that the integral is along a
vertical line in the s plane which is to the right of the pole at s = −a. Thus this
integral is exactly the same as that in Eq. (7.1-5) with A = 1, so that

f₂(t) = e^{-at}u(t)     (7.2-4b)

Therefore we have, in accordance with Eq. (7.2-3a),

f(t) = e^{-bt}u(−t) + e^{-at}u(t)     (7.2-4c)

This function is f_d(t) given by Eq. (6.1-23), as it should be because Eq. (7.2-2a) is its
Laplace transform.

The function F(s) = (b − a)/[(s + a)(s + b)] has two poles, one at s = −a and the
other at s = −b. From the properties of the RAC determined in Section 6.2, we
found that the RAC always is an interval that is between adjacent poles. Thus, there
are three possible RACs for this function. They are: σ < −a, −a < σ < −b, and
σ > −b. The time function given by Eq. (7.2-4c) is the inverse Laplace transform of
F(s) for the case in which the RAC is −a < σ < −b. We would obtain a different
time function if the RAC were one of the other two possibilities. There corresponds a
different time function for each different RAC. It is the RAC that makes the Laplace
transform a one-to-one mapping.
To illustrate, let us determine the time functions that correspond with the other
two possible RACs for F(s). For this, we first express F(s) as the sum of two
functions as in Eq. (7.2-2b):

F(s) = (b − a)/[(s + a)(s + b)] = −1/(s + b) + 1/(s + a)     (7.2-5)

In accordance with the linearity property, we have that

f(t) = f₁(t) + f₂(t)     (7.2-6a)

where, with σ₀ in the RAC of f(t), we obtain

f₁(t) = (1/2πj) ∫_{σ₀-j∞}^{σ₀+j∞} [−1/(s + b)]e^{st} ds     (7.2-6b)

and

f₂(t) = (1/2πj) ∫_{σ₀-j∞}^{σ₀+j∞} [1/(s + a)]e^{st} ds     (7.2-6c)

If the RAC is σ < −a, we must choose σ₀ < −a in Eqs. (7.2-6). Then, for a value of
σ₀ which is less than −a, we have that the integral for f₁(t) is along a vertical line in
the s plane which is to the left of the pole at s = −b. This integral is thus exactly the
same as that in Eq. (7.1-7) with B = 1, so that f₁(t) = e^{-bt}u(−t). In the integral for
f₂(t), we note that the integral is along a vertical line in the s plane which is also to
the left of the pole at s = −a. This integral is thus exactly the same as that in
Eq. (7.1-7) with B = −1 and b = a, so that f₂(t) = −e^{-at}u(−t). Consequently, for
σ < −a, we have

f(t) = e^{-bt}u(−t) − e^{-at}u(−t)     (7.2-7)

We now consider the case for which the RAC of f(t) is σ > −b. We then must
choose σ₀ > −b in Eqs. (7.2-6). For a value of σ₀ that is greater than −b, we have
that the integral for f₁(t) is along a vertical line in the s plane which is to the right of

the pole at s = −b. Thus this integral is exactly the same as that in Eq. (7.1-5) with
A = −1 and a = b, so that f₁(t) = −e^{-bt}u(t). In the integral for f₂(t), the integral is
along a vertical line in the s plane which is also to the right of the pole at s = −a.
Thus this integral is exactly the same as that in Eq. (7.1-5) with A = 1, so that
f₂(t) = e^{-at}u(t). Consequently, for σ > −b, we have

f(t) = −e^{-bt}u(t) + e^{-at}u(t)     (7.2-8)

We note that, depending on the RAC, there are three possible time functions
corresponding to the function F(s) given by Eq. (7.2-5). If the RAC is σ < −a,
then the time function is that given by Eq. (7.2-7). If the RAC is −a < σ < −b, then
the time function is that given by Eq. (7.2-4c). If the RAC is σ > −b, then the time
function is that given by Eq. (7.2-8). For uniqueness of the mapping, the RAC must
be specified! This is why the Laplace transform of any function must include a
specification of the RAC.
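This point matters when symbolic software is used. The sketch below is not from the text; it assumes sympy is available, whose inverse_laplace_transform returns only the time function whose RAC is to the right of all the poles (the third case above, Eq. (7.2-8)); the other two RACs must be handled by hand, for example with Eq. (7.1-7).

    # Concrete case a = 1, b = 2, so F(s) = (b - a)/((s + a)(s + b)) = 1/((s + 1)(s + 2)).
    import sympy as sp

    t = sp.symbols('t', real=True)
    s = sp.symbols('s')

    F = 1/((s + 1)*(s + 2))
    f_causal = sp.inverse_laplace_transform(F, s, t)
    print(sp.simplify(f_causal))   # (exp(-t) - exp(-2*t))*Heaviside(t): Eq. (7.2-8) with a = 1, b = 2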
We shall see in Section 8.3 that an important class of Laplace transforms in
system theory is that in which F(s) can be expressed as the ratio of finite-degree
polynomials:

F(s) = (a_m s^m + a_{m-1}s^{m-1} + ··· + a₁s + a₀)/(b_n s^n + b_{n-1}s^{n-1} + ··· + b₁s + b₀)     (7.2-9)

Such functions are called rational functions. The function F(s) in Eq. (7.2-5) is an
example of such a function in which m = 0 and n = 2. Our determination of the
inverse Laplace transform was facilitated by expressing F(s) as the sum of simple
fractions as in Eq. (7.2-5). This is true even in the general case given above. A
procedure for obtaining such an expression is called partial fraction expansion,
which is discussed in the next section.

7.3 THE PARTIAL FRACTION EXPANSION

An important class of Laplace transforms with which we shall be concerned is that in
which F(s) is a rational function, which is a function that can be expressed as the
ratio of finite-degree polynomials:

F(s) = (a_m s^m + a_{m-1}s^{m-1} + ··· + a₁s + a₀)/(b_n s^n + b_{n-1}s^{n-1} + ··· + b₁s + b₀)     (7.3-1)

where a_m ≠ 0 and b_n ≠ 0. A rational function is called a proper rational function if
the degree of the numerator polynomial, m, is not greater than the degree of the
denominator polynomial, n. That is, a proper rational function is one for which
m ≤ n. If m < n, the rational function is said to be strictly proper. The partial
fraction expansion technique is an algebraic method by which a strictly proper

rational function can be expressed as the sum of simple fractions as in Eq. (7.2-5) so
that the inverse Laplace transform can be determined as in Section 7.2. In our
development, please observe that the partial fraction expansion is an algebraic
identity that is valid for all values of s and not just for values of s in some particular
range.
Before discussing the partial fraction expansion, note that any rational function
for which m 2 n can be expressed as the sum of an (m - n)-degree polynomial plus
a strictly proper rational function by dividing the denominator polynomial into the
numerator polynomial. As an example, consider

F(s) = (6s⁴ + 2s³ + s² + 4s + 5)/(2s² + 4s + 3) = 3s² − 5s + 6 + (−5s − 13)/(2s² + 4s + 3)     (7.3-2a)

For this function, m - n = 4 - 2 = 2 so that the degree of the polynomial is two.


The expression of this function as a second-degree polynomial plus a strictly proper
rational function was obtained by the following division of the denominator poly-
nomial into the numerator polynomial:

                 3s² −  5s  +  6
2s² + 4s + 3 ) 6s⁴ +  2s³ +   s² + 4s + 5
               6s⁴ + 12s³ +  9s²
                    −10s³ −  8s² + 4s + 5     (7.3-2b)
                    −10s³ − 20s² − 15s
                             12s² + 19s + 5
                             12s² + 24s + 18
                                    −5s − 13

Now, for the partial fraction expansion of a strictly proper rational function, we
first must factor the denominator polynomial in Eq. (7.3-1). In accordance with our
discussion of the fundamental theorem of algebra in Appendix A, there are exactly n
roots because the denominator is an nth-degree polynomial. Thus its factored form is

b_n s^n + b_{n-1}s^{n-1} + ··· + b₁s + b₀ = b_n(s − p₁)(s − p₂) ··· (s − p_n)     (7.3-3)

The roots are denoted by p_k for k = 1, 2, . . . , n because they are the poles of F(s). If
the roots of q factors are equal, the root is said to be a qth-order root. For example, in
the fourth-degree polynomial

s⁴ − 10s³ + 38s² − 66s + 45 = (s − 2 − j)(s − 2 + j)(s − 3)(s − 3)
                            = (s − 2 − j)(s − 2 + j)(s − 3)²

the roots at s = 2 + j and s = 2 -j are first-order roots. First-order roots are usually
called simple roots. Because the root at s = 3 occurs twice, it is called a second-
order root. We first describe the partial fraction technique for the case in which all
denominator roots, the poles of F(s), are simple.

Case for Which All Poles Are Simple   The partial fraction expansion of a
strictly proper rational function in which all poles are simple is

F(s) = (a_m s^m + a_{m-1}s^{m-1} + ··· + a₁s + a₀)/(b_n s^n + b_{n-1}s^{n-1} + ··· + b₁s + b₀)
     = c₁/(s − p₁) + c₂/(s − p₂) + ··· + c_n/(s − p_n)     (7.3-4a)

in which the coefficients, c_k, can be determined by the formula

c_k = [(s − p_k)F(s)]_{s=p_k}     (7.3-4b)

The form of the expansion in Eq. (7.3-4a) is obtained by choosing the most general
form for which the least common denominator is that of the given F(s). If the
fractions of the expansion were added together, the numerator of the resulting
rational function would be a polynomial of degree less than n, so that the sum
would be a strictly proper rational function with the desired denominator. The
constants c_k could be determined by choosing them so that the numerator of the
sum is the same as that of the given F(s). However, a better method is to use
Eq. (7.3-4b) to determine them. To show the validity of this equation, consider
the case for k = 1. Then, from Eq. (7.3-4a),

(s − p₁)F(s) = c₁ + (s − p₁)[c₂/(s − p₂) + ··· + c_n/(s − p_n)]     (7.3-5)

The second term on the right-hand side of Eq. (7.3-5) is zero when s = p₁ because
p_k ≠ p₁ for k = 2, 3, . . . , n. Thus we obtain c₁ in accordance with Eq. (7.3-4b) by
letting s = p₁. This establishes the validity of Eq. (7.3-4b) for k = 1. The proof is
similar for any other value of k. As an illustration, consider

F(s) = (3s² + 2s + 1)/[(s + 1)(s + 2)(s + 3)] = c₁/(s + 1) + c₂/(s + 2) + c₃/(s + 3)     (7.3-6a)

In accordance with Eq. (7.3-4b),

c₁ = [(3s² + 2s + 1)/((s + 2)(s + 3))]_{s=-1} = (3 − 2 + 1)/[(1)(2)] = 1     (7.3-6b)

c₂ = [(3s² + 2s + 1)/((s + 1)(s + 3))]_{s=-2} = (12 − 4 + 1)/[(−1)(1)] = −9     (7.3-6c)

and

c₃ = [(3s² + 2s + 1)/((s + 1)(s + 2))]_{s=-3} = (27 − 6 + 1)/[(−2)(−1)] = 11     (7.3-6d)

Thus the partial fraction expansion of the given F(s) is

F(s) = (3s² + 2s + 1)/[(s + 1)(s + 2)(s + 3)] = 1/(s + 1) − 9/(s + 2) + 11/(s + 3)     (7.3-6e)

As a verification of this result, we can add the fractions to obtain

[(s + 2)(s + 3) − 9(s + 1)(s + 3) + 11(s + 1)(s + 2)]/[(s + 1)(s + 2)(s + 3)]
     = (3s² + 2s + 1)/[(s + 1)(s + 2)(s + 3)] = F(s)     (7.3-6f)
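The expansion in Eq. (7.3-6e) can also be checked with a computer-algebra routine. The sketch below is not part of the text and assumes sympy is available.

    import sympy as sp

    s = sp.symbols('s')
    F = (3*s**2 + 2*s + 1)/((s + 1)*(s + 2)*(s + 3))
    print(sp.apart(F, s))   # 1/(s + 1) - 9/(s + 2) + 11/(s + 3)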

Case for Which There Are Higher-Order Poles If some of the poles are not
simple, the procedure described above must be modified because the form given by
Eq. (7.3-4a) is not the most general form for which the least common denominator is
that of the given F(s). To simplify our discussion, we first consider the case in which
there is one nth-order pole and no other poles. We then will extend the partial
fraction expansion technique to include cases in which there are poles of various
orders.
The partial fraction expansion of a strictly proper rational function in which there
is only one nth-order pole and no other poles is

(7.3-7a)

in which the coefficients of the expansion can be determined by the formula

(7.3-7b)

The form of the expansion in Eq. (7.3-7a) is obtained by choosing the most general
form for which the least common denominator is that of the given F(s). If the
fractions of the expansion were added together, the numerator of the resulting
rational function would be a polynomial of degree less than n so that the sum
would be a strictly proper rational function with the desired denominator. The
constants could be determined by choosing them so that the numerator of the sum
is the same as that of the given F(s). A better method to determine the coefficients is
to use Eq. (7.3-7b). The validity if this equation is easily verified by noting from
Eq. (7.3-7a) that

We then note that

This is Eq. (7.3-7b) for k = 0. The derivative of Eq. (7.3-8a) is

from which we note that

This is Eq. (7.3-7b) for k = 1. The second derivative of Eq. (7.3-8a) is

d2
-(s - p J F ( s ) = (n - 2)(n - l ) C l ( S - p I ) n - 3+ (n - 3)(n - 2)c,(s - p l y 4
ds2
+ . . . + c,-3(2)(3)(s -PI) +m , - 2 (7.3-8C)

from which we note that

S=PI

This is Eq. (7.3-7b) for k = 2. By continuing this process, we arrive at the general
form for Eq. (7.3-7b).
To illustrate this method, consider

s2+3
F(s) = ~

+ - Cl
+ +-+-+
c2
(s 4)3 = (s 4) (s 412 (s +c3413 (7.3-9a)

In accordance with Eq. (7.3-7b),

c₃ = [(s + 4)³F(s)]_{s=-4} = [s² + 3]_{s=-4} = 19     (7.3-9b)

c₂ = [(d/ds)(s + 4)³F(s)]_{s=-4} = [2s]_{s=-4} = −8     (7.3-9c)

and

c₁ = (1/2)[(d²/ds²)(s + 4)³F(s)]_{s=-4} = (1/2)[(d²/ds²)(s² + 3)]_{s=-4} = (1/2)[2] = 1     (7.3-9d)

Thus the partial fraction expansion of the given F(s) is

F(s) = (s² + 3)/(s + 4)³ = 1/(s + 4) − 8/(s + 4)² + 19/(s + 4)³     (7.3-9e)

This result can be verified by adding the fractions.

The Partial Fraction Expansion of a General Rational Function   The
partial fraction expansion of a general rational function is obtained by combining the
procedures described above. As an example, consider

F(s) = 3s³/[(s + 1)²(s + 2)]     (7.3-10a)
This rational function is not strictly proper because the degree of the numerator is
m = 3 and the degree of the denominator is n = 3. Thus the given function can be
expressed as a strictly proper rational function plus a polynomial with the degree
m − n = 3 − 3 = 0. The polynomial thus is just a constant, c₀. The strictly proper
rational function can be expanded in fractions. From our discussion above, the
general form of the total expansion of F(s) must be

F(s) = 3s³/[(s + 1)²(s + 2)] = c₀ + c₁/(s + 1) + c₂/(s + 1)² + c₃/(s + 2)     (7.3-10b)

We can determine c₀ by dividing the denominator polynomial into the numerator
polynomial. However, because the expansion is an algebraic identity that is valid for
all values of s, we can use any valid mathematical technique to obtain any of the
coefficients of the expansion. For the determination of c₀, we note in Eq. (7.3-10b)
that

lim_{|s|→∞} F(s) = 3 = c₀     (7.3-10c)

so that c₀ = 3. Also,

(s + 2)F(s) = 3s³/(s + 1)² = (s + 2)[c₀ + c₁/(s + 1) + c₂/(s + 1)²] + c₃     (7.3-10d)

The first term on the right is zero for s = −2. Thus, by letting s = −2 in this
equation, we obtain c₃ = −24. Note that this is the same technique of obtaining
the coefficients for simple poles as discussed above. Now, following the procedure for
higher-order poles, we consider

(s + 1)²F(s) = 3s³/(s + 2) = c₁(s + 1) + c₂ + [c₀ + c₃/(s + 2)](s + 1)²     (7.3-10e)

We then note that

c₂ = [(s + 1)²F(s)]_{s=-1} = [3s³/(s + 2)]_{s=-1} = −3     (7.3-10f)
Also, the derivative of Eq. (7.3-10e) is

(d/ds)(s + 1)²F(s) = (d/ds)[3s³/(s + 2)]     (7.3-10g)

so that

c₁ = [(d/ds)(s + 1)²F(s)]_{s=-1} = [(d/ds)(3s³/(s + 2))]_{s=-1} = 12     (7.3-10h)

Observe that the coefficients corresponding to the second-order pole at s = −1 can
be obtained by the same technique used to determine the coefficients for higher-
order poles as given by Eq. (7.3-7b). Thus we note that Eq. (7.3-7b) is also applicable
in the general case. Therefore the partial fraction expansion for this example is

F(s) = 3s³/[(s + 1)²(s + 2)] = 3 + 12/(s + 1) − 3/(s + 1)² − 24/(s + 2)     (7.3-10i)

The determination of the coefficient c1 by Eq. (7.3-10h) is a bit tedious because it


requires the differentiation of a fraction. However, as stated above, the expansion is a
mathematical identity that is valid for all values of s so that any valid mathematical
technique to obtain any of the coefficients of the expansion can be used. Note that co,

c₂, and c₃ were determined before c₁. We thus could have determined c₁ by starting
with

F(s) = 3s³/[(s + 1)²(s + 2)] = 3 + c₁/(s + 1) − 3/(s + 1)² − 24/(s + 2)     (7.3-10j)

This is an equation with only one unknown. Because this equation is valid for all
values of s, we just need to choose one value of s and solve for c₁. A convenient
value to choose is s = 0, for which we have

0 = 3 + c₁ − 3 − 12

The solution of this equation is c₁ = 12, which is our previous result. As in the


example above, the tedious calculation required by Eq. (7.3-7b) can sometimes be
avoided by making use of the fact that the expansion is a mathematical identity that
is valid for all values of s. The formulas developed for determining the coefficients
will always work. However, it is possible that some or all of the constants of a
specific fraction can be determined by a less tedious technique as illustrated above.
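For numerical work, the same general expansion can be obtained with a residue routine. The sketch below is not from the text; it assumes scipy is available. The routine performs the polynomial division and handles the repeated pole, so for F(s) = 3s³/[(s + 1)²(s + 2)] it should reproduce Eq. (7.3-10i).

    from scipy import signal

    num = [3, 0, 0, 0]       # 3s^3
    den = [1, 4, 5, 2]       # (s + 1)^2 (s + 2) = s^3 + 4s^2 + 5s + 2
    r, p, k = signal.residue(num, den)
    print(r, p, k)
    # Up to ordering, the residues are 12 and -3 at the double pole s = -1 and -24 at
    # the simple pole s = -2; k = [3.] is the constant direct term.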

7.4 CONCLUDING DISCUSSION AND SUMMARY

We have shown in Section 7.1 that the Laplace transform is a one-to-one mapping so
that Eqs. (7.1-3a) and (7.1-3b) are a transform pair. It is for this reason that tables of
Laplace transforms are so useful, as illustrated in Sections 7.1 and 7.2. Often,
extensive tables are not necessary. The reason is that the inverse Laplace transform
often can be determined with a short table by using the partial fraction expansion
and Laplace transform properties.
Table 7.4-1 lists some of the specific Laplace transform pairs that we have
determined. Reference equation numbers are given for some listed transform pairs
so that you can review their determination. Some of the listed pairs are slight
generalizations of those given by the reference equations. Entries with numbers
followed by a letter are special cases of the entry with the same number without a
letter. In the table, z = a +jb.
Table 7.4-2 lists some of the specific Laplace transform properties that were
shown in Section 6.3. In the table, the Laplace transform offi(t) is F,(s) with the
RAC (To < (T < Ob.
We'll illustrate the technique for determining the inverse Laplace transform with a
few examples from which we shall draw some important general conclusions.

Example 1 For our first example, we determine the inverse Laplace transform of

F(s) = 3s³/[(s + 1)²(s + 2)],   −2 < σ < −1     (7.4-1)

TABLE 7.4-1 Some of the Specific Laplace Transform Pairs That We Have Determined

No.   f(t)                         F(s)                                         RAC             Ref. Eq.
1     δ(t)                         1                                            −∞ < σ < ∞
2     t^{n-1}e^{-zt}u(t)           (n − 1)!/(s + z)ⁿ,  n = 1, 2, 3, . . .        σ > −a          6.3-32
2a    e^{-zt}u(t)                  1/(s + z)                                    σ > −a          6.1-7
2b    u(t)                         1/s                                          σ > 0
3     t^{n-1}e^{-zt}u(−t)          −(n − 1)!/(s + z)ⁿ,  n = 1, 2, 3, . . .       σ < −a
3a    e^{-zt}u(−t)                 −1/(s + z)                                   σ < −a          6.1-13
3b    u(−t)                        −1/s                                         σ < 0
4     e^{-at}cos(ω₀t + φ)u(t)      [(s + a)cos φ − ω₀ sin φ]/[(s + a)² + ω₀²]    σ > −a          6.1-20
4a    e^{-at}cos(ω₀t)u(t)          (s + a)/[(s + a)² + ω₀²]                     σ > −a          6.1-21
4b    e^{-at}sin(ω₀t)u(t)          ω₀/[(s + a)² + ω₀²]                          σ > −a          6.1-22

TABLE 7.4-2 Some of the Specific Laplace Transform Properties Shown in Section 6.3

No.   Property                     f(t)                     F(s)                   RAC
1     Linearity                    C₁f₁(t) + C₂f₂(t)        C₁F₁(s) + C₂F₂(s)      overlap of the RACs of f₁(t) and f₂(t)
2     Scaling                      f₁(ct)                   (1/|c|)F₁(s/c)         σ_a < σ/c < σ_b
3     Time shift                   f₁(t − t₀)               F₁(s)e^{-st₀}          σ_a < σ < σ_b
4     Frequency shift              e^{s₀t}f₁(t)             F₁(s − s₀)             σ_a + σ₀ < σ < σ_b + σ₀
5     Convolution                  f₁(t) * f₂(t)            F₁(s)F₂(s)             overlap of the RACs of f₁(t) and f₂(t)
6     Time differentiation         f₁′(t)                   sF₁(s)                 σ_a < σ < σ_b
7     Frequency differentiation    tf₁(t)                   −dF₁(s)/ds             σ_a < σ < σ_b

For this, we first require the partial fraction expansion of F(s). This was determined
in Section 7.3. We have from Eq. (7.3-10i)

F(s) = 3 + 12/(s + 1) − 3/(s + 1)² − 24/(s + 2),   −2 < σ < −1     (7.4-2a)

Thus, using the technique discussed in Section 7.2 and Table 7.4-1, we have

f(t) = 3δ(t) − 12e^{-t}u(−t) + 3te^{-t}u(−t) − 24e^{-2t}u(t)
     = 3δ(t) + (3t − 12)e^{-t}u(−t) − 24e^{-2t}u(t)     (7.4-2b)

Note that the exponential, e^{-t}, is multiplied by a first-degree polynomial in t. With
the use of pairs 2 and 3 of Table 7.4-1, we note that, generally, the inverse transform
of a function with a pole of order k at s = −a is the exponential e^{-at} multiplied by a
(k − 1)-degree polynomial in t.

Example 2   For our second example, we shall use the Laplace transform properties
to determine the inverse transform of

F(s) = (s + a)/[(s + a)² + ω₀²],   σ > −a     (7.4-3)

This is pair 4a of Table 7.4-1. However, we shall obtain the inverse transform of this
function by a circuitous route in order to illustrate how the transform properties can
be used to manipulate a function. We first note that by replacing (s + a) with s, we
obtain

F₁(s) = s/(s² + ω₀²),   σ > 0     (7.4-4a)

so that, by use of property 4, we have

f(t) = e^{-at}f₁(t)     (7.4-4b)

Thus we just need to determine f₁(t). For this, we could expand F₁(s) by partial
fractions. However, to further illustrate the uses of the properties, note that
F₁(s) = sF₂(s), where

F₂(s) = 1/(s² + ω₀²),   σ > 0     (7.4-5a)

so that, by property 6,

f₁(t) = f₂′(t)     (7.4-5b)

We now determine f₂(t) by a partial fraction expansion of F₂(s). Using the techniques
discussed in Section 7.3, we obtain

F₂(s) = 1/(s² + ω₀²) = 1/[(s + jω₀)(s − jω₀)] = [−1/(j2ω₀)]·1/(s + jω₀) + [1/(j2ω₀)]·1/(s − jω₀),   σ > 0     (7.4-6a)

Using the technique discussed in Section 7.2 and transform pair 2a, we then obtain

f₂(t) = [−1/(j2ω₀)]e^{-jω₀t}u(t) + [1/(j2ω₀)]e^{jω₀t}u(t) = (1/ω₀)sin(ω₀t)u(t)     (7.4-6b)

The real form for f₂(t) is obtained by using the identity derived in Appendix A,
Eq. (A-15). We then obtain from Eq. (7.4-5b)

f₁(t) = f₂′(t) = cos(ω₀t)u(t)     (7.4-7)

Note that the derivative, f₂′(t), does not have an impulse at t = 0 because f₂(t) is
continuous there. We now use Eq. (7.4-4b) to obtain

f(t) = e^{-at}cos(ω₀t)u(t)     (7.4-8)

This is the time function given by transform pair 4a.
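A direct check of transform pair 4a is sketched below. It is not from the text and assumes sympy is available.

    import sympy as sp

    t, s = sp.symbols('t s', positive=True)
    a, w0 = sp.symbols('a omega_0', positive=True)

    F = sp.laplace_transform(sp.exp(-a*t)*sp.cos(w0*t), t, s, noconds=True)
    print(sp.simplify(F - (s + a)/((s + a)**2 + w0**2)))   # 0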

Example 3   To illustrate a very important property of rational functions, we
determine, for our third example, the inverse Laplace transform of

F(s) = (3s² + 2s + 1)e^{st₀}/[(s + 1)(s + 2)(s + 3)],   σ > −1     (7.4-9)

This function is not a rational function. However, observe from property 3 that if we
eliminate the exponential by defining the rational function

F₁(s) = (3s² + 2s + 1)/[(s + 1)(s + 2)(s + 3)],   σ > −1     (7.4-10a)

then

f(t) = f₁(t + t₀)     (7.4-10b)

To determine f₁(t), we first obtain the partial fraction expansion of F₁(s). This was
determined in Section 7.3. From Eq. (7.3-6e),

F₁(s) = 1/(s + 1) − 9/(s + 2) + 11/(s + 3),   σ > −1     (7.4-11a)

Thus, using the technique discussed in Section 7.2 and also using transform pair 2a,
we have

f₁(t) = [e^{-t} − 9e^{-2t} + 11e^{-3t}]u(t)     (7.4-11b)

By use of Eq. (7.4-10b), we then have

f(t) = f₁(t + t₀) = [e^{-(t+t₀)} − 9e^{-2(t+t₀)} + 11e^{-3(t+t₀)}]u(t + t₀)     (7.4-12)

Note that f₁(t) = 0 for t < 0 but f(t) ≠ 0 for t < 0 if t₀ > 0. We showed in
Section 6.2 that if f(t) = 0 for t < 0, then the RAC is to the right of every pole.
We also showed by a counterexample that f(t) is not necessarily equal to zero for
t < 0 if the RAC is to the right of every pole. The function f(t) given by Eq. (7.4-12)
also is such an example. That is, as we showed in Section 6.2, a necessary but not
sufficient condition that f(t) = 0 for t < 0 is that the RAC be to the right of every
pole.
However, if F(s) is a proper rational function and the RAC is to the right of
every pole, then f(t) = 0 for t < 0. The reason is that then the inverse transform of
each of the terms of the partial fraction expansion is zero for t < 0, in accordance
with transform pairs 1 and 2. The function f₁(t) given by Eq. (7.4-11b) is such an
example. Thus, we note that if F(s) is a proper rational function, so that m ≤ n in
Eq. (7.3-1), then the inverse Laplace transform, f(t), equals 0 for t < 0 if and only if
the RAC is to the right of every pole.

PROBLEMS

7-1 For each function given below, determine all possible RACs and the inverse
Laplace transform for each possible RAC.
(a) F_a(s) = 1/(s + 2) + 1/(s + 3)
(b) F_b(s) = 1/(s − 2) + 1/(s + 3) + 1/(s − 5)
7-2 For each function given below, determine all possible RACs and the inverse
Laplace transform for each possible RAC.
(a) F,(s) = [s:-
2 +
s+3
1
-le-'.
e-' 8
(b) F~(s)
=- -
s+2 s+3
+
7-3 For each function given below, determine all possible RACs and the inverse
Laplace transform for each possible RAC.
1 1
(a) F,(s) =
+ + +
(s 2)
~

(s 312
228 THE INVERSE BILATERAL LAPLACE TRANSFORM

7-4 Determine the inverse Laplace transform of the following functions:


1
(a) CS > -2
+
= (s 2)(s + 3)
s+ 1
CS > -2
+
@) Fb(s) = (s 2)(s + 3)

7-5 Determine the inverse Laplace transform of each of the following functions:

7-6 Determine then inverse transform of F(s) = e'', -co < cs < 00.

7-7 Determine inverse Laplace transform of


(a) Fu(s)= cosh(zs), -00 < cs < 00
(b) Fb(s)= sinh(zs), -co < cs co

7-8 A simplified form of a transform that arises in feedback systems with delay is
F(s) = 1/(1 - e-'). cs > 0. In this problem, we illustrate one method for
determining the inverse transform, f ( t ) .
(a) We first determine the location of the poles. They are at those values
of s for which e-' = 1. To solve this equation, we have e-' =
e-(.+Jo) = e-'e-JW = 1. Show that the solutions of this equation are
s =j2kn for k = 0, f l , f 2 , . . . so that there are infinitely many poles
uniformly distributed along the o axis and the RAC is to the right of all
the poles.
(b) To determine the inverse transform, we expand F(s) in a power series.
Show that the power series expansion is
03 00

F(s) = (e-')" = e-"', CJ >0


n=O n=O
00

(c) Thus show thatf(t) = d ( t - n).


n=O
7-9 The system function of an LTI system is
,n + e2r7
H(s)= ' CS> -1
(s + 3)(s + 2)(s +
For what values of T is the given system causal?
PROBLEMS 229

7-10 Use pair 4b of Table 7.4-1 and property 7 of Table 7.4-2 to obtain the Laplace
transform o f f ( t )= te-'' sin(o,t)u(t).

7-11 Determine the inverse Laplace transform of F(s) =


s[(s +s +2)*2 + 41, o > o .
7-12 Determine the inverse Laplace transform of F(s) =
s2 + 2s + 1 > 0.
(s2 + l)(s + 3)' CJ

7-13 For each Laplace transform given below, do the following:


(a) Determine all possible RACs.
(b) For which possible RAC could the corresponding time function be the
unit-impulse response of a causal but not necessarily stable LTI system?
(c) For which possible RAC could the corresponding time function be the
unit-impulse response of a stable but not necessarily causal LTI system?
(d) For which possible RAC could the corresponding time function be the
unit-impulse response of a causal and stable LTI system?
S
1. F,(s) =-
s+2
S
2. F2(s) = -
s-2

s-1
4. F4(s)=
s[(s + 212 + 41
CHAPTER 8

LAPLACE TRANSFORM ANALYSIS OF


THE SYSTEM OUTPUT

8.1 THE LAPLACE TRANSFORM OF THE SYSTEM OUTPUT

The time-domain relation between the output, y(t), of an LTI system and its input,
x ( t ) , was derived and analyzed in Chapters 2 and 3. It was shown there that the
output, y(t), is equal to the convolution of the input, x ( t ) , with the system unit-
impulse response, h(t):
00

*
y(t) = h(t) ~ ( t=) (8.1- 1a)

It was also shown that convolution is commutative so that also

~ ( t=) ~ ( t* )h(t) = (8.1- 1b)

By use of the convolution property of the Laplace transform, transform property 5 of


Table 7.4-2, the transform of Eq. (8.1-1) is

and this equation is valid only for 0 in the overlap of the RACs ofy(t), h(t), and x(t).
The Laplace transform of h(t), H(s), is called the system function of the given
system. Recall that H ( j w ) is the transfer function while H(s) is the system function
of the given system. We can let s = j w in the system function, H(s), to obtain the
transfer function, H ( j w ) , only if the w axis, 0 = 0, lies in the RAC of h(t). Observe
that the relation between the input and output is a simple algebraic expression in the
231
232 LAPLACE TRANSFORM ANALYSIS OF THE SYSTEM OUTPUT

frequency domain. Because algebraic expressions are easily manipulated, we can


perform many types of system analysis in the frequency domain much more easily
than in the time domain. We illustrate this with some examples.

Example 1 We begin with a simple example of determining the output of a known


system with a given input. We determine the output of an LTI system with the unit-
impulse response

h(t) = d ( t ) - e-‘u(t) (8.1-3a)

to the input

The system function, H(s), is the Laplace transform of h(t). From Table 7.4-1, we
have

1 S
H ( s ) = 1 --=- fJ> -1 (8.1-4a)
s+l s+l’

and

-E E -4E
X ( s ) = -+ -- -2<0<2 (8.1-4b)
s-2 s+2-(s-2)(s+2)’

The details of the determination of X ( s ) also can be obtained from Example 4 in


Section 6.1 with a = 2 and b = -2. Thus, from Eq. (8.1-2) we have

Y(s) = - 4Es
(s - 2)(s +
2)(s + 1) ’ -l<CJ<2 (8.1-5a)

The RAC of Y(s) is determined in the following manner: First. in accordance with
the interval property of the RAC, it must be an interval between the poles of Y ( s ) so
that the only possible choices are f~ < -2, -2 < cr < - 1, - 1 < f~ < 2, and cr > 2.
Second, in accordance with Eq. (8.1-2), the RAC ofy(t) must be chosen so that there
is an overlap of the RACs of x(t), y(t), and h(t). The only interval that satisfies these
conditions is the one given in Eq. (8.1-5a). We now determine y(t) as discussed in
Section 7.4. The partial fraction expansion of Y(s) is

2 1
Y(s) = --E-
3 s-2
+ 2E-s +12 - -E-,
4
3 s+l
1
-1<a<2 (8.1-5b)

so that, from transform Table 7.4-1, we have

(8.1-6)
8.1 THE LAPLACE TRANSFORM OF THE SYSTEM OUTPUT 233

In this example, the input, x(t), isfd(t) in Example 4 of Section 6.1 with a = 2 and
b = - 2 . It was shown in Section 6.1 that the Laplace transform of f d ( t ) does not
exist if a p b. Thus we could not determine y(t) as in this example if the system
input, x ( t ) , werefd(t) with a I6. However, for such cases, note that from Eq. (6.1-
24) we can expressfd(t) as the sum

With the use of superposition, the LTI system response can then be expressed as

where y,(t) is the system response tof,(t) and yb(t) is the system response tofb(t).
From Eq. (8.1-2) we then could use Laplace transforms to determine

Y,(s) = H(s)F,(s) and Y&) = H(s)F,(s) (8.1-7~)

The inverse Laplace transforms of each of these expressions then can be determined
as in the example above to obtain y,(t) and yb(t).The total output, y(t), then is the
sum of the two functions.
To illustrate this technique, let the input of the given system be

x ( t ) = e-"u(-t) + e-"'u(t), where a p b 1 (8.1-8)

We'll obtain the general expression for y(t) by the procedure described above and
then choose specific values for a and b. For this case,

f , ( t ) = e-"'u(t) and fb(t) = ePbtu(-t) (8.1-9)

so that from the results of Section 6.1 we obtain

1 -1
F,(s) = ~,
s+a
0 > -a, Fb(4 = s+b 1 CJ < -b (8.1-10)

Using H(s) given by Eq. (8.1-4a), we then have

s
1
Y,(s) = H(s)F,(s) = ~~ CJ > -a (8.1-1 la)
s+ ls+a'

and

s -1
Y,(s) = H(s)F,(s) = ~~ - 1 < CJ < -b (8.1-1 lb)
s+ls+b'
234 LAPLACE TRANSFORM ANALYSIS OF THE SYSTEM OUTPUT

Expanding the above expressions by partial fractions and using Table 7.4-1, we
obtain

1
y,(t) = ~ [ e- d (8.1-12a)
1-a

and

(8.1-12b)

Thus the total system response is

4
If a = 0 and b = (Remember, this solution is valid only for a 5 b < 1) we have

For the special case in which a = b = 0 so that x(t) = 1 for all t, note that the system
response is y(t) = 0. The reason is that the system dc gain is zero because H ( 0 ) = 0.

Example 2 As our second example, we choose a problem type that occurs in


measurement. Consider the problem of measuring the pressure as a function of time
in a pressure boiler. For this, a recording pressure gauge is inserted in the vessel wall.
The recorded output is not exactly the pressure variation within the vessel because
the recording pressure gauge is somewhat sluggish. The problem then is to
determine the true pressure variation within the vessel from the recorded output.
For this, we must know the input-output relation of the gauge. The usual recording
pressure gauge can be modeled as an LTI system so that we must know its unit-
impulse response. However, because a good approximation of a pressure impulse is
difficult to generate, rather than attempt to measure the gauge impulse response, we
determine the gauge response to a step change in pressure. As discussed in Section
3.2, even if a narrow pulse of pressure were generated, a very large amplitude of the
pressure pulse would be required to obtain an accurate graph of the gauge reading.
However, the large required pressure amplitude would drive the gauge into regions
of nonlinearity for which the linear model of the gauge would not be valid. Thus,
before inserting the recording pressure gauge in the vessel, we experimentally
determine its responses to a step change of pressure.
For example, let the experimental input be

x , ( t ) = Pu(t) (8.1- 15a)


8.1 THE LAPLACE TRANSFORM OF THE SYSTEM OUTPUT 235

and the corresponding observed output be

yl(t) =p[i - e-3qu(t) (8.1- 15b)

This characterizes the recording gauge. Now, while in operation, let the observed
output pressure recording be

y(t) = 3[1 - e-‘]u(t) (8.1- 15 ~ )

This pressure reading starts at zero and gradually rises to three units. What was the
actual pressure variation within the vessel? For this, we have from Eq. (8.1-2) that

(8.1- 16a)

Now, the transform ofy(t) is, from Table 7.4-1,

3 3 3
Y(s) = - - = ~ ~
o>o (8.1- 16b)
s s+l s(s+l)’

We use Eq. (8.1-2) to determine H(s), because

(8.1- 17a)

From Eqs. (8.1- 15) and with the use of Table 7.4-1, we have

P P 3P
Y1(s)=---=- a>o (8.1- 17b)
s s+3 s(s+3)’

and

P
X,(s) = - , a>0 (8.1- 17 ~ )
S

so that

3P s 3
H(s) = -- a > -3 (8.1- 17d)
s(s + 3 ) P -- s + 3 ’-

As in the first example, the RAC was chosen so that the RACs of the three functions
overlap. Finally we have

Y(s) 3 s+3 - s+3


X(s)=---- - o>o (8.1- 1sa)
H(s) s(s + 1) 3 s(s 1)’
~

+
236 LAPLACE TRANSFORM ANALYSIS OF THE SYSTEM OUTPUT

Again, the RAC was chosen so that the RACs of the three functions overlap. The
partial fraction expansion of X ( s ) is

3 2
X(s)=- -- 0>0 (8.1- 18b)
s s t l '

so that from Table 7.4-1 we have

x ( t ) = [3 - 2e-']u(t) (8.1- 19)

In contrast to y(t), the observed gauge reading, we observe that x(t), the actual
pressure within the vessel, really started at one unit and gradually rose to three units.

Example 3 As our third example, we choose a problem type that occurs in system
identification. The response of a given LTI system is observed to be

y(t) = eP2'u(t) (8.1-20a)

to the input

x(t> = [2e-3' - e-']u(t) (8.1-20b)

What is the unit-impulse response to the given system? In the time domain, we are
required to determine h(t) given x(t) and y(t) in Eq. (8.1-1). A problem of this type is
called an integral equation because the unknown function, h(t), is part of the inte-
grand just as a differential equation is one in which the unknown function is differ-
entiated. Integral equations are often difficult to solve. However, integral equations
of the convolution type with which we are concerned are not difficult to solve
because we can determine the unit-impulse response of a given LTI system by
using Eq. (8.1-2). First, the Laplace transforms of the input and the output are

2 1 - s+l
X ( s ) = -- - D > -2 (8.1-2 la)
s + 3 s+2-(s+2)(s+3)'

and

1
Y(s) = - D > -2 (8.1-2 1b)
s+2'

so that, from Eq. (8.1-2), we obtain

Y(s) = s
H(s) = - + 3 = 1 +- 2 (8.1-22)
X(s) s
~

+1 s+l
8.2 CAUSALITY AND STABILITY IN THE s-DOMAIN 237

The problem now is to choose the RAC of h(t) so that its RAC overlaps that of x(t)
and y(t). Note that only two possible RACs of h(t) are a < -1 and a > -1. If we
choose a < - 1, then there is the overlap -2 < rs < - 1. If we choose a > - 1, then
there is the overlap a > - 1. Thus, there is an overlap with either choice. This means
that either choice is mathematically valid! If we choose the RAC to be u < - 1, the
unit-impulse response would be

h,(t) = d(t) - 2e-'u(-t) (8.1-23a)

If we choose the RAC to be a > - 1, the unit-impulse response would be

h2(t) = d(t) + 2e-'u(t) (8.1-23b)

Either one of these systems would produce the given output for the given input.
Note that, for other inputs, the outputs of the two systems would differ so that
we could determine which of the two possibilities is the given system by observing
its response to other inputs. However, the given data are all that are available in our
problem. Thus the choice must be made using other considerations. For example, in
accordance with our discussion of causality in Section 3.5, we can be well-assured
that the system is causal if it is a physical one. Thus, if the unknown system is
known to be a physical system, we would choose the system unit-impulse response
to be h2(t) given by Eq. (8.1-23b).

8.2 CAUSALITY AND STABILITY IN THE S-DOMAIN

We observed in the last section that many types of analyses concerning LTI systems
are more easily done using Laplace transforms. A niajor concern in such analysis is
causality and stability. In this section, results we already obtained will be used to
analyze the causality and stability of an LTI system only from its system function,
H(4.

8.2A Causality
We showed in Section 3.5 that an LTI system is causal if and only if its unit-impulse
response, h(t), is equal to zero for t < 0. To examine what this condition imposes on
H ( s ) , we use the result obtained in Section 6.2, which was that if a functionf(t) = 0
for t < 0, then the RAC off(t) is to the right of all the poles of its transform, F(s).
However, we showed there that the converse is not necessarily true. That is, it is not
necessarily true that if the RAC off(t) is to the right of all the poles of F(s), then
f(t) = 0 for t < 0. From this, we immediately have the result: A necessary (but not
suficient) condition that an LTI system be causal is that the RAC of h(t) be to the
right of all the poles of H(s).
An important case that we shall discuss later is that for which H(s) is a rational
function. For that case we refer to our discussion in Example 3 of Section 7.4, where
238 LAPLACE TRANSFORM ANALYSIS OF THE SYSTEM OUTPUT

it was shown that if F(s) is a proper rational function, then its inverse transform,f(t),
equals 0 for t < 0 if and only if the RAC off(t) is to the right of every pole. From
this we immediately have the following result:

If the system function, H(s), is a proper rational function, then the LTI system is causal
if and only if the RAC is to the right of every pole of H(s).

For the case in which H(s) is a rational function but not a proper rational func-
tion, we can express H(s) as a polynomial plus a strictly proper rational function as
discussed in Section 7.3. For example, let H(s) be the rational function given by Eq.
(7.3-2a) so that

bs4 + 2 2 + s2 + 4s + 5 = 3s2 - 5s + 6 + 13s+ 13 (8.2- 1)


H(s) = 2s2 + 4s + 3 2s2 + 4s + 3

We then have

Y(s) = H(s)X(s) = 3siX(s) - 5sX(s) + 6X(s) + 2s213s++4s13+ 3 X ( s ) (8.2-2)

By use of the Laplace transform differentiation property, property 6 of Table 7.4-2,


d2 d
we note that the inverse transform of the first two terms is 3 - x ( t ) - 5 -x(t).
dt2 dt
These terms are not operations on the future of the input, x(t), so that the system
is causal. We thus note that:

If the system function, H(s), is a rational function (not necessarily proper), then a
necessary and sufficient condition that the system be causal is that the RAC be to the
right of every pole of H(s). If H(s) is not a rational function, then the RAC being to the
right of every pole is a necessary but not a sufficient condition for the LTI system to be
causal.

8.2B Stability
We showed in Section 3.6 that an LTI system is BIBO-stable if and only if h(t) is an
L , function. That is, if and only if

(8.2-3)

From the L , property of the RAC discussed in Section 6.2, we immediately have the
following result:

An LTI system is BIBO-stable if and only if, in the s plane, the o axis lies in the RAC
of h(t)
8.2 CAUSALITY AND STABILITY IN THE %DOMAIN 239

To understand this result in more depth, recall our discussion in Section 3.6,
where it was shown that if the output of a given system contains the derivative of
the input, then that system is not BIBO-stable. The reason was based on the discus-
sion in Section 3.3, where it was shown that the derivative of a bounded waveform
that is discontinuous at some instant contains an impulse at that instant. Because the
impulse is not bounded, this would be a bounded input waveform for which the
output is not a bounded waveform.
With this in mind, let us consider the important special case in which H ( s ) is a
rational function. For example, let H(s) be the rational function given Eq. (8.2-1).
The Laplace transform of the output is then given by Eq. (8.2-2). As discussed
d2 d
above, the inverse transform of the first two terms is 3 -x(t) - S-x(t). If the
dt2 dt
input is the bounded waveform x(t) = sin(wot)u(t), then the output due to these
+
terms is 3w06(t) - [3wi sin(wot) 5w0 cos(oot)]u(t), which is unbounded due to
the impulse. We can see that, generally, the system output will contain terms that are
the derivative of the system input if the degree of the numerator polynomial of H(s)
is greater than that of the denominator polynomial. In such a case there always is an
input for which the output contains an impulse as in our example. We thus observe
that the system is not stable if the degree of the numerator polynomial exceeds that
of the denominator polynomial. We thus conclude the following:

If H ( s ) is a rational function, then the LTI system is BIBO-stable if and only if it is a


proper rational function (so that the degree of the numerator polynomial is less than or
equal to that of the denominator polynomial) and the w axis lies in the RAC of h(t).

To illustrate these results, consider the system function

S2
H(s) = (8.2-4)
(s - 2)(s + 3)
This is a proper rational function with a second-order zero at s = 0, a pole at s = 2,
and a pole at s = - 3 . In accordance with the interval property of the RAC, there are
three possible RACs: CJ > 2, -3 < CJ < 2, and CJ < -3. Let us consider the causality
and stability of this system for each of these cases.

1. If the RAC were CJ > 2, the system would be causal because the RAC would
be to the right of both poles. However, the system would not be stable because
the RAC does not contain the w axis (which is the line CJ = 0).
2 . If the RAC were -3 < CJ < 2, the system would not be causal because the
pole at s = 2 is to the right of the RAC. However, the system would be stable
because the RAC would contain the w axis and H(s) is a proper rational
function.
3 . If the RAC were G < -3, the system would not be causal because there are
poles to the right of the RAC. Also, the system would not be stable because
the w axis would not be in the RAC.
240 LAPLACE TRANSFORM ANALYSIS OF THE SYSTEM OUTPUT

We thus note that the LTI system with the system function given by Eq. (8.2-4)
cannot be both causal and stable. For it to be both causal and stable, we would
require all the poles to be to the left of the RAC and would also require the RAC to
contain the o axis. But both these conditions can be satisfied only if all poles lie in
the left half of the s plane. We thus observe the following:
For the case in which H(s) is a rational function, the system is both stable and
causal if and only if H(s) satisfies the following conditions:

1. It is a proper rational function.


2. The RAC is to the right of all its poles.
3. All its poles are in the left half of the s plane.

The first requirement arises from stability considerations because, as discussed


above, it assures us that no bounded input waveform will produce an output that
contains an impulse. The second requirement is, as we discussed, a necessary and
sufficient condition for causality. The third condition assures us that the system is
stable because the region to the right of all the poles contains the o axis.

The System lnverse The system inverse was defined in Chapter 1, Section 1. It
was shown there that, for any system, a system inverse exists if and only if the
system mapping is one-to-one. Unfortunately, this condition is not very useful in
determining a system inverse. In practice, we are concerned not only with the
existence of an inverse, but also with determining the inverse if one exists. This
determination can be done for LTI systems with the results we now have obtained.
For this, consider Figure 8.2-1. In the figure, the system function of the given system
is H(s). It is connected in tandem with its system inverse with the system function
G(s). Observe that the tandem connection is an LTI system with an output equal to
its input. Thus the unit-impulse response of this system is d ( t ) , for which the Laplace
transform is 1 and the RAC is all values of CJ. Therefore the system function of the
tandem connection is

H(s)G(s)= 1 (8.2-5)

This equation is valid for all values of 0 which lie in the overlap of the RACs of H(s)
and G(s). The algebraic solution of this equation for G(s) is

(8.2-6)

; x(t
I y(fy G(s)
,........................................................................... !

Fig. 8.2-1 Tandem connection of an LTI system and its inverse.


8.3 LUMPED PARAMETER SYSTEMS 241

Consider the case for which H(s) is a rational function. Clearly then, G(s) is a
rational function. From our previous discussion, the system inverse is both stable
and causal if and only if G(s) satisfies the following conditions:

1. It is a proper rational function.


2. The RAC is to the right of all its poles.
3. All its poles are in the left-half of the s plane.

Because G(s) is the reciprocal of H(s), we note that the poles of G(s) are the zeros
of H(s) and the zeros of G(s) are the poles of H(s). Thus we can translate the
conditions for the causality and stability of the system inverse to conditions on
the given system. For example, if the given system is causal and stable, then a
causal and stable system inverse exists if and only if H(s) satisfies the following
conditions:
1. H(s) has the same number of zeros as poles (because both H(s) and G(s) must
be proper rational functions).
2. All the poles of H(s) are in the left half of the s plane (because the given
system is causal and stable).
3. All the zeros of H(s) are also in the left half of the s plane (because these are
the poles of G(s) which must be in the left half of the s plane in order for the
system inverse to be both causal and stable).

System functions with all its poles and zeros located in the left half of the s plane
are called minimum-phase functions, which we discuss in Section 9.3. They are
important not only in relation to system inverses, but also, as we discuss in Section
8.4, in relation to impedance functions.

8.3 LUMPED PARAMETER SYSTEMS

The reason for our specific interest in rational functions is that the system function of
any lumped parameter LTI system is a rational function. Such systems are an
important class that we discuss in this section.
There are two major classes of LTI systems for which the input and output can be
related by a differential equation: lumped parameter systems and distributed para-
meter systems. In mathematical terms, an LTI lumped parameter system is one for
which the input, x(t), and output, y(t), can be related by a total differential equation
with constant coefficients. The differential equation relating the input, x(t), and
output, y(t), of an LTI distributed parameter system is a partial differential equation
with constant coefficients. This is the essential mathematical difference. Physically, a
lumped parameter system is one in which each of the system elements is located in a
given place (Le., the system elements are lumped). An example of an LTI lumped
parameter system is an electric network that is composed of linear inductors,
resistors, and capacitors whose values do not vary with time. The node and loop
242 LAPLACE TRANSFORM ANALYSIS OF THE SYSTEM OUTPUT

equations of the network are total differential equations with constant coefficients
whose values are determined by the network elements. Another example is a
mechanical system composed of masses, springs, and dash pots whose values do
not vary with time.
A distributed parameter system is one in which the elements are not located in
one place but, rather, are distributed throughout the space occupied by the system.
An example of an LTI distributed parameter system is an electric transmission line.
The resistance, inductance, and capacitance of the transmission line are distributed
along the transmission line wires. The system model of electromagnetic wave propa-
gation in free space is a distributed parameter system. Similarly, the system model of
sound propagation in air or water is a distributed parameter system because the
physical parameters that determine the sound wave propagation are distributed
throughout the air or water.
We shall concentrate on lumped parameter systems. In our discussion, some of
the essential differences of the characteristics of the two types of systems will be
pointed out. As stated above, the input, x(t), and output, y(t), of a lumped parameter
LTI system can be related by a total differential equation with constant coefficients:

The coefficients are real numbers whose values are determined by the lumped
parameters of the LTI system. The determination of the output, y(t), for all time
obviously requires knowledge of the input for all time. If the input is only known
after some time instant, to, and not before to, then the output can be determined for
t > to only if the system initial conditions are known. In this section we shall use
Laplace transforms to analyze the case in which the input is known for all time. In
Section 8.5, we'll use the Laplace transform to analyze the case in which the input is
only known after some time instant.
To obtain the Laplace transform of Eq. (8.3-1) for the case in which x ( t ) is
known for all time, we first note from property 6 in Table 7.4-2 that the RAC
of the derivatives of x ( t ) is at least that of x(t). Thus the RAC of each term of the
right-hand side of Eq. (8.3-1) overlaps so that, from property 1 of Table 7.4-2, the
Laplace transform of the right-hand side of Eq (8.3-1) is equal to the sum of
the Laplace transforms of each term with the RAC being that of x(t). Similarly,
the Laplace transform of the left-hand side of Eq. (8.3-1) is equal to the sum of the
Laplace transforms of each of its terms, with the RAC being that of y(t). Now, the
left-hand side of Eq. (8.3-1) is equal to its right-hand side so that the Laplace
transform of its left-hand side is equal to that of its right-hand side. Thus, with
the use of property 6 in Table 7.4-2, the Laplace transform of Eq. (8.3-1) is

b,s"Y(s)+ b"-,s"-'Y(s) + . . . + b,sY(s) + b,Y(s) (8.3-2a)


= a,s"X(s) + am-ls"-~X(s)+ . . . + a1sX(s)+ a(J(s)
8.3 LUMPED PARAMETER SYSTEMS 243

with cs in the overlap of that ofx(t) andy(t). This is an algebraic equation that we can
factor

[b,s" + bn-lsn-l+ . . . + b l s + bo]Y(s) (8.3-2b)


+ arn-Ism-l+ . . . + a l s + a o ~ ( s )
= [urnsrn

and solve for Y(s)

Y(s)=
+ + + + -w
urnsrn arn-l~rn-l . . . a l s a.
(8.3- 2 ~ )
b,s" +b,-ls"-' + +
. . . + bls bo
with cs in the overlap of the RACs of x ( t ) and y(t). This equation is in the form

where

H(s) =
+
urnsrn arn-lsrn-l+ . . . + a,s + a o
(8.3-3b)
b,s" + +
b,-l~"-' . . . + bls + bo

The inverse transform of Eq. (8.3-3a) is, from transform property 5 of Table 7.4-2,

YO) = h(t) * 4 4 (8.3-4)

where H(s) is the Laplace transform of h(t) so that H(s) is recognized as the system
function of the lumped parameter LTI system. We thus observe from Eq. (8.3-3b)
that the system function of any lumped parameter LTI system is a rational function.
The denominator polynomial of H ( s ) is the coefficient polynomial of Y(s), and the
numerator polynomial is the coefficient polynomial of X(s). Consequently,

The poles of H ( s ) are the roots of the coefficient polynomial of Y(s), and the zeros of
H ( s ) are the roots of the coefficient polynomial of X(s).

Note that if H ( s ) is known, then the differential equation relating x(t) and y(t) can be
obtained by starting with Eqs. (8.3-2c), cross-multiplying to obtain Eq. (8.3-2b), and
then using Eq. (8.3-2a), from which the differential equation, Eq. (8.3-l), is
obtained. In this manner, we not only can obtain H ( s ) from the differential equation,
but we also can obtain the differential equation from H(s). Observe that the system
function, H(s), is a rational function if and only if the LTI system is a lumped
parameter system. To determine the RAC of H(s), we must know something
about the system being analyzed. For example, if the system is known to be
causal, then, from our discussion in the previous section, the RAC must be to the
right of all the poles.
In general, the Laplace transform of an impulse response is not necessarily a
rational function. Consequently, because the system function of a lumped parameter
244 LAPLACE TRANSFORM ANALYSIS OF THE SYSTEM OUTPUT

system must be a rational function, not every impulse response can be realized by a
lumped parameter system. For example, consider a delay system for which the
output, y(t), is the input, x(t), delayed by to seconds so that y(t) = x(t - to) for
any input, x(t). From the time-shift property of the Laplace transform (property 3
of Table 7.4-2), the Laplace transform of this equation is Y(s) = X(s)ePS'o,with the
RAC being that of x ( t ) . Thus, in accordance with our discussion in Section 8.1, the
system function of the delay system is H(s) = e-''o, with the RAC being the whole s
plane. This system function can be that of a distributed parameter system, but it
cannot be that of a lumped parameter system because the exponential function, e-''o,
is not a rational function. This means that a delay system cannot be realized by a
lumped parameter system. This is a fundamental theoretical restriction and not just a
practical limitation that can be overcome someday as technology develops. However,
all is not lost. By approximating the exponential function e-"o by a rational function,
we can obtain an approximate model of a delay system by a lumped parameter
system.
Assume B = 0 lies in the RAC of x(t). We then have Y ( j w )= H ( j w ) X ( j o )so
that

(8.3-5)

If, in the frequency range o > W , X ( j w ) is small so that H ( p ) X ( j o ) is small, we


then have as an approximation to y(t)

(8.3-6)

From this result, we observe that to approximate a delay of to seconds, we then only
need to make H ( j w ) % e-Joto in the frequency range 0 5 w < W . This observation
is the basis of obtaining a lumped parameter system for which y(t) % x(t - to) for
such input waveforms.
There are many techniques available for obtaining this approximation. Each
technique results in a different type of approximation, so that the specific technique
that one should use is determined by application of the approximate delay system.
One approximation technique that is used is the Pad6 approximation. A Pade
approximation technique is one in which the power series of a given function is
approximated by a rational function in which the degree of the numerator polyno-
mial is M and the degree of the denominator polynomial is N . The coefficients of the
polynomials are chosen so that the power series expansion of the rational function
agrees with the power series expansion of the function H(s), being approximated
8.4 PASSIVE SYSTEMS 245

+
through the term of degree M N . For example, the first two Pad&approximations
for which M = N of the system function H(s) = P ' O are'

tos - 2 2
H , (s) = - - rJ> -- (8.3-7a)
+
tos 2 ' to

and

tis' - 6tos + 12 3
rJ> (8.3-7b)
H2(s) = tis2 + 6tos + 12 '
--
t0

The output of each of these causal lumped parameter systems will be approximately
equal to the input delayed by to seconds if to is sufficiently small. For the same input,
x(t), the output of the lumped parameter system with system function H2(s) will be
better approximation to x(t - to) than the output of the system with the system
function H , (s). Note, however, that these systems are only approximate models of
a delay system. Depending on the application, there are other possible approxima-
tions of a delay system by a causal and stable lumped parameter system. It is
important to remember that approximate models of delay are required because it
is theoretically impossible for a lumped parameter system to model a delay system
exactly; for that, a distributed parameter system is required.

8.4 PASSIVE SYSTEMS

Causality and stability impose certain constraints on the unit-impulse response, h(t),
of an LTI system. In Section 5.1 1 we discussed a number of the constraints they
impose on the transfer function H ( j o ) , and in Section 8.2 we discussed a number of
the constraints they impose on the system function H(s). Additional constraints on
the system function arise from other physical considerations such as that discussed
in Section 8.3. In this section we shall illustrate how physical properties of a system
can be used to determine properties of the system function by determining some of
the properties of the impedance function of a passive system. Our considerations
also will serve to review and lend further physical significance to some of the results
we have developed. For our discussion, we shall use an electric network in which the
terminal quantities are voltage and current. Our discussion, however, also applies to
passive mechanical networks in which the terminal quantities are, for example, force
and velocity.

' A table of Pad6 approximations for various values of A4 and N is in Wall, H. S., Continued Fractions,
Chapter 20, Van Nostrand, New York, 1948.
246 LAPLACE TRANSFORM ANALYSIS OF THE SYSTEM OUTPUT

8.4A Two-Terminal Systems


A two-terminal electric network is shown in Fig. 8.4-1. The terminal voltage is u(t)
and the terminal current is i(t). This can be modeled as a system. If a voltage source
were connected to the terminals, the terminal voltage would be the system input and
the terminal current would be the system output. Similarly, if a current source were
connected to the terminals, the terminal current would be the system input and the
terminal voltage would be the system output. The system is LTI if the network is
composed of linear and time-invariant elements.
For the case in which the system is LTI and the input is the current, the output
voltage can be expressed as the convolution
00

u(t) = z(t) * i(t) = (8.4-1)

in which system unit-impulse response, z(t), is the system voltage response to an


input current unit impulse. That is, u(t) = z(t) volts when i(t) = d(t) amperes. Physi-
cally, a current unit impulse is achieved by applying a charge, q(t) = u(t) coulombs,
because current is the time rate-of-change of charge. An equivalent relation is

q(t) = f
-m
i(z) dz (8.4-2)

Physically, u(t) = z(t) volts if (at some instant that we call t = 0) a charge of one
coulomb were placed on the terminals. In accordance with our discussion in Section
3.5, we expect the system to be causal because it is a physical one. Thus we expect
z(t) = 0 for t < 0. Although one cannot prove that every physical system must be
causal, we will prove that every linear (time-invariant or time-varying) passive
system must be causal. This very interesting and important result strengthens our
unproven conviction discussed in Section 3.5 that all physical systems are causal.

Passivity We begin by defining a passive system in terms of its terminal


quantities, u(t) and i(t). First, the instantaneous power absorbed by the system is

p(t) = u(t)i(t) watts (8.4-3)

and the energy absorbed by the system, w(t), is

p ( z ) dz =
SI, u(z)i(z)dz joules (8.4-4)

Fig. 8.4-1 A two-terminal network.


8.4 PASSIVE SYSTEMS 247

We define a passive system as one for which w(t) 2: 0 for all t . We would have
w(to) < 0 at some instant t = to,if the system delivered more energy than it received
up to that instant. Thus a passive system is one that can never deliver more energy
than it has received.

Causality From our basic discussion of causality in Section 3.5, a linear time-
invariant or time-varying system is causal if, for any value of to,the system response
to any input, x(t), for which x(t) = 0 for t < to is y(t) for which y(t) = 0 for t < to.
We shall use this to prove that any linear TI or TV passive system is causal.
For this, we first let u l ( t ) be the output due to an input il(t).Note that the input
i ,( t ) can be any input of our choosing. Then because the system is passive, we have

(8.4-5)

We now choose another input i2(t) which is zero for t < to and let the system
response to it be u2(t). We now shall prove that if the system is passive then the
system must be causal by proving that passivity requires that u2(t) = 0 for t < to.For
+
this, we consider the system input i(t) = i , ( t ) ci2(t)in which c is some constant
whose value is at our disposal. Because the system is linear, we have by super-
+
position that the system response to the input i(t) is u(t) = u1( t ) cu2(t). From Eq.
(8.4-4), the energy absorbed by the system at the time t = to is

w(to)=
s", u(t)i(t)dt 2 0 (8.4-6a)

= 1 to

-co
[ul(t) + cu2(t)][il(t)+ ci2(t)]dt 2 0 (8.4-6b)

= / to

-cc
ul (t)il( t ) dt +c --oo
u2(t)il( t ) dt 2 0 (8.4-6~)

= wl(t0) +c 1to

-cc
u2(t)i,(t)dt 1 0 (8.4-6d)

The value of each of the two terms in Eq. (8.4-6b) involving i2(t)is zero because
i2(t)= 0 for t < to and we are only integrating over values o f t less than to.Because
the system is passive, wl(to) 2 0 and w(to)L 0 in Eq. (8.4-6d). Now c is at our
disposal. Thus if the value of the integral in Eq. (8.4-6d) involving u2(t) were not
zero, we could choose the value of c so that w(to) 0, in contradiction to the
passivity requirement. Thus we require the value of the integral involving u2(t) to
be zero. Now, if u2(t) # 0 for t < to,then, because i l ( t )is any input of our choosing,
we could choose it so that the value of the integral in Eq. (8.4-6d) is not zero for the
given u2(t). Thus we require u2(t) = 0 for t < to. The restriction t < to is because the
248 LAPLACE TRANSFORM ANALYSIS OF THE SYSTEM OUTPUT

integral is only over that range. We thus have proved that u2(t) = 0 for t < to so that
the system is causal. Consequently,

Any linear TI or TV passive system must be causal.

If the linear passive system is TI, then, in Eq. (8.4-l), the unit-impulse response,
z(t) = 0 for t < 0, in accordance with our discussion of causality in Section 3.5.

The lrnpedance Function The Laplace transform of Eq. (8.4-1) is

V ( s )= Z(s)Z(s) (8.4-7)

The function Z(s) is called the impedance function of the passive network. The RAC
must be to the right of all the poles of Z(s) because we have shown that the system
must be causal.
In network theory, z(t) is called the open-circuit natural response. The reason is
that v(t) = z(t) when i(t) = d ( t ) so that the terminals are an open circuit for t > 0
because i(t) = 0 for t > 0. As we discussed above, the unit impulse of current is
obtained by placing one coulomb of charge on the terminals at t = 0, which causes
the network to “ring” similar to the ringing of a bell. This “ringing” is the sum of a
number of oscillations whose frequencies are called the open-circuit natural frequen-
cies. The oscillations cannot grow with time because the system is passive. We
generally would expect the amplitude of all the oscillations to decrease with time.
However, they will not decrease if the network is lossless. It also is possible that the
amplitude of just some of the oscillations may not decrease with time if the network
contains strategically placed lossless components. Thus we conclude that, for a
passive system, the amplitude of z(t) cannot increase with time. This means that
the poles of Z(s), whose locations are the open-circuit natural frequencies, must lie
in the left half of the s plane or possibly on the w axis. We arrive at this conclusion
using our discussion in Section 7.4 from which we have that, because the RAC is to
the right of all the poles of Z(s), a pole in the right half of the s plane would result in
a term of z(t) whose amplitude increases with t . On the other hand, a pole on the w
axis results in a term of z(t) whose amplitude does not increase or decrease with t,
and a pole in the left half of the s plane results in a term of z(t) whose amplitude
decreases with time. Thus we conclude that all the impedance function poles of a
passive system must lie in the left half of the s plane or on the w axis. Because the
RAC is to the right of all poles, the RAC would contain the w axis only if there are
no poles on the w axis.
If the w axis lies in the RAC, we can let s =j w in Z(s) to obtain Z(jw), which is
called the input impedance. Thus the input impedance is the Fourier transform of the
unit-impulse response, z(t). For this case, the input

+
i(t) = A C O S ( W ~4~) (8.4-8a)
8.4 PASSIVE SYSTEMS 249

would, in accordance with our discussion in Section 4.2, produce the output

u(t) = AIZ(jo0)I cos[oot + 4 + O(o,)l (8.4-8b)

where

Thus the instantaneous power absorbed by the system

Because the average value of a sinusoid is zero, we note from this result that the
average value of p ( t ) is the constant term so that

Pa, = SA2 IZ(joo)l COS[~(W,)]


= iA2Re{Z(joo)] (8.4- 10)

Observe that w(t) would be negative if Pa, were negative. Because w(t) 2 0 for
passive systems, we conclude that, for passive systems,

Thus, the real part of the impedance of a passive system cannot be negative at any
frequency.

The Admittance Function The solution of Eq. (8.4-7) for Z(s) is

Z(s) = Y(s)V(s) (8.4- 12)

where

(8.4- 13)

The finction Y ( s )is called the admittance function of the two-terminal network. The
inverse transform of Eq. (8.4-12) is
03

*
i(t) = y(t) u(t) = (8.4-14)

In network theory, y ( t ) is called the short-circuit natural response. The reason is that
i(t) = y ( t ) when u(t) = 6(t) so that there is a short circuit across the terminals for
t > 0 since u(t) = 0 for t > 0. In this form of representation, the roles of i(t)and u(t)
250 LAPLACE TRANSFORM ANALYSIS OF THE SYSTEM OUTPUT

are interchanged from that of our previous discussion. Note that all of our previous
discussions and proofs are valid if the roles of the voltage and current are inter-
changed. Thus we can immediately conclude that y(t) = 0 for t < 0. Furthermore,
we conclude that all the poles of Y(s), whose locations are the short-circuit natural
frequencies, must be in the left half of the s plane or on the o axis and the RAC is to
the right of all the poles. If the o axis lies in the RAC, we can let s = j w in Y(s)to
obtain Y ( j o ) ,which is called the input admittance. Thus the input admittance is the
Fourier transform of the unit-impulse response, y(t). Also, the real part of the input
admittance cannot be negative at any frequency.

Lumped Parameter Passive Systems In accordance with our discussion in


Section 8.3, Z(s) and Y(s) are rational fimctions if the network is composed of
lumped parameters. Thus, for lumped parameter networks, the form of Z(s) must be

Z(s) =
a,sm + .. . + als + ao - a,(s - zl)(s - z), . . . (s - z,) (8.4-15)
b,s" + . . . + bls + 6, -
b,(s - p l ) ( ~-p2). . . ( S -pn)
In accordance with our discussion of passive networks, all the poles of a passive
network lie in the left half of the s plane or on the o axis, and the RAC is to the right
of all the poles. Note from Eq. (8.4-13) that the zeros of Z(s) are the poles of Y(s).
Because the poles of Y(s)also must lie in the left half of the s plane or on the o axis,
we have the result that the zeros of Z(s) also must lie in the left half of the s plane or
on the o axis. Rational functions in which all the poles and zeros lie in the left half
of the s plane are called minimum-phase functions. The reason for their name and
some properties of minimum-phase functions are discussed in the next chapter.

An Example To illustrate the general results we have obtained, consider the R-C
network shown in Fig. 8.4-2.
The input admittance function of this network is

1
1 (8.4-16)
Y(s) = -= c, s 2 +=+s);+:( , D > -
V(s> 1 ZP
s+-
ZP

where T~ = R I C l , z2 = R2C2, zP = RpCl, and Rp = R l R 2 / ( R , R2). To better +


observe the results we have obtained, let the values of the resistors and capacitors

-
Fig. 8.4-2 An R-C network.
8.4 PASSIVE SYSTEMS 251

be R, = R2 = 2 x lo5 i2 and C, = C, = lod5 F. For these values, z1 = z2 = 2 s


and zp = 1 s so that

(8.4- 17)
-
-
(s + 0.190983)(s + 1.309017) io-? (T> -1
s+l

The input impedance function of this circuit is

1 s+ 1
Z(s) = =
~

Y(s) (s + 0.190983)(s + 1.309017) lo5, (T > -0.190983 (8.4-18)

Note that the poles and zeros of the impedance function are in the left half of the s
plane. Because the o axis lies in the RAC, we can let s = j w to obtain the input
impedance

(8.4-19)

The real part of the input impedance is

$02+4 1
Re(Z(jo)} = (8.4-20)
(4 - w2)2+(;o)2
Note that Re(Z(jo)) 2 0 for all values of w in accordance with Eq. (8.4-1 1).
There are many other properties of the impedance and admittance functions of
passive systems which can be obtained. For example, it can be shown for lumped
parameter networks that, in Eq. (8.4-15), the degree of the numerator polynomial, m,
and the degree of the denominator polynomial, n, cannot differ by more than one,
which means that the only possible values of m - n are 1, 0, and -1. It also can be
shown that for s = (T +io, Z(o) is real and also Z(a) 2 0 for (T 2 0. Functions with
these two properties are called positive-real @r.) functions. Many properties of
impedance and admittance functions can be derived using the p.r. property. For
example, using the p.r. property, it can be shown that any poles or zeros on the o
axis must be simple. Because the p.r. property is a fundamental property of impe-
dance and admittance functions, the detailed study of p.r. functions is a basic topic of
linear network theory.2

' A n excellent discussion of this topic is contained in Guillemin, E. A,, Synthesis of Passive Networks,
John Wiley & Sons, 1957.
252 LAPLACE TRANSFORM ANALYSIS OF THE SYSTEM OUTPUT

8.5 THE DIFFERENTIAL EQUATION VIEW OF LUMPED


PARAMETER SYSTEMS
Sometimes, we desire to determine the output of a given physical lumped parameter
system to an input that is applied starting at some time, t = to. If the system is initially
at rest, either we can determine the output by convolution of the input with the system
unit-impulse response or we can determine the output by using Laplace transforms as
discussed in Section 8.1. There are situations, however, in which the system is not
initially at rest. For example, the capacitor voltages and the inductor currents in an
electric network may not be zero initially, or, in a mechanical system, the mass
velocities and the spring forces may not be zero initially. In such cases, it is important
to determine the effect of these initial network values on the output. This determina-
tion is best done using the differential equation relating the system output and input.
The unilateral Laplace transform is usually used for this analysis because the func-
tions in this application are considered to be zero for t < 0. However, because such
functions are a special case of fimctions that are not necessarily zero for t < 0, I have
developed a new procedure to obtain the unilateral Laplace transform as a special case
of the bilateral Laplace transform. This new derivation helps one obtain a better
appreciation of the relation between the unilateral and the bilateral Laplace transform.
We begin with the differential eqiation relating the input and output of a causal
lumped parameter system as given by Eq. (8.3-1):

If only the system function, H(s), of the lumped parameter system is known, this
differential equation can be obtained using the technique discussed in Section 8.3.
For our analysis, we choose the starting time of the input, to, to be zero so that
x(t) = 0 for t < 0. Thus y ( t ) = 0 for t < 0 because a physical system is assumed
causal. We desire to know the output, y ( t ) , for t > 0 given the input x(t) for t 2 0 and
also the initial values of the output, y ( t ) , and its derivatives. The initial values, y(O+),
y'(O+), y"(O+), . . . ,y("-')(O+), are called the initial conditions. Note that the initial
conditions are determined at t = O+ because the initial conditions are the initial rates
of change of the output. They are determined from the network initial values such as
the capacitor voltages and inductor currents in the case of an electric network.
To illustrate our discussion so far, consider the R-C circuit shown in Fig. 8.4-2.
We desire to determine the terminal voltage, u(t), to an input current, i(t), which is
zero for t < 0 for the case in which the initial voltage of capacitor CI is El and the
initial voltage of capacitor C2 is E2. From Eq. (8.4-16) we have
8.5 THE DIFFERENTIAL EQUATION VIEW OF LUMPED PARAMETER SYSTEMS 253

We then obtain by cross-multiplying,

1
V(S) = SZ(S) + -Z(s) (8.5-2b)
TP

In accordance with our discussion in Section 8.3, the corresponding differential


equation is

1
+ -i(t)
= i’(t) (8.5-3)
TP

For our problem, the input is i(t), the output is u(t), and the initial conditions are
u(O+) and u’(O+). To determine the initial conditions, we note from Fig. 8.4-2 that
u(t) is the voltage across the capacitor C2 so that

u(O+) E2 (8.5-4)

Also, because u(t) is the voltage across the capacitor C2, we have

1
4 t ) =-q2(t) (8.5-5a)
c
2

where q2(t)is the charge on the capacitor C2. Consequently,

1 1
u’(t) = -q$(t) = -i 2 ( t ) (8.5-5b)
c
2 c
2

where i2(t) is the current through the capacitor C2. Thus the initial rate of change of
the voltage, u(t), is

u’(O+) = -
1 i,(O+) (8.5- 5 ~ )
c
2

Now, because the sum of the currents at any node is zero, we have

i2(O+) = i(O+) - E2 -E1


~ (8.5-5d)
R2

Consequently,

(8.5-6)

The initial conditions for this example are u(O+) and u’(O+) given by Eqs. (8.5-4)
and (8.5-6), respectively. For this example, our problem is to determine the output,
254 LAPLACE TRANSFORM ANALYSIS OF THE SYSTEM OUTPUT

u(t), for t > 0, by solving the differential equation, Eq. (8.5-3), from knowledge of
the initial conditions and the input, i(t), for t 2 0.
It is important to observe that if i(t) contains an impulse at t = 0, then u(O+)
would not be E,. Rather, with the use of Eq. (8.5-5a) in accordance with our
discussion of Eq. (8.4-2), u(O+) would then be equal to E, plus the area of the
current impulse divided by C,. Because we are specifying u(O+), we consider i(t) not
to contain an impulse at t = 0, and the effect of any impulse applied to the circuit at
t = 0 is taken into account by the initial conditions at t = O+. Accordingly, as stated
above, the effect of any impulse applied at t = 0 is taken into account by the initial
conditions at t = 0+, and we consider our functions not to contain an impulse at
t = 0.
The Laplace transform simplified our previous analyses of LTI systems, and so
we expect that its use in problems with nonzero initial conditions also will simplify
the solution for the output, y(t). From our discussion and the example above, observe
that we are only interested in the solution of the differential equation, Eq. (8.5-l), for
t > 0 because the functions are assumed zero for t < 0 and their initial values are
given.
To obtain the Laplace transform of Eq. (8.5-l), we need to reexamine the Laplace
transform of the derivative of a finctionf(t), which is zero for t < 0. Thus we let
f(t) = 0 for t < 0. Also, because the effect of any input impulse at t = 0 is taken into
account by the initial conditions, we only consider time functions,f(t), which do not
contain an impulse at t = 0. Then, from our discussion of the derivative of a
discontinuous function in Section 3.3, the derivative off (t) is

(8.5-7a)

wherefi(t) is defined to bef’(t) minus any impulse that may be at t = 0.


In accordance with ow sub-plus notation, note thatf+(t) =f(t) becausef(t) does
not contain an impulse t = 0. The Laplace transform off(t) is F(s) with the RAC
r~ > go because f ( t ) = 0 for t < 0. Thus the Laplace transform of f+(t) is
F+(s) = F(s) also with the RAC r~ > go.
Now, the solution of Eq. (8.5-7a) forf;(t) is

With the use of the Laplace transform differentiation property, property 6 in Table
7.4-2, the Laplace transform offi(t) is

We now can determine the Laplace transform of the derivative off;(t). For this,
we note that f;(t) is a function which is zero for t < 0 and which does not contain an
8.5 THE DIFFERENTIAL EQUATION VIEW OF LUMPED PARAMETER SYSTEMS 255

impulse at t = 0. Thus, similar to the derivation of Eq. (8.5-8), we obtain


9 { f ; ' ( t ) } = s2F+(s) = s[sF(s)-f(O+)] -f'(O+)
(8.5-9)
= s2F(s) - sf(O+) -f'(O+), d > do

because, in accordance with our sub-plus notation, f$(O+) =f'(O+). Continuing in


this manner, we obtain
=Wf3>1= s3F+(s)= s[s2F(s>- sf@+> -f'(O+>l -f"@+>
(8.5-10)
= s3F(s) - s2f(O+) - sf'(O+) -f"(O+), fJ > 00

The general result that is developing is seen to be3


{f$%)) = s"F+(s>
= s"F(s) - s"-'f(O+) - s"-2f'(O+) - . . . -f'"-')(O+), 0 > 60
(8.5-1 1)
We now use these results to determine u(t) for t > 0 of our example by using
Laplace transforms to solve Eq. (8.5-3). For t > 0, u(t) = u+(t) and i(t) = i+(t) so
that we can express the differential equation as

(8.5- 12)
7172

Now using the concepts developed in Section 8.3 and the results we have just
developed, the Laplace transform of this equation is

C*[? V(s)- su(O+) - v'(O+)] + c, (:p- + - TlJ


[sV(s)- U ( O + ) ] + ~

7172
c 2
V(s)

1
= [sZ(s) - i(O+)] + -I(s) (8.5-13)
5
Also, the RACs are to the right of all the poles of V(s)and Z(s) because u(t) and i(t)
are zero for t < 0. Equation (8.5-13) is a simple algebraic equation, for which the
solution for V ( s ) is
1

'This development from the bilateral Laplace transform is called the O+ form of the unilateral Laplace
transform.
256 LAPLACE TRANSFORM ANALYSIS OF THE SYSTEM OUTPUT

To illustrate this general result for our example, let the circuit values be those used in
Section 8.4 for which C2 = lo-' F, z1 = z2 = 2 s, and zp = 1 s. For these values,
Eq. (8.5-14) is

The specific values for the initial conditions are obtained from Eqs. (8.5-5). We
observe from this equation that the system function for our example is

s+l s+ 1
+ ++
H(s) = lo5 - > -0.191 (8.5-16)
s2 $s lo5 (s + +
0.191)(s 1.309) '
0

in which the RAC is to the right of the poles because the system is causal. Thus the
system is stable because the o axis lies in the RAC.
To illustrate this result, we determine the voltage, u(t), for a current input which is
i(t) = Au(t). For this step input, we have I(s) = A / s with the RAC G > 0 so that

where i(O+) = A for this case. The partial fraction expansion of this expression is

1.171u(O+) + 0.894[~'(0+)- i(O+>]


s + 1.309
+
- 0.171~(0+) 0.894[~'(0+)- i(O+)] ,
s + 0.191 1 >

(8.5-18)

The RAC is as given because, as we have discussed above, it must be to the right of
all the poles. With the use of entry 2a of Table 7.4-1, the inverse transform of this
equation is

A
~ ( t=) -[4 - 3.789e-0.'91' - 0.21 le-1.309t]~(t)
10-5 (8.5-19)
+
- [1.171~(0+) 0.894[~'(0+)- i(0+)][e-0.191'- e-1.309'Iu@>

Observe that u(t) is the sum of two terms: The first term is due only to the input,
and the second term is due only to the initial conditions. If the initial conditions were
zero, then the second term would be zero and u(t) would be equal to the first term.
For this reason, the first term is called the zero-initial condition response (it also is
called the zero-state response). If the input were zero, then the first term would be
zero and u(t) would be equal to the second term. For this reason, the second term is
8.5 THE DIFFERENTIAL EQUATION VIEW OF LUMPED PARAMETER SYSTEMS 257

called the zero-input response. We thus note that v(t) is equal to its zero-initial
condition response plus its zero-input response. This decomposition of v(t) is
already seen in Eq. (8.5-14). This is due to the result seen in Eq. (8.5-13), that
the Laplace transform of v+(t) and its derivatives in Eq. (8.5-12) are composed of
terms that involve only V ( s ) plus terms that involve only the initial conditions.
Observe that this result is valid even for the general case of systems described by
Eq. (8.5-1). Thus we have the important result:

The response of an LTI system, y(t), always can be expressed as the sum of its zero-
initial condition response and its zero-input response.

Both terms in Eq. (8.5-14) have the same denominator as a result of dividing by
the coefficient polynomial of V ( s ) in Eq. (8.5-13) to obtain Eq. (8.5-14). Note that
this result is also true in the solution of the general case given by Eq. (8.5-1). As we
discussed in Section 8.3, the roots of this polynomial are the system function poles.
Thus we note that, in general, the poles of the zero-input term are the system function
poles. If the causal system is stable, then, in accordance with our results in Section
8.2, these poles must be in the left half of the s plane while the RAC must be to the
right of all the poles. Thus with the use of the results in Sections 7.3 and 7.4, the
inverse transform of the zero-input term for stable systems is seen to be composed of
the sum of terms that decay exponentially with time at an exponential rate that is
determined by the system pole locations. Consequently, the effect of any initial
conditions of a stable system decays with time.
Let us now consider the case for which the current input of our example in this section is i(t) = A cos(2t)u(t). For this input, from Table 7.4-1,

I(s) = A\,\frac{s}{s^2 + 4}, \qquad \sigma > 0    (8.5-20)

From Eq. (8.5-15) we then have

V(s) = 10^5 A\,\frac{(s+1)s}{\left(s^2 + \frac{3}{2}s + \frac{1}{4}\right)(s^2+4)} + \frac{\left(s + \frac{3}{2}\right)v(0^+) + v'(0^+) - i(0^+)}{s^2 + \frac{3}{2}s + \frac{1}{4}}, \qquad \sigma > 0    (8.5-21)

The first term is the Laplace transform of the zero-initial condition term, and the
second term is the Laplace transform of the zero-input term. We shall consider each
term separately.
The inverse Laplace transform of the zero-input term in Eq. (8.5-21) is, from Eq. (8.5-19),

\left\{\left[1.171v(0^+) + 0.894[v'(0^+) - i(0^+)]\right]e^{-0.191t} - \left[0.171v(0^+) + 0.894[v'(0^+) - i(0^+)]\right]e^{-1.309t}\right\}u(t)    (8.5-22)

This is the system response due to the initial conditions. Observe that this term is zero if the initial conditions are zero. Note that the zero-input term decays exponentially with time at an exponential rate determined by the system pole locations,
similar to our result for the previous case given by Eq. (8.5-19).
To obtain the inverse Laplace transform of the zero-initial condition term, we obtain the partial fraction expansion of the first term of Eq. (8.5-21), which is

10^5 A\,\frac{(s+1)s}{\left(s^2 + \frac{3}{2}s + \frac{1}{4}\right)(s^2+4)} = 10^5 A\,\frac{(s+1)s}{(s^2+4)(s+0.191)(s+1.309)} = 10^5 A\left[\frac{0.09756\,s + 0.91057}{s^2+4} - \frac{0.03424}{s+0.191} - \frac{0.06332}{s+1.309}\right]    (8.5-23)

with the RAC σ > 0. The first term of this partial fraction expansion is really the sum of two terms:

\frac{C_1}{s+j2} + \frac{C_2}{s-j2} = \frac{Ds - E}{s^2 + 4}, \qquad \sigma > 0    (8.5-24a)

in which D = C₁ + C₂ and E = j2(C₁ − C₂). However, it is not necessary to determine C₁ and C₂ as discussed in Section 7.4 to obtain D and E. The real constants D and E can be determined directly by multiplying both sides of the partial fraction expansion equation by (s² + 4) and letting s = j2 to obtain

j2D - E = \frac{(j2+1)(j2)}{(j2+0.191)(j2+1.309)} = 0.911 + j0.195    (8.5-24b)

By equating the real parts and the imaginary parts of Eq. (8.5-24b), we obtain two equations that are solved for the real constants D and E. The desirability of the summed form is that its inverse Laplace transform is easily obtained from Table 7.4-1. Note from entry 4 of that table that the Laplace transform of

x(t) = A\cos(\omega_0 t + \phi)u(t)    (8.5-25a)

is

X(s) = A\,\frac{s\cos\phi - \omega_0\sin\phi}{s^2 + \omega_0^2}, \qquad \sigma > 0    (8.5-25b)

Then, by equating the numerators in Eq. (8.5-24a) and Eq. (8.5-25b), we obtain

D = A\cos\phi \quad\text{and}\quad E = A\omega_0\sin\phi    (8.5-26a)


We can determine A and φ directly from Eq. (8.5-26a) as

\phi = \tan^{-1}\!\left(\frac{E}{\omega_0 D}\right) \quad\text{and}\quad A = \frac{D}{\cos\phi}    (8.5-26b)

With this result and using ω₀ = 2 for our example, the inverse Laplace transform of the zero-initial condition term given in Eq. (8.5-23) is

46{,}562\,A\cos(2t - 1.36)u(t) - 10^5 A\left[0.034\,e^{-0.191t} + 0.063\,e^{-1.309t}\right]u(t)    (8.5-27)

The zero-initial condition term is seen to consist of a sinusoidal term plus terms that decay exponentially with time at an exponential rate determined by the system pole locations. The exponentially decaying terms are called the transient response. They arise because the sinusoidal input started at t = 0. The sinusoidal term is called the steady-state response because the zero-initial condition term approaches 46,562A cos(2t − 1.36) as t increases. You should note that this steady-state response is equal to A|H(j2)| cos[2t + ∠H(j2)] because this is the system response to a sinusoid that exists for all time that we determined in Section 4.2.
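The numbers quoted above can be reproduced with a few lines of complex arithmetic. This sketch is not from the text; it follows the procedure of Eqs. (8.5-24) to (8.5-26) and then cross-checks the steady-state amplitude and phase against a direct evaluation of H(j2).

    import numpy as np

    w0 = 2.0
    # Eq. (8.5-24b): multiply the expansion by (s^2 + 4) and set s = j2
    val = ((2j + 1)*2j)/((2j + 0.191)*(2j + 1.309))    # equals j2D - E
    D, E = val.imag/2, -val.real                       # D ~ 0.0976, E ~ -0.911
    phi = np.arctan(E/(w0*D))                          # Eq. (8.5-26b)
    A = D/np.cos(phi)
    print(D, E, phi, 1e5*A)                            # phi ~ -1.36 rad, 1e5*A ~ 46,562
    H2 = 1e5*(2j + 1)/((2j)**2 + 1.5*2j + 0.25)        # H(j2) evaluated directly
    print(abs(H2), np.angle(H2))                       # ~46,562 and ~-1.36 rad, as claimed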
The result we just obtained for our example can be generalized. Consider the response, y(t), of the system described by Eq. (8.5-1) to a sinusoidal input. For this, let the input be x(t) = A cos(ω₀t + φ)u(t). The Laplace transform of this input is given by Eq. (8.5-25b). The poles of X(s) are on the ω axis. Now, the Laplace transform of the zero-initial condition term is observed to be H(s)X(s). Its partial fraction expansion is the sum of terms with the poles of H(s) and terms with the poles of X(s). For stable and causal systems, the inverse Laplace transform of the terms with the poles of H(s) decays exponentially because those poles are in the left half of the s plane. The inverse Laplace transform of the terms with the poles of X(s) is a sinusoid that does not decay with time because those poles are on the ω axis. Thus we note that, for a sinusoidal input, the zero-initial condition response of a stable system is composed of terms that decay with time and a sinusoidal term. The terms that decay with time are called the transient response, and the sinusoidal term is called the steady-state response. Thus, for a sinusoidal input the response of a stable system is the sum of a zero-input term that decays with time, a transient term that decays with time, and a steady-state term that is a sinusoid. Thus, as time increases, a causal and stable system "forgets" the beginning of its input, and its output approaches its steady-state response. Thus, for the sinusoidal input given above, the output for large values of t would be approximately

y(t) \approx A\,|H(j\omega_0)|\cos[\omega_0 t + \phi + \angle H(j\omega_0)]

Because the system “forgets” the beginnings of its input, the system response
approaches its response to a sinusoid that exists for all time which we determined
in Section 4.2. This is a frequency-domain interpretation of our time-domain discus-
sion in Section 3.5.

PROBLEMS

8-1 Use convolution to show that in Example 3 of Section 8.1, an LTI system
with the unit-impulse response h₁(t) and one with the unit-impulse response h₂(t) have the same output for the given input.

8-2 For each input, x(t), and unit-impulse response, h(t), given below, determine
the Laplace transform of the system responses, Y(s).Do not forget to specify
the RAC.
(a) x(t) = u(t) and h(t) = e^{−at}u(t)
(b) x(t) = e^{−bt}u(t) and h(t) = e^{−at}u(t)
(c) x(t) = u(t) and h(t) = e^{−at} cos(ω₀t)u(t)

8-3 The impulse response of an LTI system is h(t) = δ(t) − 2e^{−2t}u(t). The system input is x(t) = Eu(t).
(a) Determine the system response, y(t), by convolution. Then determine
Y(s). Do not forget to specify the RAC.
(b) Determine the system function, H(s). Do not forget to specify the RAC.
(c) Use the convolution property of the Laplace transform to determine the
Laplace transform, Y(s),of the system response. Do not forget to specify
the RAC. To determine the RAC, note from the convolution property that
its RAC must overlap those of x ( t ) and h(t). Does your result in part a
agree with that in this part?

8-4 For the input x(t) = e^{−at}u(t), a > 0, the response of an LTI system is y(t) = r(t/T). Determine the system unit-impulse response, h(t).

8-5 The unit-impulse response of a given LTI system is h(t) = e^{−at}u(t), a > 0.
(a) Determine the system response, y(t), to the input

x(t) = sgn(t) = −1 for t < 0;  0 for t = 0;  1 for t > 0

For this determination, use the method described in Example 1 of Section 8.1.
(b) Observe that 1 + x(t) = 2u(t) so that H(0) + y(t) = 2s(t), in which s(t) is the system unit-step response. Verify this expectation.

8-6 The unit-impulse response of an LTI system is h(t) = δ(t) − e^{−2t}u(t). For a given input, x(t), the output is observed to be y(t) = [e^{−2t} − e^{−3t}]u(t).
(a) Determine all possible inputs that would produce the observed output.
(b) Which of the possible inputs would be the actual one if it is known that the actual input waveform is bounded?

8-7 The unit-impulse response of an LTI system is h(t) = Kr(t/T). Use Laplace
transforms to determine the system response, y(t), to the input
x ( t ) = A r ( t / T ) . Compare your result with that of Example 3 of Section 2.5.

8-8 In an experiment, the output of a causal LTI system is y(t) = e^{−2t} sin(4t)u(t) for the input x(t) = e^{−2t}u(t). Determine the system unit-impulse response, h(t).
8-9 Determine whether a stable and causal inverse of each system described
below exists. If a stable and causal inverse does not exist, state why not; if it
exists, determine its unit-impulse response.
(a) h_a(t) = δ(t) − e^{−2t}u(t).
(b) h_b(t) = e^{−2t}u(t).
(c) h_c(t) = δ(t − 2).
8-10 For each system function given below, determine the RAC if it is known that the system is (1) stable, (2) causal.
8-11 The system function of a given LTI system is H(s) = (1 + e^{−s})/(s² + π²), −∞ < σ < ∞.
(a) Show that the given RAC is proper and determine the system unit-
impulse response.
(b) Is the given system causal? Stable?
(c) Can this system function be that of a lumped parameter system?

8-12 The system function of an LTI system is H(s) = (s + 1)³/(s + 2)⁴, σ > −2.
(a) Is the given system causal? Stable?
(b) Determine a differential equation relating the system input, x(t), and output, y(t).

8-13 The system function of a given LTI system is H(s) = (s + 2)e^{−2s}/(s² + 2s + 1), σ > −1.
(a) Determine a differential equation relating the system input, x(t), and output, y(t).
(b) Is the given system stable?
(c) Determine the system unit-impulse response, h(t). Is the system causal?

8-14 The unit-impulse response of an LTI system is h(t) = e^{−at} cos(ω₀t + φ)u(t). Determine a differential equation relating the system input, x(t), and output, y(t).
8-15 Determine the system function for each differential equation given below relating the input, x(t), and output, y(t), of a causal LTI system.
(a) y″(t) + 2y′(t) + 3y(t) = x′(t) + 2x(t)
(b) y″(t) + 2y′(t) + 3y(t) = x″(t − 2) + 2x(t − 1)
(c) y″(t − 1) + 2y′(t − 1) + 3y(t − 1) = x′(t − 3) + 2x(t − 4)
+
8-16 The system function of a given LTI system is H(s) = 1/(s + 1), σ > −1.
(a) Determine a differential equation relating the system input, x(t), and the system output, y(t).
(b) Note that H(s) = (s + 2)/[(s + 1)(s + 2)], σ > −1, is the same system function because the pole and zero at s = −2 cancel. However, the differential equation obtained is different. Obtain the differential equation relating the system input, x(t), and the system output, y(t).
(c) Show that we obtain the differential equation in part b by adding twice the differential equation in part a to its derivative. To obtain the differential equation of the lowest possible order, we should always reduce H(s) by canceling all common factors before determining a differential equation relating the system input, x(t), and the system output, y(t).

8-17 The system function of a given stable LTI system is H(s) = s(s − 3)/[(s + 3)(s + 1)].
(a) Is the given stable system causal? Your reasoning must be given.
(b) Determine h(t), the system unit-impulse response.
(c) Determine a differential equation relating the system input, x(t), and output, y(t), of the given system.
(d) Determine the system response, y(t), to the input x(t) = A + B sin(3t).

8-18 The unit-impulse response of an LTI system is h(t) = A r ( t / T ) , T > 0.


(a) Is the given system causal?
(b) Is the given system stable?
(c) Can the given system be realized by a lumped-parameter system?

8-19 A simple pendulum with linear damping can be modeled by the differential equation

φ″(t) + aφ′(t) + b sin φ(t) = f(t)

where φ(t) is the angle of the pendulum swing and f(t) is an externally applied torque. For small angles φ(t), we use the approximation sin φ(t) ≈ φ(t) to obtain the linear differential equation

φ″(t) + aφ′(t) + bφ(t) = f(t)

(a) What is the maximum value of the angle, φ, for which sin φ(t) ≈ φ(t) with an error less than 1%? Less than 5%?
Use the approximation to determine the system response, φ(t), to the input torque f(t) = A r(t/T):
(b) For the case in which a = 2 and b = 5.
(c) For the case in which a = 2 and b = 1.

8-20 The system function of the echoing system discussed in Section 1.6 is H(s) = 1/(1 − Ke^{−τs}) with the RAC to the right of the most right-hand pole. We first determine the location of the poles. They are at those values of s for which Ke^{−τs} = 1.
(a) To solve this equation, we have e^{−τs} = e^{−τ(σ+jω)} = e^{−τσ}e^{−jτω} = 1/K. Show that the solutions of this equation are

s = \frac{1}{\tau}\ln|K| + j\,\frac{2k\pi}{\tau} \qquad \text{if } K > 0

for k = 0, ±1, ±2, ..., so that there are infinitely many poles uniformly distributed parallel to the ω axis. Because the echoing system is causal, the RAC is to the right of all the poles in accordance with the RAC of functions that are zero for t < 0, discussed in Section 6.2.
(b) For what values of K is the echoing system stable?
(c) To determine the inverse transform, we expand H(s) in a power series. Show that the power series expansion is

H(s) = \sum_{n=0}^{\infty}\left(Ke^{-\tau s}\right)^n = \sum_{n=0}^{\infty}K^n e^{-n\tau s}, \qquad \sigma > \frac{1}{\tau}\ln|K|

(d) Thus show that h(t) = \sum_{n=0}^{\infty} K^n\,\delta(t - n\tau).

8-21 The unit-impulse response of an LTI system is h(t) = e^{−at}u(t), a > 0. The system input is x(t) = cos(ω₀t)u(t). As discussed in Section 8.5, since the system is stable, the system response approaches the steady-state response as
t increases.
(a) Determine the system response, y(t).
(b) Show that y(t) approaches the steady-state response as t increases.

8-22 The differential equation relating the input, x ( t ) , and output, y(t), of a given
circuit is

y"(t) + 2y'(t) +y(t) = 5x(t)


The initial values of the capacitor voltages and inductor currents are such that
y(O+) andy'(O+) are not zero.
264 LAPLACE TRANSFORM ANALYSIS OF THE SYSTEM OUTPUT

(a) Use Eq. (8.5-11) to determine the system response to the input
~ ( t=) A ~0~(2t)u(t).
(b) Show that the steady-state response is equal to the system sinusoidal
response.
(c) From your solution in part a, determine the initial conditions for which
there is no transient response so that y(t) is equal to the steady-state
response.

8-23 The differential equation relating the input, x(t), and output, y(t), of a given circuit is y″(t) + ay′(t) + by(t) = cx′(t) + dx(t), in which the constants are determined by the values of the circuit parameters. The initial values of the circuit voltages and currents result in the initial conditions being y′(0+) and y(0+).
(a) Determine the initial conditions required for there to be no transient.
(b) For a given set of initial conditions, determine the input, x_b(t), for which y(t) = 0 for t > 0.
(c) We would like the system response to be y_c(t) = h(t) * x_c(t), in which h(t) is the system unit-impulse response. That is, y_c(t) is the system zero-initial condition response. Show that the required system input is x(t) = x_b(t) + x_c(t).
S-PLANE VIEW OF GAIN AND PHASE SHIFT

9.1 GEOMETRIC VIEW OF GAIN AND PHASE SHIFT

Some uses of the Laplace transform in LTI system analysis were illustrated in the last
chapter. Another important use of the Laplace transform which we discuss in this
chapter is the analysis of the gain and phase shift of stable, lumped parameter LTI
systems directly in the s plane. The importance of this analysis technique is that it
results in a physical view of the relation of the system poles and zeros to the system
gain and phase shift which is one of the bases of filter design.
We showed in Section 8.3 that the system function of a stable lumped parameter LTI system has the form

H(s) = \frac{a_m \prod_{k=1}^{m}(s - z_k)}{b_n \prod_{k=1}^{n}(s - p_k)}    (9.1-1)

in which, from our discussion of stability in Section 8.2, m ≤ n and the RAC includes the ω axis. Because the ω axis lies in the RAC, we can let s = jω in Eq. (9.1-1) to obtain the transfer function

H(j\omega) = \frac{a_m \prod_{k=1}^{m}(j\omega - z_k)}{b_n \prod_{k=1}^{n}(j\omega - p_k)}    (9.1-2)

Thus the system gain is

|H(j\omega)| = \left|\frac{a_m}{b_n}\right| \frac{\prod_{k=1}^{m}|j\omega - z_k|}{\prod_{k=1}^{n}|j\omega - p_k|}    (9.1-3a)

and the system phase shift is

\angle H(j\omega) = \angle\frac{a_m}{b_n} + \sum_{k=1}^{m}\angle(j\omega - z_k) - \sum_{k=1}^{n}\angle(j\omega - p_k)    (9.1-3b)
The constant |a_m/b_n| in Eq. (9.1-3a) is called the gain constant because it is a constant that affects the size but not the shape of the graph of the gain versus frequency, and ∠(a_m/b_n) in Eq. (9.1-3b) is called the phase constant because it is just an additive constant that does not affect the shape of the graph of the phase shift versus frequency. As discussed in Section 8.3, the coefficients a_m and b_n are real numbers because they are the coefficients of the differential equation relating the system input and output. Thus the ratio a_m/b_n is a real number, so that the phase constant is zero if the ratio is a positive real number and π radians (180°) if it is a negative real number. Other than the gain and phase constants, the determination of the gain and the phase shift from Eqs. (9.1-3) requires the determination of the magnitude and angle of terms of the form (jω − s₀), in which s₀ = z_k or s₀ = p_k in Eq. (9.1-2). Although these quantities can be determined algebraically, we shall develop a geometrical s-plane interpretation of them which will enable us to obtain insight into how the system gain and phase shift are affected by the system function poles and zeros.
To develop the geometrical interpretation, consider the s-plane diagram shown in Fig. 9.1-1a. The vector ā is from the point s₀ to the origin, the vector b̄ is from the origin to the point ω₀ on the ω axis, and the vector c̄ is from the point s₀ to the point ω₀ on the ω axis. The relation between the three vectors, ā, b̄, and c̄, is

\bar{c} = \bar{a} + \bar{b}    (9.1-4a)

Now note that the algebraic expressions for the vectors are

\bar{a} = -s_0 \quad\text{and}\quad \bar{b} = j\omega_0    (9.1-4b)

Fig. 9.1-1
so that substituting these equations in Eq. (9.1-4a) we have

\bar{c} = j\omega_0 - s_0    (9.1-4c)

As shown in Fig. 9.1-1b, we observe that the length of the vector c̄ is equal to the magnitude of (jω₀ − s₀) and the angle of the vector c̄ from the positive σ axis is equal to the angle of (jω₀ − s₀). That is, the magnitude of (jω₀ − s₀) is equal to the distance from the point s₀ to the point ω₀ on the ω axis, and the angle of (jω₀ − s₀) is equal to the angle from the positive real axis to the line from the point s₀ to the point ω₀ on the ω axis.
In terms of this geometric interpretation, the system gain given by Eq. (9.1-3a) and the system phase shift given by Eq. (9.1-3b) can be stated as follows:

1. The system gain at the frequency ω₀ is equal to the gain constant times the product of the distances from the system function zeros to the point ω₀ on the ω axis divided by the product of the distances from the system function poles to the point ω₀ on the ω axis.
2. The system phase shift at the frequency ω₀ is equal to the phase constant plus the sum of the angles from the system function zeros to the point ω₀ on the ω axis minus the sum of the angles from the system function poles to the point ω₀ on the ω axis.

We shall illustrate this geometric view by analyzing some basic filter types.

9.1A The One-Pole Low-Pass Filter


We begin by utilizing this geometric view to determine the gain and phase shift of an LTI system with the system function

H_a(s) = \frac{a}{s+b}, \qquad \sigma > -b    (9.1-5)

for the case in which a > 0 and b > 0. To discuss gain and phase shift, the system must be stable. Thus we require b > 0 for the ω axis to lie in the RAC. Figure 9.1-2a is the s-plane diagram of this system function, which has no zeros and only one pole at s = −b. From Eq. (9.1-3a), the gain of this system at the frequency ω₀ is

|H_a(j\omega_0)| = \frac{|a|}{|j\omega_0 + b|}    (9.1-6a)

Now, from our discussion above, l_p = |j\omega_0 + b| = \sqrt{\omega_0^2 + b^2}, so that the gain at the frequency ω₀ also can be expressed as

|H_a(j\omega_0)| = \frac{|a|}{l_p}    (9.1-6b)
Fig. 9.1-2a
Fig. 9.1-2b Graph of the system gain.
Fig. 9.1-2c Graph of the system phase shift.

That is, the system gain at the frequency ω₀ is equal to the gain constant divided by the distance from the pole to the point ω₀ on the ω axis. This distance, l_p, increases as ω₀ increases, so that the system gain decreases as the frequency, ω₀, increases. In accordance with our discussion in Section 4.2, this means that the system amplification of a sinusoid decreases as the frequency of the sinusoid increases. Thus the system is seen to be a low-pass filter with the graph of system gain versus frequency as shown in Fig. 9.1-2b.
The dc gain is |a|/b because, from Fig. 9.1-2a, l_p = b for ω₀ = 0. As discussed in Section 4.3, the width of the passband region is usually specified as the distance to the frequency at which the gain is 1/√2 of its maximum. This is called the half-power bandwidth or, equivalently, the 3-dB bandwidth of the low-pass filter. Because the maximum gain is |a|/b, at which l_p = b, the gain is 1/√2 of its maximum value at the frequency for which l_p = b√2. With the use of the Pythagorean theorem, the hypotenuse of the right-angle triangle in Fig. 9.1-2a is b√2 when the length of the triangle vertical leg is b. Thus the 3-dB bandwidth of the low-pass filter is b because the length of the vertical leg is ω₀. Now note that for ω₀ >> b, the hypotenuse length, l_p, is approximately equal to the length of the triangle vertical leg, ω₀. Thus for ω₀ >> b, an approximate expression for the gain is |a|/ω₀.
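A short numerical check of the 3-dB bandwidth statement (not from the text), with arbitrary values a = 1 and b = 2:

    import numpy as np

    a, b = 1.0, 2.0
    gain = lambda w: abs(a)/np.sqrt(w**2 + b**2)    # |a|/l_p, Eq. (9.1-6b)
    print(gain(0.0))                   # dc gain |a|/b = 0.5
    print(gain(b)*np.sqrt(2))          # equals the dc gain, so the 3-dB bandwidth is b
    print(gain(50*b), abs(a)/(50*b))   # high-frequency approximation |a|/w0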
Now, from Eq. (9.1-3b), the system phase shift at the frequency ω₀ is

\angle H_a(j\omega_0) = \angle a - \angle(j\omega_0 + b)    (9.1-7a)

In accordance with our geometric view, it can be expressed as

\angle H_a(j\omega_0) = \angle a - \theta    (9.1-7b)

where θ is the angle from the pole at s = −b to the point ω₀ on the ω axis as shown in Fig. 9.1-2a. Because a is a real number, we have

\angle a = 0 \text{ if } a > 0; \qquad \angle a = \pi \text{ rad if } a < 0    (9.1-7c)

For our example, a > 0 so that ∠a = 0. Thus the phase shift for the given system is

\angle H_a(j\omega_0) = -\theta = -\tan^{-1}\!\left(\frac{\omega_0}{b}\right)    (9.1-7d)

Figure 9.1-2c is a graph of the system phase shift versus frequency for the given system. Observe that the system phase shift is −π/4 rad (or −45°) at ω₀ = b because both legs of the right-angle triangle in Fig. 9.1-2a are equal for ω₀ = b.
A good approximation for the phase shift at low frequencies can be obtained by using the approximation tan θ ≈ θ if θ is small. Thus, in Fig. 9.1-2a, for small values of θ we have

\theta \approx \tan\theta = \frac{\omega_0}{b}    (9.1-7e)

Thus the phase shift at low frequencies is

\angle H_a(j\omega_0) \approx -\frac{\omega_0}{b}    (9.1-7f)

This is the equation of a straight line with the slope equal to −1/b, as can be seen in Fig. 9.1-2c. We also can obtain an approximation for the system phase shift at high frequencies. For this, we note from Fig. 9.1-2a that θ = π/2 − φ. For θ ≈ π/2, φ is small so that φ ≈ tan φ = b/ω₀. Thus an approximation for the system phase shift for ω₀ >> b is

\angle H_a(j\omega_0) = -\theta = -\frac{\pi}{2} + \phi \approx -\frac{\pi}{2} + \frac{b}{\omega_0}    (9.1-7g)
Thus, for high frequencies, the distance in Fig. 9.1-2c from the phase-shift graph to −π/2 is approximately equal to b/ω₀.

9.1B The One-Pole High-Pass Filter

We now use the geometric view to determine the gain and the phase shift of an LTI system with the system function

H_b(s) = \frac{as}{s+b}, \qquad \sigma > -b    (9.1-8)

for the case in which a > 0 and b > 0. Figure 9.1-3a is the s-plane diagram of this system function, which has one zero at s = 0 and one pole at s = −b.

Fig. 9.1-3a
Fig. 9.1-3b Graph of the system gain.
Fig. 9.1-3c Graph of the system phase shift.
Using our geometric view, the gain at the frequency ω₀ is

|H_b(j\omega_0)| = |a|\,\frac{l_z}{l_p}    (9.1-8a)

which is equal to the gain constant, |a|, times l_z, which is the distance from the zero at s = 0 to ω₀ on the ω axis, divided by l_p, which is the distance from the pole at s = −b to ω₀ on the ω axis, so that

|H_b(j\omega_0)| = |a|\,\frac{\omega_0}{\sqrt{\omega_0^2 + b^2}}    (9.1-8b)

With the use of Fig. 9.1-3a, note that Eq. (9.1-8a) also can be expressed as

|H_b(j\omega_0)| = |a|\sin\theta    (9.1-8c)

As ω₀ increases, θ increases from 0 to π/2 rad, so that sin θ increases from zero to one. Thus a graph of the system gain versus frequency is as shown in Fig. 9.1-3b. The system is seen to be a high-pass filter whose gain is zero at ω₀ = 0; as ω₀ increases, the system gain increases monotonically and approaches |a| asymptotically. The frequency at which the gain is 1/√2 the maximum gain is the frequency for which θ = π/4 rad, at which l_z = b. Thus the frequency at which the gain is 1/√2 its maximum value (the half-power frequency) is ω₀ = b.
A low-frequency approximation of the gain can be obtained by noting that, for small values of θ, sin θ ≈ tan θ, so that

|H_b(j\omega_0)| \approx |a|\tan\theta = |a|\,\frac{\omega_0}{b}    (9.1-8d)

This is the equation of a straight line with the slope |a|/b, as seen in Fig. 9.1-3b.
Now, from Eq. (9.1-3b), the system phase shift at the frequency ω₀ is

\angle H_b(j\omega_0) = \angle a + \angle(j\omega_0) - \angle(j\omega_0 + b) = \angle a + \frac{\pi}{2} - \theta    (9.1-8e)

Geometrically, the phase shift is equal to the phase constant, ∠a, plus π/2, which is the angle from the zero to the point ω₀ on the ω axis, minus the angle from the pole at s = −b to the point ω₀ on the ω axis. Figure 9.1-3c is a graph of the phase shift versus frequency.
By comparing Eq. (9.1-7a) with Eq. (9.1-8e), we note that

\angle H_b(j\omega_0) = \frac{\pi}{2} + \angle H_a(j\omega_0)    (9.1-8f)

so that the phase-shift graph of the one-pole high-pass filter is that of the one-pole low-pass filter moved up by π/2 rad. This is due to the zero at s = 0.
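The gain and the π/2 phase relation of Eq. (9.1-8f) can be checked directly; the sketch below is not from the text and uses arbitrary values a = 1 and b = 2.

    import numpy as np

    a, b = 1.0, 2.0
    w = np.linspace(0.01, 40*b, 4000)
    Ha = a/(1j*w + b)              # one-pole low-pass, Eq. (9.1-5)
    Hb = a*1j*w/(1j*w + b)         # one-pole high-pass, Eq. (9.1-8)
    print(np.allclose(np.angle(Hb), np.pi/2 + np.angle(Ha)))    # Eq. (9.1-8f)
    print(abs(Hb[np.argmin(abs(w - b))])*np.sqrt(2), abs(a))    # half-power frequency is at w0 = b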

We shall use our geometrical understanding of gain and phase shift to examine
some different basic filter types in the following sections.

9.2 THE POLE-ZERO PAIR

The pole-zero pattern of the high-pass filter discussed in the last section is a special
case of a pole-zero pair. The analysis of the gain and the phase shift associated with a
pole-zero pair in which both the pole and the zero are on the σ axis is important
because a number of different filter types can be obtained with different configura-
tions of the pole-zero pair. Also, the analysis of the gain and the phase shift asso-
ciated with a pole-zero pair also well illustrates the geometric view and some of the
useful analysis techniques that can be used to obtain numerical results. Conse-
quently, we shall analyze this case in detail and then consider some extensions of
our results to more general pole-zero patterns.

9.2A The High-Pass Case


We begin by determining the gain and phase shift of an LTI system with the system function

H_a(s) = \frac{s+\beta}{s+\alpha}, \qquad \sigma > -\alpha    (9.2-1)

where α > β > 0. Figure 9.2-1a is the s-plane diagram of this system function, which has one pole at s = −α and one zero at s = −β.

Fig. 9.2-1a
Fig. 9.2-1b Graph of the system gain.


In accordance with our discussion in Section 9.1, the gain of the given system at the frequency ω₀ is

|H_a(j\omega_0)| = \frac{l_z}{l_p}    (9.2-2)

Note from the figure that l_z, the distance from the zero to the point ω₀ on the ω axis, is less than l_p, the distance from the pole. The two distances become equal asymptotically as the frequency ω₀ increases. From this simple observation, we can immediately conclude that the gain rises monotonically from a value of β/α at ω = 0 and is asymptotic to a value of one as ω → ∞, so that the gain curve must have the shape shown in Fig. 9.2-1b.
To determine an expression for the gain curve, we have from Fig. 9.2-1a, with the use of the Pythagorean theorem, that

l_z^2 = \omega_0^2 + \beta^2 \quad\text{and}\quad l_p^2 = \omega_0^2 + \alpha^2    (9.2-3a)

so that the gain can be expressed as

|H_a(j\omega_0)| = \sqrt{\frac{\omega_0^2 + \beta^2}{\omega_0^2 + \alpha^2}}    (9.2-3b)

The half-power frequency is the frequency at which the gain is 1/√2 of its maximum value. We note from our graph that because the maximum gain is one, there is a half-power frequency only if β/α < 1/√2 or, equivalently, α > √2 β. We assume this is the case for our present analysis. Because the gain is 1/√2 at the half-power frequency, we want to determine the frequency, ω₀, at which

\sqrt{\frac{\omega_0^2 + \beta^2}{\omega_0^2 + \alpha^2}} = \frac{1}{\sqrt{2}}    (9.2-4)

The solution of this equation for ω₀ is

\omega_0 = \sqrt{\alpha^2 - 2\beta^2} = \alpha\sqrt{1 - 2(\beta/\alpha)^2}    (9.2-5)

Table 9.2-1 is a short table of the half-power frequencies for some values of the ratio β/α.
Another frequency of interest in our analysis is the frequency at which the gain is equal to √2 (β/α) (the double-power frequency). The double-power frequency can be determined in the same manner we used to determine the half-power frequency. For this, we have from Eq. (9.2-3b) that, at the double-power frequency,

\sqrt{\frac{\omega_0^2 + \beta^2}{\omega_0^2 + \alpha^2}} = \sqrt{2}\,\frac{\beta}{\alpha}    (9.2-6)

The solution of this equation for ω₀ is

\omega_0 = \frac{\beta}{\sqrt{1 - 2(\beta/\alpha)^2}}    (9.2-7)

The double-power frequencies for some values of the ratio β/α also are listed in Table 9.2-1.

TABLE 9.2-1 Half- and Double-Power Frequencies for Some Values of the Ratio β/α

β/α     Half-Power Frequency    Double-Power Frequency
0       1.00α                   0.00
0.1     0.99α                   0.10α
0.2     0.96α                   0.21α
0.3     0.91α                   0.33α
0.4     0.82α                   0.49α
0.45    0.77α                   0.58α
0.5     0.71α                   0.71α
0.55    0.63α                   0.88α
0.6     0.53α                   1.13α
0.65    0.39α                   1.65α
0.7     0.14α                   4.95α
1/√2    0.00                    ∞
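A few entries of Table 9.2-1 can be regenerated from Eqs. (9.2-5) and (9.2-7); this numerical sketch is not from the text and takes α = 1.

    import numpy as np

    alpha = 1.0
    gain = lambda w, beta: np.sqrt((w**2 + beta**2)/(w**2 + alpha**2))   # Eq. (9.2-3b)
    for r in [0.1, 0.3, 0.5, 0.65]:            # ratios beta/alpha from Table 9.2-1
        beta = r*alpha
        w_half = alpha*np.sqrt(1 - 2*r**2)     # Eq. (9.2-5)
        w_dbl = beta/np.sqrt(1 - 2*r**2)       # Eq. (9.2-7)
        print(r, round(w_half, 2), round(w_dbl, 2),
              round(gain(w_half, beta)*np.sqrt(2), 3),      # ~1: gain is 1/sqrt(2) of the maximum
              round(gain(w_dbl, beta)/(np.sqrt(2)*r), 3))   # ~1: gain is sqrt(2)(beta/alpha)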
We now determine the system phase shift. From our discussion in Section 9.1, we have that the phase shift, φ, is

\phi = \theta_z - \theta_p    (9.2-8a)

Note from Fig. 9.2-1a that φ = 0 at ω = 0 and that φ → 0 asymptotically as ω → ∞. Furthermore, because θ_z > θ_p, we have that φ > 0 at all frequencies. From these observations, we conclude that the phase-shift graph must have the shape shown in Fig. 9.2-2.

Fig. 9.2-2 Graph of the system phase shift.

To determine the graph quantitatively, we have from Fig. 9.2-1a that

\tan\theta_z = \frac{\omega_0}{\beta} \quad\text{and}\quad \tan\theta_p = \frac{\omega_0}{\alpha}    (9.2-8b)

We substitute these relations into Eq. (9.2-8a) to obtain

\phi = \tan^{-1}\!\left(\frac{\omega_0}{\beta}\right) - \tan^{-1}\!\left(\frac{\omega_0}{\alpha}\right)    (9.2-8c)

Because tan θ ≈ θ for small values of θ, we have from Eq. (9.2-8c) that, for small values of ω₀,

\phi \approx \frac{\omega_0}{\beta} - \frac{\omega_0}{\alpha} = \frac{\alpha - \beta}{\alpha\beta}\,\omega_0    (9.2-8d)

so that near zero frequency the system phase-shift curve is a straight line with the slope (α − β)/αβ. The maximum of the system phase-shift curve is at the frequency for which the derivative of Eq. (9.2-8c) is zero. Making use of the derivative formula

\frac{d}{d\theta}\tan^{-1}x = \frac{1}{1+x^2}\,\frac{dx}{d\theta}    (9.2-8e)

we obtain that the system phase-shift curve is a maximum at the frequency ω₀ = √(αβ). From Eq. (9.2-8c) the system phase shift at this frequency is

\phi_{\max} = \tan^{-1}\!\left(\sqrt{\frac{\alpha}{\beta}}\right) - \tan^{-1}\!\left(\sqrt{\frac{\beta}{\alpha}}\right) = \frac{\pi}{2} - 2\tan^{-1}\!\left(\sqrt{\frac{\beta}{\alpha}}\right)    (9.2-8f)

The maximum phase shift and the corresponding frequencies for some values of the ratio β/α are as listed in Table 9.2-2.
TABLE 9.2-2 The Maximum Phase Shift and the Corresponding Frequencies for Some Values of the Ratio β/α

β/α     Maximum Phase Shift (degrees)   Frequency
0       90                              0
0.1     54.9                            0.32α
0.2     41.81                           0.45α
0.3     32.58                           0.55α
0.4     25.38                           0.63α
0.45    22.29                           0.67α
0.5     19.47                           0.71α
0.55    16.88                           0.74α
0.6     14.48                           0.77α
0.65    12.25                           0.81α
0.7     10.16                           0.84α
1/√2    9.88                            0.84α
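Table 9.2-2 can likewise be regenerated from Eq. (9.2-8f); the sketch below (not from the text) also confirms the maximum numerically from Eq. (9.2-8c), with α = 1.

    import numpy as np

    alpha = 1.0
    w = np.linspace(1e-3, 20*alpha, 200000)
    for r in [0.1, 0.3, 0.5, 0.7]:
        beta = r*alpha
        phi = np.arctan(w/beta) - np.arctan(w/alpha)      # Eq. (9.2-8c)
        phi_max = np.pi/2 - 2*np.arctan(np.sqrt(r))       # Eq. (9.2-8f)
        print(r, round(np.degrees(phi_max), 2), round(np.degrees(phi.max()), 2),
              round(np.sqrt(alpha*beta), 2), round(w[np.argmax(phi)], 2))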

9.2B The Low-Pass Case


We now consider the case in which the positions of the pole and zero are interchanged. For this, we consider an LTI system with the system function

H_b(s) = \frac{s+\alpha}{s+\beta}, \qquad \sigma > -\beta    (9.2-9)

where α > β > 0. Figure 9.2-3a is the s-plane diagram of this system function, which has one zero at σ = −α and one pole at σ = −β. Observe that H_b(s) = 1/H_a(s). Consequently, the system gain is

|H_b(j\omega_0)| = \frac{1}{|H_a(j\omega_0)|}    (9.2-10a)

Because the gain of system B is the reciprocal of the gain of system A, the gain curve must be as shown in Fig. 9.2-3b, which is the reciprocal of the gain curve of Fig. 9.2-1b.
Furthermore, the system phase shift is

\angle H_b(j\omega_0) = -\angle H_a(j\omega_0)    (9.2-10b)

Because the phase shift of system B is the negative of the phase shift of system A, the phase-shift curve must be as shown in Fig. 9.2-3c, which is the negative of the phase-shift curve of Fig. 9.2-2.
Fig. 9.2-3b Graph of the system gain.
Fig. 9.2-3c Graph of the system phase shift.

9.2C Right-Half-Plane Zero Case


We now consider the case in which there is a zero in the right half of the s plane. For
this, we consider an LTI system with the system function

(9.2-1 1)

where a > 0 and b > 0. Figure 9.2-4a is the s-plane diagram of this system hnction
which has one zero at (T = b and one pole at (T = -a.
To determine the system gain curve, we first consider the case in which a > b.
Note for this case that, for b = fi and a = a, the lengths 1, and lp in Fig. 9.2-4a are
the same as that in Fig. 9.2-la. Thus the system gain, which is IHc(joo)l = I Z / I p , is
the same as that for system A , which is a high-pass filter.
278 S-PLANE VIEW OF GAIN AND PHASE SHlFl

I" "0

A.

-a b" 0

rea

8 -

Fig. 9.2-4b Graph of the system phase shift for b = t u .

Now consider the case in which b > a. Compare the s-plane diagram for this
case, Fig. 9.2-4a, with the s-plane diagram for system B, Fig. 9.2-3a. Note that for
a = j?and b = a, the lengths 1, and Ip in the two figures are the same, so that the gain
curve of system C for this case is the same as that of system B, which is a low-pass
filter.
Finally, consider the case in which a = b. For this case, we note from Fig. 9.2-4a
that I, = lp for all values of frequency. Thus, for this case the gain is

2, = 1
IH,(jco,)I= - (9.2-12)
lp
A filter with a gain curve that is a constant is called an all-pass$lter. All-pass filters
are important because, as we shall discuss later, they can be used to obtain a desired
phase-shift curve without affecting the gain curve.
Now, the phase shift of system C at the frequency oois
LH,(joo) = 0, - ep (9.2-13a)

To simplify tracking the difference of two angles as the frequency, coo, is varied, we
+
note from Fig. 9.2-4a that 0, = OP $. With this relation, the system phase shift can
be expressed in terms of only one angle as

(9.2- 13b)

The shape of the system phase-shift curve is seen to be as shown in Fig. 9.2-4b.
As the frequency, ω₀, increases, the system phase shift decreases monotonically from 180° to zero. The phase shift at the frequency ω₀ = √(ab) is

\psi = \pi - \tan^{-1}\!\left(\sqrt{\frac{a}{b}}\right) - \tan^{-1}\!\left(\sqrt{\frac{b}{a}}\right) = \frac{\pi}{2}\ \text{rad}    (9.2-13c)

9.3 MINIMUM-PHASE SYSTEM FUNCTIONS

For our discussion in this section, α and β are positive real numbers. We saw in the last section that the gain curve of a system for which

H_a(s) = \frac{s-\beta}{s+\alpha}, \qquad \sigma > -\alpha    (9.3-1a)

is the same as one for which

H_b(s) = \frac{s+\beta}{s+\alpha}, \qquad \sigma > -\alpha    (9.3-1b)

Figures 9.3-1a and 9.3-1b are the s-plane diagrams of H_a(s) and H_b(s), respectively. The gain curve of both systems is the same because, for both systems, l_z and l_p are the same for any value of ω₀. The two systems' phase-shift curves, however, are different.

Fig. 9.3-1a
Fig. 9.3-1b

From Figs. 9.3-1a and 9.3-1b, the phase shifts of the two systems, A and B, are

\angle H_a(j\omega_0) = \phi_z - \theta_p \quad\text{and}\quad \angle H_b(j\omega_0) = \theta_z - \theta_p    (9.3-2)

Thus the difference between the phase shifts of the two systems is

\angle H_a(j\omega_0) - \angle H_b(j\omega_0) = \phi_z - \theta_z    (9.3-3a)

Note from the figures that φ_z + θ_z = π. With this relation, we can express Eq. (9.3-3a) as

\angle H_a(j\omega_0) - \angle H_b(j\omega_0) = \pi - 2\theta_z    (9.3-3b)

From Fig. 9.3-1b, we see that θ_z < π/2 for any frequency, ω₀. Consequently, we observe from Eq. (9.3-3b) that, for any frequency, ω₀,

\angle H_a(j\omega_0) - \angle H_b(j\omega_0) > 0    (9.3-4)

Thus the phase shift of system B is always less than that of system A. Again, note that system B is obtained from system A simply by moving its zero in the right half of the s plane at s = β to its mirror image in the left half of the s plane at s = −β.
Now consider a causal system with two poles and two zeros whose s-plane diagram is shown in Fig. 9.3-2a. From the diagram, the system function is

H_c(s) = A\,\frac{(s+\beta_1)(s+\beta_2)}{(s+\alpha_1)(s+\alpha_2)}, \qquad \sigma > -\alpha_2    (9.3-5)

The magnitude of the real constant A is the gain constant, and the angle of the real constant A (which is 0 if A > 0 or π if A < 0) is the phase constant. The RAC is σ > −α₂ because the system is causal. The system is seen to be stable because the ω axis is contained within the RAC. Observe that the system function can be determined within the constant, A, from its s-plane diagram. The gain and phase shift of this system can be determined from our previous work by noting that it can be expressed as

H_c(s) = A\,H_1(s)H_2(s)    (9.3-6a)

where

H_1(s) = \frac{s+\beta_1}{s+\alpha_1} \quad\text{and}\quad H_2(s) = \frac{s+\beta_2}{s+\alpha_2}    (9.3-6b)

Thus the gain of the given system is equal to |A| times the product of the gains of the two systems given in Eq. (9.3-6b), and the phase shift of the given system is equal to ∠A plus the sum of the phase shifts of the two defined systems. Note, too, that the given
Fig. 9.3-2a
Fig. 9.3-2b
Fig. 9.3-2c
Fig. 9.3-2d

system can be synthesized as the tandem connection of an ideal amplifier with a gain A, the system H₁(s), and the system H₂(s).
Now, from our previous discussion of a system with only one pole and one zero,
we observe that the gain of the given system is unchanged if a zero is moved to its
mirror image in the right half of the s plane. Consequently, the systems with the s-
plane diagrams shown in Figs. 9.3-2b, 9.3-2c, and 9.3-2d all have identical gain
curves. Their phase-shift curves, however, are different. In our analysis of systems A and B above, we observed that moving a zero from the left half of the s plane at s = −β to its mirror image at s = β increases the phase shift of the resulting system at all frequencies. Consequently, the system with the largest phase shift at all frequencies is that with all its zeros in the right half of the s plane as shown in Fig. 9.3-2d. Also, the system with the smallest phase shift at all frequencies is that with all its zeros in the left half of the s plane as shown in Fig. 9.3-2a. From this, we note that of all systems with a given gain curve, the one with the smallest phase shift at all frequencies is the one with all its zeros in the left half of the s plane. Consequently, a causal and stable system is called a minimum-phase system if its system function is a rational function with all zeros in the left half of the s plane. Also, its system function is called a minimum-phase function. That is, a minimum-phase function is a proper rational function in which all poles and zeros are in the left half of the s plane.
In our discussion of passive systems in Section 8.4, we saw that all the poles and
zeros of the impedance and admittance functions of passive systems lie in the left
half of the s plane so that the impedance or admittance function of any passive

lumped parameter system is a minimum-phase function. Furthermore, if H(s) is the


system function of a given system, then 1/H(s) is the system function of the inverse
of the given system. If H ( s ) contains a zero in the right half of the s plane, then
l/H(s) contains a pole in the right half of the s plane so that, as we discussed in
Section 8.2, the system inverse could not be both causal and stable. For the system
inverse to be both causal and stable, all of the poles of its system function, l/H(s),
must be in the left half of the s plane, which requires that the zeros of H(s) must be
in the left half of the s plane. Thus we conclude that a causal and stable inverse of a
causal and stable lumped parameter system exists if and only if its system function,
H(s), is a minimum-phase function. These are some of the reasons for the importance of minimum-phase functions in system theory.
One further observation of importance is that any rational system function can be expressed as the product of a minimum-phase function and an all-pass function discussed in Section 9.2C. For example, the system function with the s-plane diagram shown in Fig. 9.3-2b can be expressed as

H_b(s) = H_c(s)H_o(s)    (9.3-7a)

where H_c(s) is the minimum-phase system function given by Eq. (9.3-5) and H_o(s) is the all-pass system function

H_o(s) = \frac{s-\beta_2}{s+\beta_2}, \qquad \sigma > -\beta_2    (9.3-7b)

We note from this result that an LTI system with a nonminimum-phase system function can be expressed as the tandem connection of a minimum-phase system and an all-pass system. From Eq. (9.2-12), the gain of the all-pass system is one at all frequencies. Thus, from Eq. (9.3-7a), the system gain is

|H_b(j\omega)| = |H_c(j\omega)||H_o(j\omega)| = |H_c(j\omega)|    (9.3-8a)

because |H_o(jω)| = 1. We thus observe that the all-pass system does not affect the gain of the tandem connection. However, the all-pass system does affect the system phase shift of the tandem connection. The phase shift of the tandem connection is equal to the sum of the phase shift of the minimum-phase system and the phase shift of the all-pass system,

\angle H_b(j\omega) = \angle H_c(j\omega) + \angle H_o(j\omega)    (9.3-8b)

in which the phase shift of the all-pass system above, ∠H_o(jω), is, from Eq. (9.2-13b),

\angle H_o(j\omega) = \pi - 2\tan^{-1}\!\left(\frac{\omega}{\beta_2}\right)    (9.3-8c)
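The factorization of Eqs. (9.3-7) and (9.3-8) can be illustrated numerically. The sketch below is not from the text; it uses a hypothetical nonminimum-phase system with a zero at s = +2 and poles at s = −1 and s = −3.

    import numpy as np

    beta2 = 2.0
    s = 1j*np.linspace(0.0, 20.0, 2001)
    Hb = (s - beta2)/((s + 1)*(s + 3))     # nonminimum-phase system
    Hc = (s + beta2)/((s + 1)*(s + 3))     # minimum-phase factor (zero reflected to -beta2)
    Ho = (s - beta2)/(s + beta2)           # all-pass factor, Eq. (9.3-7b)
    print(np.allclose(Hb, Hc*Ho))          # H_b = H_c H_o, Eq. (9.3-7a)
    print(np.allclose(abs(Hb), abs(Hc)))   # identical gain curves, Eq. (9.3-8a)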

This observation is of major importance for system synthesis. In the design of an


LTI system with some desired gain and phase-shift curve over a given frequency
band, it is difficult to determine the pole and zero locations required to simulta-
neously obtain both the desired gain curve and the desired phase-shift curve. An
easier procedure is to first design a minimum-phase system with the desired gain
curve over the given frequency band without any concern about the resulting phase
shift. Then an all-pass system is designed with a phase shift over the given frequency
band equal to the difference between the desired phase shift and the phase shift of
the minimum-phase system that was designed to have the desired gain curve over the
given frequency band. The system function of the desired system is then the product
of the two system functions which is constructed. Note that the minimum-phase
system and the all-pass system are not constructed separately and then connected in
tandem because, if there is a pole-zero cancellation as in Eq. (9.3-7a), the cancelled
pole and zero are not present in the system function of the desired system, and so its
realization is simpler.
Because the gain of an all-pass system is one at all frequencies, we may wonder about the effect of an all-pass system on its input, x(t). The phase, but not the amplitude, of the sinusoidal components of x(t) is affected by the all-pass system. What is the effect of the phase shift on the input waveform, x(t)? We know from our results in Sections 1.4 and 5.5 that if the all-pass phase shift were

\phi_o(\omega) = -\omega t_0    (9.3-9)

then its output would be y(t) = x(t − t₀), so that the response of the all-pass system would be its input delayed by t₀ seconds without distortion. In Eq. (9.3-9), the graph of φ_o(ω) versus ω is a straight line passing through the origin with a slope equal to

\frac{d\phi_o(\omega)}{d\omega} = -t_0    (9.3-10)

so that the slope of the phase-shift curve is equal to the time shift. Note that a negative slope is a delay and a positive slope is an advance of x(t). However, it was shown near the end of Section 8.3 that an ideal delay system cannot be realized by a lumped parameter system, so that the phase shift of an all-pass system with a finite number of poles and zeros can, at best, only approximate Eq. (9.3-9) over some frequency range.
Observe from our analysis in Section 9.2C that the phase shift of a lumped parameter all-pass system decreases with frequency, so that the derivative of the all-pass system phase shift with respect to ω is negative. Thus we would expect the all-pass output, y(t), to be a delay of some distorted version of its input, x(t). Therefore we expect the energy of x(t) to be delayed. To study this, we define the partial energy of the system output, y(t), to be

E_y(T) = \int_{-\infty}^{T} y^2(t)\,dt    (9.3-11)

The total energy of a waveform was discussed in Section 5.9, where the total energy of a waveform, y(t), was defined to be

E_y = \int_{-\infty}^{\infty} y^2(t)\,dt    (9.3-12)

Thus the partial energy, E_y(T), is seen to be the energy of y(t) up to the time t = T. Because the gain of any all-pass system is one, the energy density spectrum of its output, |Y(jω)|², is equal to that of its input, |X(jω)|², so that E_x = E_y. That is, the total energy of x(t), E_x, is equal to E_y, the total energy of y(t).
The effect of the position of a zero of H(s) on the output of an LTI system is derived and analyzed in Appendix B. One result shown there is that, for any value of T, the partial energy of the output of an all-pass system, E_y(T), is less than the partial energy of its input, E_x(T), so that we indeed can say that the output of an all-pass system is a distorted version of a delay of its input.
To illustrate this result, let the input of an all-pass system be

x(t) = e^{-at}u(t), \qquad a > 0    (9.3-13a)

for which

X(s) = \frac{1}{s+a}, \qquad \sigma > -a    (9.3-13b)

Let the system function of the all-pass system be

H(s) = \frac{s-b}{s+b}, \qquad \sigma > -b, \quad b > 0    (9.3-14)

The Laplace transform of the system output thus is

Y(s) = H(s)X(s) = \frac{s-b}{(s+a)(s+b)}, \qquad \sigma > \max(-a, -b)    (9.3-15a)

Using the partial fraction expansion discussed in Section 7.3, we obtain for a ≠ b

y(t) = \frac{1}{a-b}\left[(a+b)e^{-at} - 2be^{-bt}\right]u(t)    (9.3-15b)

Figure 9.3-3 is a graph of x(t) and y(t) for a = 2 and b = 3.

Fig. 9.3-3 Graph of x(t) and y(t) for a = 2 and b = 3.

We now can determine the difference between the partial energy of x(t) and y(t). The partial energy of x(t) is

E_x(T) = \int_{0}^{T} e^{-2at}\,dt = \frac{1}{2a}\left(1 - e^{-2aT}\right), \qquad T \geq 0    (9.3-16)

and the partial energy of the output is

E_y(T) = \frac{1}{(a-b)^2}\left[\frac{(a+b)^2}{2a}\left(1-e^{-2aT}\right) - 4b\left(1-e^{-(a+b)T}\right) + 2b\left(1-e^{-2bT}\right)\right]    (9.3-17)

Thus the difference of the partial energies is

E_x(T) - E_y(T) = \frac{2b}{(a-b)^2}\left(e^{-aT} - e^{-bT}\right)^2    (9.3-18)
Thus we observe for our example that E_x(T) > E_y(T) because the difference in Eq. (9.3-18) is always positive, so that the energy of x(t) has indeed been delayed. Note that the difference in the partial energies goes to zero as T → ∞. The reason is that as T → ∞, the partial energy becomes equal to the total energy and, as discussed following Eq. (9.3-12), the total energies of x(t) and y(t) are equal.
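The partial-energy statement of Eq. (9.3-18) is easy to confirm by numerical integration; this sketch is not from the text and uses the same values a = 2 and b = 3 as Fig. 9.3-3.

    import numpy as np

    a, b = 2.0, 3.0
    t = np.linspace(0.0, 10.0, 100001)
    dt = t[1] - t[0]
    x = np.exp(-a*t)                                              # Eq. (9.3-13a)
    y = ((a + b)*np.exp(-a*t) - 2*b*np.exp(-b*t))/(a - b)         # Eq. (9.3-15b)
    Ex, Ey = np.cumsum(x**2)*dt, np.cumsum(y**2)*dt               # partial energies E_x(T), E_y(T)
    print(np.all(Ex >= Ey - 1e-6))                                # E_x(T) >= E_y(T) for every T
    print(Ex[-1], Ey[-1], 1/(2*a))                                # total energies agree, ~1/(2a) = 0.25
    diff = 2*b/((a - b)**2)*(np.exp(-a*t) - np.exp(-b*t))**2      # Eq. (9.3-18)
    print(np.max(np.abs((Ex - Ey) - diff)))                       # small numerical integration error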
In summary, we have shown that any stable and causal LTI system can be
expressed as a minimum-phase system connected in tandem with an all-pass
system. Even though the gain of the all-pass system is one at all frequencies, its
output is a distorted version of its input due to the all-pass system phase shift. The
total energy of the all-pass system output waveform is the same as that of its input
because the system gain is one at all frequencies. However, the effect of the phase
shift is to delay the partial energy of its input waveform.

9.4 BANDPASS SYSTEM FUNCTIONS

For all the cases we analyzed in previous sections, the system function poles and zeros are located on the σ axis. In this section, we extend our analysis to system functions that contain poles and/or zeros off the σ axis. To begin, consider the stable LTI system with the unit-impulse response

h(t) = Ae^{-\alpha t}\sin(\omega_1 t)u(t), \qquad \alpha > 0    (9.4-1)

The system function is the Laplace transform of h(t), which is, from pair 4b of Table 7.4-1,

H(s) = \frac{A}{(s+\alpha)^2 + \omega_1^2}, \qquad \sigma > -\alpha    (9.4-2)

To simplify our analysis of this system's gain and phase shift, we shall make some approximations that require ω₁ >> α. As we perform our analysis, we shall determine exactly how much larger ω₁ must be relative to α in order that each approximation made is valid.
Approximations are important in practical analysis. They are often made in analyzing problems in which the exact results are complicated and not easy to visualize. The procedure often used is to make all approximations needed to obtain a reasonable analysis of the problem. After the analysis is complete, all the approximations made are analyzed to determine the conditions required for all the approximations to be valid. Computers are excellent for obtaining accurate numerical results. However, an understanding of the general effects of various parameters, which is of basic importance in design, is readily obtained from approximate theoretical analysis as illustrated in this section. This is why engineering has often been referred to as a science of intelligent approximations.
The s-plane diagram of the system function, H(s), is shown in Fig. 9.4-1. As shown, there is a pole at s = −α + jω₁, a pole at s = −α − jω₁, and no zeros. The system gain at the frequency ω₀ is equal to

|H(j\omega_0)| = \frac{|A|}{l_{p1}\,l_{p2}}    (9.4-3)

in which, as shown in Fig. 9.4-1, l_{p1} and l_{p2} are the distances from the upper and lower poles, respectively, to the point ω₀ on the ω axis. First, the distance l_{p2} is seen to be the length of the hypotenuse of a right-angle triangle with a horizontal leg of length α and a vertical leg of length ω₁ + ω₀. Because α << ω₁, the vertical leg is much longer than the horizontal leg, so that the hypotenuse length is approximately equal to the length of the vertical leg. Thus our approximation is l_{p2} ≈ ω₁ + ω₀ for all positive frequencies, ω₀. The worst case for this approximation is for ω₀ = 0, in which case we require l_{p2} = √(α² + ω₁²) ≈ ω₁ for our approximation to be acceptable.

Fig. 9.4-1

The approximation error is less than 5% if ω₁ ≥ 3.13α. With this approximation, we have for Eq. (9.4-3)

|H(j\omega_0)| \approx \frac{|A|}{l_{p1}(\omega_1 + \omega_0)}    (9.4-4)

Now observe that l_{p1} is the length of the hypotenuse of a right-angle triangle with a horizontal leg of length α and a vertical leg of length |ω₁ − ω₀|. To approximate l_{p1}, we first consider small values of the frequency ω₀ for which the vertical leg is much longer than the horizontal leg, so that the hypotenuse length is approximately equal to the length of the vertical leg. Thus our approximation for small values of ω₀ is l_{p1} ≈ ω₁ − ω₀. Similar to our approximation for l_{p2}, this approximation error is less than 5% if (ω₁ − ω₀) ≥ 3.13α or, equivalently, if ω₀ ≤ (ω₁ − 3.13α). With this approximation, the gain for small values of ω₀ from Eq. (9.4-4) is

|H(j\omega_0)| \approx \frac{|A|}{\omega_1^2 - \omega_0^2}    (9.4-5)

This approximation is good for frequencies ω₀ ≤ (ω₁ − 3.13α).
Now, for frequencies within a few times α about ω₁, the error of the approximation l_{p1} ≈ |ω₁ − ω₀| is larger than 5%. Thus we consider the frequency range ω₁ − 3α ≤ ω₀ ≤ ω₁ + 3α. Our analysis in this range is simplified if we can consider l_{p2} a constant. Now l_{p2} ≈ 2ω₁ at ω₀ = ω₁. The error of this approximation is no greater than 5% in the range ω₁ − 3α ≤ ω₀ ≤ ω₁ + 3α for ω₁ ≥ 30α. With this approximation, our approximate expression for the gain in this frequency range is, from Eq. (9.4-3),

|H(j\omega_0)| \approx \frac{|A|}{2\omega_1\, l_{p1}}    (9.4-6)
Note that the system gain is inversely proportional to l_{p1} in this frequency range. From this expression, we observe that the gain curve has a maximum at ω₀ = ω₁, where l_{p1} = α, its smallest value. Thus an approximate expression for the maximum gain is |A|/(2ω₁α).
For ω₀ ≥ ω₁ + 3.13α, we can approximate l_{p1} with an error less than 5% as l_{p1} ≈ ω₀ − ω₁. Thus, from Eq. (9.4-4), our approximation in this range is

|H(j\omega_0)| \approx \frac{|A|}{\omega_0^2 - \omega_1^2}    (9.4-7)

A good approximation of the exact gain curve shown in Fig. 9.4-2 is obtained with the approximations (9.4-5), (9.4-6), and (9.4-7) in their proper frequency ranges. For all the approximations made to be valid with an error no greater than 5%, note that we require ω₁ ≥ 30α. However, note from the figure that the approximations are very good for ω₁ = 15α.
The gain curve is seen to be that of a bandpass filter with a maximum gain of |A|/(2αω₁) at ω₀ = ω₁. The half-power frequencies are the frequencies at which the gain is 1/√2 of its maximum. Similar to our discussion in Section 9.1, we have from Eq. (9.4-6) that the half-power frequencies are those for which l_{p1} = √2 α. From Fig. 9.4-1, l_{p1} = √2 α at ω₀ = ω₁ ± α. Thus the half-power bandwidth is 2α. Although our approximations require ω₁ ≥ 30α, we can observe from Fig. 9.4-2 that this is a very good approximation even for ω₁ as low as 15α.
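A numerical sketch (not from the text) comparing the exact gain of this two-pole bandpass function with the approximations above, using the form of Eq. (9.4-2) with A = 1, α = 1, and ω₁ = 15 (so Q = 7.5):

    import numpy as np

    A, alpha, w1 = 1.0, 1.0, 15.0
    gain = lambda w: abs(A/((1j*w + alpha)**2 + w1**2))   # exact gain from Eq. (9.4-2)
    peak = A/(2*alpha*w1)                                 # approximate maximum gain at w0 = w1
    print(gain(w1), peak)                                 # nearly equal
    for wq in (w1 - alpha, w1 + alpha):                   # half-power frequencies w1 +/- alpha
        print(gain(wq)*np.sqrt(2)/peak)                   # close to 1, so the bandwidth is about 2*alpha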
An important parameter characterizing a bandpass filter is its quality factor, Q, which, in filter theory, is defined as

Q = \frac{\text{Bandpass region center frequency}}{\text{Bandwidth of the bandpass region}}    (9.4-8)

Fig. 9.4-2 Exact gain curves for Q = ω₁/2α = 15 and 7.5.

Note that 1/Q is the fraction the bandwidth is of the bandpass center frequency. Thus the Q of our bandpass filter is

Q = \frac{\omega_1}{2\alpha}    (9.4-9)

Note that our approximations about the passband region require ω₁ ≥ 30α or, equivalently, a Q greater than or equal to 15, for which the bandwidth, 2α, is 6.7% of the center frequency, ω₁. The Q of bandpass filters often is greater than this. In fact, the Q of microwave filters often exceeds 10⁴ and sometimes exceeds 10⁶. To relate the bandpass parameters to the s plane, observe that, for our approximations, the center frequency is the distance a pole is from the σ axis and the bandwidth is equal to twice the distance a pole is from the ω axis. In the time domain, we note from the expression of the unit-impulse response, Eq. (9.4-1), that the center frequency is equal to the frequency of the sinusoid and the bandwidth is equal to twice the exponential decay rate.
From Fig. 9.4-1, the phase shift, θ, of the bandpass filter is seen to be

\theta = \tan^{-1}\!\left(\frac{\omega_1 - \omega_0}{\alpha}\right) - \tan^{-1}\!\left(\frac{\omega_1 + \omega_0}{\alpha}\right)    (9.4-10)

The phase shift is zero at ω₀ = 0 and, as ω₀ → ∞, decreases monotonically to −180°. For ω₁ ≥ 30α, we observe that tan⁻¹[(ω₁ + ω₀)/α] ≥ tan⁻¹(30) = 88°, so that a good approximation for the phase shift if the Q is greater than 15 is

\theta \approx \tan^{-1}\!\left(\frac{\omega_1 - \omega_0}{\alpha}\right) - 90°    (9.4-11)

with an error less than 2°. The phase-shift curve is shown in Fig. 9.4-3.
The gain and the phase shift in the filter bandpass region are of major concern. In the bandpass region, ω₁ − 3α ≤ ω₀ ≤ ω₁ + 3α, the gain is given by Eq. (9.4-6) and the phase shift is given by Eq. (9.4-11). Note that the pole at s = −α − jω₁ only contributed a constant to these equations; the shape of the gain

Fig. 9.4-3 Exact phase-shift curves for Q = 15 and 7.5.

and phase-shift curves is determined solely by the pole at s = −α + jω₁. This is a consequence of the fact that the distance and the angle from the pole at s = −α − jω₁ are approximately constant for ω₀ in the bandpass region.
Let us extend this important observation to more general cases. Consider the s-plane diagram of a system function in which there are poles and zeros scattered about the s plane. We consider the case in which the poles and zeros that are not within the circle of radius r, as shown in Fig. 9.4-4, are at a distance much larger than r from the circle. That is, the case we consider is one in which there are poles and zeros within the circle and all other poles and zeros are at a distance d >> r from the circle. Now consider the system gain and phase shift for frequencies ω₁ − r < ω₀ < ω₁ + r. In this frequency range, the lengths and angles of the vectors from the poles and zeros outside the circle to ω₀ are approximately constant. Thus their contribution to the system gain and phase shift is approximately a constant for frequencies ω₁ − r < ω₀ < ω₁ + r. Thus the shape of the gain and phase-shift curve in the vicinity of ω₁ is determined solely by the poles and zeros within the circle. Consequently, the poles and zeros within the circle are called dominant poles and zeros for the frequency range ω₁ − r < ω₀ < ω₁ + r. In our previous example, the pole at s = −α + jω₁ is the dominant pole for the bandpass region.
This observation is the basis of low-pass to bandpass transformations. As an illustration, consider a system with the s-plane diagram shown in Fig. 9.4-5, in which ω₁ >> α. From our discussion above, the shape of the gain and phase-shift curves in the vicinity of ω₁ is determined solely by the pole at s = −α + jω₁ and the zero at s = −β + jω₁. Now note that the length and the angle of the vector from this pole and the vector from this zero to ω₀ = ω₁ + δ are exactly the same as the length and angle of the vectors from the pole and zero in Fig. 9.2-1a to ω₀ = δ. Because ω₁ >> α, we conclude that the shape of the gain and the phase-shift curves of the system of Fig. 9.4-5 for frequencies about ω₁ are the same as those of the system of Fig. 9.2-1a for frequencies about zero. Because the system of Fig. 9.2-1a is a high-pass filter, we immediately conclude that the system of Fig. 9.4-5 is a bandreject filter because its gain increases as |ω₀ − ω₁| increases and the shape of the gain curve about ω₁ is the same as that shown in Fig. 9.2-1b. For the special case in which β = 0, the two zeros are on the ω axis at ω = ±ω₁, so that the system gain is zero at ω = ω₁. This is called a notch filter, in which the bandwidth of the notch is determined by the value of α, the distance the poles are from the ω axis.

Fig. 9.4-4
Fig. 9.4-5

The importance of the discussion above is that one can design a filter for a desired shape of the gain and phase-shift curves centered about a frequency ω = ω₁ by first designing a filter with the desired shape of the gain and phase-shift curves about ω = 0. A filter with approximately the same shape of the gain and phase-shift curves centered about ω₁ then can be obtained by moving the poles and zeros vertically up an amount ω₁ and also down by the same amount (for the required conjugate pairs). This basic concept is often used in filter design. The low-pass Butterworth filter will be analyzed in the next section. We then will use the low-pass to bandpass transformation concepts discussed above to design a bandpass Butterworth filter.

9.5 ALGEBRAIC DETERMINATION OF THE SYSTEM FUNCTION

The geometric analysis technique discussed in the previous sections of this chapter
lends great insight into the relation between the gain and phase shift of a stable LTI
system and the locations of its system function poles and zeros. With this insight,
one often can determine an approximate s-plane pole-zero pattern of the system
function required for a desired system gain and phase shift. A computer then can be
used to determine the required pole and zero locations with greater precision.
Another technique used to determine the required system function is an algebraic
one. We’ll discuss this technique because of the insight it lends to s-plane analysis.
Rather than beginning with a general discussion of the technique, we begin by
discussing its use in the design of the low-pass Butterworth filter.

9.5A The Low-pass Butterworth Filter


A low-pass filter for which the gain is

|Hn(jω)| = A / √[1 + (ω/ωc)^(2n)]                    (9.5-1)

is called an nth-order Butterworth filter. Figure 9.5-1 is a graph of the gain for some
values of n.
Observe that the gain curve is that of a low-pass filter for which, for any value of
n, the half-power frequency is ωc and which approaches the ideal rectangular gain
curve monotonically for increasing values of n. In accordance with our discussion of
the Paley-Wiener criterion in Section 5.11B, the ideal rectangular gain curve cannot
be achieved by a physical system. However, as we discussed, the gain curve of a
physical system can closely approximate the ideal rectangular curve. The Butter-
worth filter is one such approximation.
To design a filter with the Butterworth gain curve, we need to determine the
required system function, Hn(s). For this, we first eliminate the square root in Eq.
(9.5-1) by squaring the gain expression

|Hn(jω)|² = A² / [1 + (ω/ωc)^(2n)]                   (9.5-2)

Note that because h(t) is a real function of t, we have, with the use of our result, Eq.
(5.5-5b),

|Hn(jω)|² = Hn(jω)Hn(-jω)

and because Hn(jω) = Hn(s)|s=jω,

we can express the equation above as

Hn(s)Hn(-s)|s=jω = A² / [1 + (ω/ωc)^(2n)]            (9.5-3b)


Fig. 9.5-1 Butterworth gain curve for some values of n.



Thus, by replacing ω in Eq. (9.5-2) with s/j, we obtain

Hn(s)Hn(-s) = A² / [1 + (s/jωc)^(2n)]                (9.5-4)

To determine the RAC, we have from our discussion of the interval property of the
RAC in Section 6.2 that the RAC of a function f(t) is an interval between the poles
of its transform, F(s). Now, Eq. (9.5-3b) requires the ω axis to be included in the
RAC of H(s)H(-s). Thus the RAC for Eq. (9.5-4) must be the interval σa < σ < σb
in which s = σa + jω is the vertical line on which the first pole to the left of the ω
axis lies and s = σb + jω is the vertical line on which the first pole to the right of the
ω axis lies.
Now, the expression in Eq. (9.5-4) has no zeros, but it does have poles at those
values of s = sk for which

1 + (sk/jωc)^(2n) = 0                                (9.5-5a)

With the use of Appendix A, the solution of this 2nth-degree polynomial equation
for the 2n values of s which satisfy it is obtained as follows:

(sk/jωc)^(2n) = -1 = e^(j(1+2k)π),    k = 0, ±1, ±2, ±3, . . .      (9.5-5b)

where

sk/jωc = e^(j(1+2k)π/(2n))                           (9.5-5c)

so that the poles of Hn(s)Hn(-s) in Eq. (9.5-4) are at

sk = ωc e^(j[1/2 + (1+2k)/(2n)]π)

From this solution we have |sk| = ωc, and for k = 0, ±1, ±2, ±3, . . .

∠sk = [1/2 + (1+2k)/(2n)]π rad = [90 + ((1+2k)/(2n))·180]°          (9.5-7)

Thus the solutions lie on a circle with its center at the origin and radius equal to ωc
in the s plane. The pole positions for n = 2 and 3 are shown in Fig. 9.5-2.
Observe from Eq. (9.5-7) that the angle of sk is increased by an amount of 2π
radians if k is increased by an amount of 2n. Thus, for any value of k0, the value of sk
is the same for k = k0 and for k = k0 + 2n. Consequently, there are exactly 2n

Fig. 9.5-2  s-plane roots for n = 2 and for n = 3

distinct roots of our 2nth-degree polynomial equation, Eq. (9.5-5a), in accordance
with the fundamental theorem of algebra discussed in Appendix A.
We now know the pole locations of Hn(s)Hn(-s) in Eq. (9.5-4). But which poles
belong to Hn(s)? Because the Butterworth filter is to be a causal and stable system,
we must choose all the poles of Hn(s) to be in the left half of the s plane in
accordance with our discussion in Section 8.2. Now note that if, for k = k0, a
solution of Eq. (9.5-7) is ωc∠φ, then for k = k0 + n, we obtain the solution
ωc∠(φ + π). This means that poles of Hn(s)Hn(-s) occur in pairs that are 180°
apart as is seen in Fig. 9.5-2. Thus there is a right-half-plane pole for every left-
half-plane pole. Consequently, we choose the poles of Hn(s) to be the left-half-plane
pole solutions of Eq. (9.5-5a); the right-half-plane poles then are those of Hn(-s).
For example, the general expressions for the system function of a first-order and a
second-order Butterworth filter are

H1(s) = A / [(s/ωc) + 1],    σ > -ωc                 (9.5-8a)

and

H2(s) = A / [(s/ωc)² + √2(s/ωc) + 1],    σ > -0.707ωc          (9.5-8b)

The numerator constant is chosen in order that Hn(0) = A.

Example  To illustrate the use of the results we've obtained, we design a low-pass
Butterworth filter with the specification that the dc gain be 10 and not drop below 9
before a frequency of 12 kHz. In the reject band, it is specified that the gain be below
1.5 above a frequency of 30 kHz. Note from Fig. 9.5-1 that we can satisfy or exceed
these specifications by designing the filter such that

|Hn(j0)| = 10                                        (9.5-9a)

|Hn(j24000π)| = 9                                    (9.5-9b)

and

|Hn(j60000π)| = 1.5                                  (9.5-9c)

We have three unknowns A, ωc, and n in Eq. (9.5-1) that can be specified to satisfy
the three equations of Eq. (9.5-9). First, it is clear that Eq. (9.5-9a) is satisfied with
A = 10. We now need to determine the values of ωc and n in Eq. (9.5-1) such that
the other two equations of (9.5-9) are satisfied. For this, we have from Eq. (9.5-2)
that

(ω/ωc)^(2n) = A²/|Hn(jω)|² - 1                       (9.5-10a)

The right-hand side of this equation is a function only of the system gain. To
simplify our manipulations, call this function of the gain T(ω):

T(ω) = A²/|Hn(jω)|² - 1                              (9.5-10b)

Then the equation we must satisfy is

(ω/ωc)^(2n) = T(ω)                                   (9.5-10c)

In Eqs. (9.5-9) the gain is specified at two frequencies, ω1 and ω2. Thus we can
eliminate ωc from Eq. (9.5-10c) to obtain

(ω1/ω2)^(2n) = T(ω1)/T(ω2)                           (9.5-10d)

so that, by taking the logarithm of this equation and solving for n, we obtain

n = (1/2) · ln[T(ω1)/T(ω2)] / ln(ω1/ω2)              (9.5-11a)

We then can use the calculated value of n to determine ωc from Eq. (9.5-10c):

ωc^(2n) = ω1^(2n)/T(ω1) = ω2^(2n)/T(ω2)

or

ωc = ω1/[T(ω1)]^(1/2n) = ω2/[T(ω2)]^(1/2n)           (9.5-11b)

The calculated value of n is used to determine ωc. However, an integer value of n
must be used for the order of the Butterworth filter. The order is chosen to be the
next integer above the calculated value of n if it is not an integer.
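To make the two-step procedure concrete, the short Python sketch below evaluates Eqs. (9.5-10b), (9.5-11a), and (9.5-11b) for a gain specified at two frequencies. It is only an illustration of the calculation; the function and variable names are not from the text, and the dc-gain convention of Eq. (9.5-1) is assumed.

```python
import math

def butterworth_order_and_cutoff(A, w1, g1, w2, g2):
    """Choose n and wc so that |H(j*w1)| = g1 and |H(j*w2)| = g2 for the
    Butterworth gain A / sqrt(1 + (w/wc)**(2n)), per Eqs. (9.5-10b)-(9.5-11b)."""
    T1 = (A / g1) ** 2 - 1.0                  # T(w1) of Eq. (9.5-10b)
    T2 = (A / g2) ** 2 - 1.0                  # T(w2)
    n_exact = 0.5 * math.log(T1 / T2) / math.log(w1 / w2)   # Eq. (9.5-11a)
    n = math.ceil(n_exact)                    # the filter order must be an integer
    wc = w1 / T1 ** (1.0 / (2.0 * n_exact))   # Eq. (9.5-11b)
    return n_exact, n, wc

# The 12 kHz / 30 kHz example of the text (frequencies in rad/s):
n_exact, n, wc = butterworth_order_and_cutoff(
    A=10.0, w1=2 * math.pi * 12e3, g1=9.0, w2=2 * math.pi * 30e3, g2=1.5)
print(n_exact, n, wc / math.pi)   # about 2.85, 3, and 30,956 (i.e., wc = 30,956*pi rad/s)
```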

For our example, using A = 10 and the values from Eqs. (9.5-9), the calculated
value of n from Eq. (9.5-11a) is

n = (1/2) · ln[T(ω1)/T(ω2)] / ln(ω1/ω2) = (1/2) · (-5.2214)/(-0.9163) = 2.8492          (9.5-12a)

so that the value of ωc from Eq. (9.5-11b) is

ωc = 24,000π / (0.2346)^(1/2n) = 24,000π / (0.2346)^0.1755 = 24,000π / 0.7753 = 30,956π rad/s          (9.5-12b)

which is 15,478 Hz. We would obtain the required specifications with these values of
ωc and n. However, the order of the filter, n, must be an integer. From Fig. 9.5-1, we
observe that the specification will be exceeded by increasing the value of n. Thus we
choose n = 3, which is the first integer greater than the calculated value. Therefore
the designed filter is a third-order low-pass Butterworth with A = 10 and
ωc = 30,956π rad/s, which corresponds to 15,478 Hz. We now determine the pole
locations because they are the solutions of Eq. (9.5-7) that lie in the left half of the s plane.

s1 = ωc∠180° = ωc(-1 + j0) = -30,956π + j0                      (9.5-13a)

s2 = ωc∠120° = ωc(-0.5 + j0.866) = (-15,478 + j26,808.7)π       (9.5-13b)

s3 = ωc∠-120° = ωc(-0.5 - j0.866) = (-15,478 - j26,808.7)π      (9.5-13c)

and the system function of the desired third-order Butterworth filter is

H3(s) = Aωc³ / [(s - s1)(s - s2)(s - s3)],    σ > -0.5ωc        (9.5-14)

By expanding the denominator of Eq. (9.5-14), the general expression of a third-
order Butterworth system function is

H3(s) = A / [(s/ωc)³ + 2(s/ωc)² + 2(s/ωc) + 1],    σ > -0.5ωc   (9.5-15)
By partial fraction expansion of Eq. (9.5-14), the unit impulse response of a third-
order Butterworth filter can be shown to be

Also, because H(s) = Y(s)/X(s), in which Y(s) is the Laplace transform of the
system output and X(s) is the Laplace transform of the system input, we have
from Eq. (9.5-15)

[(s/ωc)³ + 2(s/ωc)² + 2(s/ωc) + 1] Y(s) = A X(s)

Using the time differentiation property (item 6 of Table 7.4-2), we obtain the inverse
Laplace transform of this equation, which is the differential equation relating the
system output, y(t), and system input, x(t):

(1/ωc³) d³y/dt³ + (2/ωc²) d²y/dt² + (2/ωc) dy/dt + y(t) = A x(t)          (9.5-18)

The third-order Butterworth low-pass system can be synthesized by synthesizing a
system for which the input and output satisfy this differential equation.
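As a quick numerical check of the design (an illustration only, not a step of the text), the sketch below forms the denominator of Eq. (9.5-14) from the three left-half-plane poles of Eq. (9.5-13) and evaluates the gain at dc, 12 kHz, and 30 kHz; the values should meet or exceed the specifications of 10, 9, and 1.5.

```python
import numpy as np

wc = 30956 * np.pi                     # cutoff frequency of Eq. (9.5-12b), rad/s
A = 10.0                               # dc gain
# Left-half-plane Butterworth poles of Eq. (9.5-13)
poles = wc * np.exp(1j * np.deg2rad([180.0, 120.0, -120.0]))
den = np.real(np.poly(poles))          # coefficients of (s - s1)(s - s2)(s - s3)
num = np.array([A * wc**3])            # numerator of Eq. (9.5-14)

def gain(f_hz):
    s = 1j * 2 * np.pi * f_hz
    return abs(np.polyval(num, s) / np.polyval(den, s))

for f in (0.0, 12e3, 30e3):
    print(f, gain(f))                  # about 10, about 9.06 (>= 9), about 1.36 (<= 1.5)
```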
A second synthesis method is to express the system function given in Eq. (9.5-14)
as

H3(s) = Ha(s)Hb(s)                                   (9.5-19a)

in which, with s1, s2, and s3 given by Eq. (9.5-13),

Ha(s) = Aωc / (s - s1),    σ > -ωc                   (9.5-19b)

and

Hb(s) = ωc² / [(s - s2)(s - s3)],    σ > -0.5ωc      (9.5-19c)

The tandem connection of these two systems is the desired third-order low-pass
Butterworth.
A third synthesis of this filter can be obtained by noting that H3(s) also can be
expressed as

H3(s) = Ha(s) + Hb(s)                                (9.5-20a)

in which

Ha(s) = Aωc / (s - s1),    σ > -ωc                   (9.5-20b)

and

Hb(s) = -Aωc s / [(s - s2)(s - s3)],    σ > -0.5ωc   (9.5-20c)

Using this decomposition of H3(s), the desired third-order low-pass Butterworth
filter can also be synthesized as the parallel connection of two systems. Observe
from this that generally there are several possible realizations of a given LTI system
which are obtained by different decompositions of the given system function, H(s).

9.5B The Bandpass Butterworth Filter


The low-pass filter now can be transformed into a bandpass filter as discussed in
Section 9.4. For example, let us transform our low-pass filter into a bandpass filter
with a center frequency of f1 = 455 kHz so that it can be used as an intermediate-
frequency (IF) amplifier of an AM radio receiver. In accordance with our discussion
in Section 9.4, we move the low-pass filter poles and zeros parallel to the ω axis up
by an amount ω1 = 2πf1 = 910π krad/s and down by the same amount for the
conjugate poles to obtain

where, from Eq. (9.5-13), we have

Although the shape of the bandpass filter about ω = ω1 is the same as that of the
low-pass filter because ω1 >> ωc (for our example, ω1 = 29.4ωc), we must deter-
mine A so that the gain of the bandpass filter at ω1 is the same as the low-pass filter at
ω = 0. It is the poles at s = s1, s3, and s5 that determine the shape of the gain curve
in the bandpass region. In this region, the distances from the poles at s = s2, s4, and
s6 are approximately constant and equal to 2ω1. Thus, for the bandpass filter gain to
have the same magnitude at ω1 as the low-pass filter at zero frequency, we must
make A = 10(2ω1)³. The factor of (2ω1)³ cancels the approximately constant factor
of 1/(2ω1)³ contributed by the three poles at s = s2, s4, and s6. With this choice of
A, the gain curve of the bandpass filter with the system function given by Eq. (9.5-
21) has the same shape and magnitude in its bandpass region as the low-pass filter in
its passband.
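A sketch of this pole-shifting step is given below. It is only an illustration of the idea and not the text's Eq. (9.5-21): the three low-pass poles of Eq. (9.5-13) are copied up and down by jω1 to form a sixth-degree bandpass denominator, and the numerator constant is then chosen so that the gain at ω1 equals the low-pass dc gain of 10.

```python
import numpy as np

wc = 30956 * np.pi                        # low-pass cutoff, rad/s
w1 = 2 * np.pi * 455e3                    # IF center frequency, rad/s
lp_poles = wc * np.exp(1j * np.deg2rad([180.0, 120.0, -120.0]))

# Shift the low-pass poles up by j*w1 and down by j*w1 (conjugate pairs).
bp_poles = np.concatenate([lp_poles + 1j * w1, lp_poles - 1j * w1])
den = np.real(np.poly(bp_poles))          # sixth-degree bandpass denominator

# Numerator constant chosen so the gain at w1 equals the low-pass dc gain of 10.
K = 10.0 * abs(np.polyval(den, 1j * w1))

def gain(w):
    return K / abs(np.polyval(den, 1j * w))

for dw in (0.0, 2 * np.pi * 12e3, -2 * np.pi * 12e3):
    # 10 at the center; the offset values approximate the low-pass gain at 12 kHz
    print(gain(w1 + dw))
```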

The technique for the design of a Butterworth filter can be generalized. Consider
again Eq. (9.5-2), which is the basic equation specifying the gain curve of the low-
pass Butterworth filter. The expression there is of the form

|Hn(jω)|² = A² / [1 + Pn²(ω/ωc)]                     (9.5-22)

where Pn(ω) is an nth-degree polynomial that is chosen for the filter to have the
desired gain curve. For the nth-order low-pass Butterworth filter, the polynomial is
Pn(ω) = ωⁿ. Generally, for a low-pass filter in which the gain is approximately
constant in the passband and then decreases rapidly outside the passband and in
which the dc gain is equal to A, we require the polynomial, Pn(ω), to be equal to zero
at ω = 0, be small for ω in the passband, and then increase rapidly outside the
passband. The choice of Pn(ω) = ωⁿ for the low-pass Butterworth filter is the
simplest polynomial that can be chosen. There are many others that can be
chosen. Often, the name of the filter is the same as the name of the polynomial
used. For example, the nth-order low-pass Chebyshev filter is obtained by using the
Chebyshev polynomial, Tn(ω), and the nth-order low-pass elliptic filter is obtained
by using the Jacobian elliptic polynomial, Un(ω). Each of these polynomials results
in a different approximation to the ideal low-pass filter gain curve. We shall not
discuss these specific filters because our objective in this section has been only to
introduce the algebraic technique for the determination of the system function. We
only used the Butterworth filter to illustrate the algebraic concepts and a design
technique using them. Texts specifically concerned with filter design discuss these
and other types of filters in detail.

PROBLEMS

9-1 Obtain the result illustrated by Fig. 9.1-1b algebraically instead of geometrically
as in the text.

9-2 Figure 9.1-2b is the graph of the gain of a one-pole low-pass filter. From Eq.
(9.1-6b), the exact expression for the gain is

Expand the expression in a power series and thus show that, for small values
of ω, the gain is approximately

so that, for small values of ω, the shape of the gain curve is parabolic.

9-3 For small values of θ, the approximation tan θ ≈ θ was used to obtain Eq.
(9.1-7e) and the approximation sin θ ≈ tan θ was used to obtain Eq. (9.1-8).
For each of these approximations, determine the maximum value of θ in
degrees for which the approximation error is less than 1%, 5%, and 10%.

9-4 Show that the maximum phase shift given by Eq. (9.2-8f) occurs at the
frequency ω0 = √(αβ).

9-5 Obtain Eq. (9.2-13b) by showing that θz - θp = ψ.

9-6 The system function of an all-pass system is Ha(s) = (s - α)/(s + α), σ > -α.
(a) Determine a differential equation relating the system input, x(t), and
output, y(t).
(b) Determine the system unit-step response.
(c) What is the asymptotic value of the unit-step response as t → ∞?
Explain.

9-7 (a) Show that the system function Ha(s) = (s - α)/(s + α), σ > -α, is an all-
pass filter. Determine an expression for its phase shift.
(b) Show that the system function Hb(s) = [(s - α)² + ω1²]/[(s + α)² + ω1²],
σ > -α, also is an all-pass filter. Determine an expression for its phase
shift.

9-8 To eliminate 60-Hz power line interference in an amplifier, it is decided to
connect a notch filter in tandem with the amplifier. The specifications for this
application are:
1. The notch filter should have zero gain at a frequency of 60 Hz.
2. The gain should be no greater than 1/√2 in the band 59.5 < f < 60.5.
3. The gain should be approximately one and the phase shift should be
approximately zero for frequencies several hertz from 60 Hz.
Determine the required notch filter system function.

9-9 The s-plane pole-zero diagram of a filter is shown in Fig. 9.4-5.
(a) Determine the system function of the system.
(b) Determine approximate gain and phase-shift expressions for the case in
which β = 0 and ω1 >> α so that the system is a notch filter and show
that the system gain and phase shift are those of the single-pole high-pass
filter discussed in Section 9.2 shifted by ω1.

9-10 (a) Determine the system function of a radio-frequency (rf) bandpass


amplifier with an s-plane diagram as shown in Fig. 9.4-1. The filter
specifications are as follows:

1. The center frequency should be 1 MHz.


2. The filter bandwidth should be 50 kHz.
3. The filter maximum gain should be 1000.
(b) Determine the Q of the filter.

9-11 The system function of an audio amplifier is

H(s) = As / [(s + 20π)(s + 6π × 10⁴)],    σ > -20π

where A > 0.
(a) Show that the system function can be expressed as H(s) = Ha(s)Hb(s) in
which Ha(s) and Hb(s) are the system functions discussed in Section 9.1
so that the amplifier can be viewed as the tandem connection of those two
systems.
(b) Use the result of part a to show that the amplifier gain is approximately A
in the audio-frequency range and that the gain falls 3 dB at the band
edges. Determine the lower and the upper frequencies of the band edges.
(c) Use the result of part a to show that the amplifier phase shift is
approximately zero in the audio-frequency range. What is the phase
shift at the lower and the upper frequencies of the band edges determined
in part b?

9-12 A problem that occurs is the experimental determination of the Q of bandpass
filters for which the Q is high. The reason is that, in accordance with Eq. (9.4-
8), the Q is inversely proportional to the bandwidth. Thus, if the bandwidth is
small, a small error in the measurement of the bandwidth can lead to a large
error of the value of Q. To reduce this measurement error, a measurement
technique called the logarithmic decrement often is used. Let the impulse
response of the bandpass filter be as given by Eq. (9.4-1). Note that the
positive portion of the envelope of h(t) is g(t) = Ae^(-αt)u(t).
(a) Show that h(t) = g(t) at the times tk = (1/ω1)[θ + 2πk].
(b) Thus show that h(tn)/h(tm) = e^(2π(m-n)α/ω1).
(c) Use the result of part b to show that Q = (m - n)π / ln[h(tn)/h(tm)].
The measurement technique using this result is to measure the ratio of the
value of two maxima of the impulse response at two instances separated by
(m - n) cycles of the sinusoid. The number of cycles of separation can be
obtained by counting the number of peaks of h(t). The Q is then π times the
number of cycles of separation divided by the logarithm of the ratio. This is
one technique used to determine the Q of a microwave cavity for which the Q
often is above 20,000.

9-13 It is desired to design an LTI filter with a maximum filter gain of 100 with
two bandpass regions: one with a center frequency of 1 MHz and a 3-dB
bandwidth of 50 kHz, the other with a center frequency of 2 MHz and also
with a 3-dB bandwidth of 50 kHz.
(a) Use the tandem connection of two systems of the form discussed in
Section 9.4 to determine an approximate system function of the desired
filter.
(b) The maximum gain in each bandpass region is not the same. How can the
system function determined in part a be modified so the maximum gain
in each bandpass region is equal?

9-14 The system functions of two stable LTI systems are

H1(s) = 1/(s + α),  σ > -α    and    H2(s) = (s - α)/(s + α)²,  σ > -α

(a) Determine the unit-impulse response of each system.
(b) Show that the gain of both systems is the same by showing that the
second system can be viewed as the first system connected in tandem
with an all-pass filter.
(c) What is the difference in the phase shift of the two systems?

9-15 Consider the following three systems with the responses yn(t), n = 1, 2, or 3,
for the input x(t). For each system, α > 0 and the RAC is σ > -α.

1. H1(s) = (s - α)/(s + α)    2. H2(s) = 1/(s + α)    3. H3(s) = (s - α)/(s + α)²

(a) Determine the unit-impulse response, hn(t), of each system.
(b) Determine the unit-step response, sn(t), of each system.
(c) Show that h3(t) = h1(t) * h2(t) and thus show that the third system can be
viewed as a low-pass filter connected in tandem with an all-pass system.
(d) Thus show that y2(t) - y3(t) = y2(t) * [2αe^(-αt)u(t)] so that the difference
of their outputs is the response of a low-pass filter with the input y2(t).

9-16 (a) Determine the general expression for the system function of a low-pass
fourth-order Butterworth filter.
(b) Determine a differential equation relating the input, x(t), and output, y(t),
of the filter.

9-17 (a) Use Eq. (9.5-1) to determine ωc and n such that the gain of the
Butterworth filter is greater than or equal to 0.9 at 4000 Hz and less
than or equal to 0.1 at 6000 Hz.

(b) Sketch the s-plane diagram of the system function and label the pole and
zero locations.

9-18 Use the design technique discussed in Section 9.5 to determine the system
function of a causal and stable LTI system required for the square of the
system gain to be

|H(jω)|² = 2A² / (2 + 3ω² + ω⁴) = 2A² / [(ω² + 1)(ω² + 2)]
CHAPTER 10

INTERCONNECTION OF SYSTEMS

10.1 BASIC LTI SYSTEM INTERCONNECTIONS

There are three basic ways in which two LTI systems can be connected to form
another LTI system. The three ways are parallel, tandem, and feedback. All LTI
systems that are composed of the interconnection of a number of subsystems can be
analyzed in terms of these three basic connections. The theory of the parallel and
tandem connections has been covered in the previous chapters; also, a simple model
of feedback was discussed in Section 1.6. We’ll begin by summarizing those results
because we shall need them in subsequent sections.

10.1A The Parallel Connection


The parallel connection of two systems, A and B, is shown in Fig. 10.1-1. The input,
x(t), is applied to both systems A and B. The system output, y(t), is the sum of the
outputs of systems A and B. In accordance with our discussions in Section 3.5, if
both systems A and B are causal, then the parallel connection is a causal system
because, at any time, the output, y(t),does not depend on the future of the input, x(t).
Also, if both systems A and B are BIBO-stable, then, in accordance with our
discussion in Section 3.6, the parallel connection is a BIBO-stable system because,
for a bounded input, x ( t ) , the outputs of systems A and B are bounded. Note that
these two results are true irrespective of whether the systems A and B are linear or
nonlinear or whether systems A and B are time-invariant or time-varying. However,
if systems A and B are LTI systems with unit impulse responses ha(t) and hb(t),

Fig. 10.1-1 Parallel connection of two systems.

respectively, then the parallel connection is an LTI system with the unit impulse
response

h(t) = ha(t) + hb(t)

Thus the system function of the parallel connection is

H(s) = Ha(s) + Hb(s)

with the RAC equal to the overlap of the RACs of systems A and B.

10.1B The Tandem Connection


The tandem connection of two systems, A and B, is shown in Fig. 10.1-2.
As shown, the output of system A is the input of system B. In accordance with
our discussion in Section 4.3, if both systems are causal, then the tandem connection
is a causal system. Also, if both systems are BIBO-stable, then the tandem connec-
tion is a BIBO-stable system. Note that these two results are true irrespective of
whether systems A and B are linear or nonlinear or whether systems A and B are
time-invariant or time-varying. However, if systems A and B are LTI systems with
unit impulse responses ha(t) and hb(t), respectively, then, as shown in Section 3.1,
the tandem connection is an LTI system with the unit impulse response

h(t) = ha(t) * hb(t)

where the asterisk means the convolution of the two functions. Thus the system
function of the tandem connection is

H(s) = Ha(s)Hb(s)

with the RAC equal to the overlap of the RACs of systems A and B.
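As a small numerical illustration of these two rules (not an example from the text), the sketch below combines two first-order systems, Ha(s) = 1/(s + 1) and Hb(s) = 1/(s + 2), by polynomial arithmetic: the parallel connection adds the system functions and the tandem connection multiplies them. In both cases the RAC of the combination is the overlap of the two RACs, here σ > -1.

```python
import numpy as np

# Ha(s) = 1/(s + 1) and Hb(s) = 1/(s + 2) as (numerator, denominator) coefficient pairs
Ha = (np.array([1.0]), np.array([1.0, 1.0]))
Hb = (np.array([1.0]), np.array([1.0, 2.0]))

def parallel(h1, h2):
    """H(s) = H1(s) + H2(s): cross-multiply and add the numerators."""
    (n1, d1), (n2, d2) = h1, h2
    return np.polyadd(np.polymul(n1, d2), np.polymul(n2, d1)), np.polymul(d1, d2)

def tandem(h1, h2):
    """H(s) = H1(s)H2(s): multiply numerators and multiply denominators."""
    (n1, d1), (n2, d2) = h1, h2
    return np.polymul(n1, n2), np.polymul(d1, d2)

print(parallel(Ha, Hb))   # (2s + 3) / (s^2 + 3s + 2)
print(tandem(Ha, Hb))     # 1 / (s^2 + 3s + 2)
```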


Fig. 10.1-2 Tandem connection of two systems.



10.1C The Feedback Connection


The third basic type of interconnection is the feedback connection. It is the connec-
tion of two systems, A and B, as shown in Fig. 10.1-3. As shown, the feedback
system input is x ( t ) and the feedback system output is y(t). Note that the input of
system A is e(t) = x ( t ) - z(t), where z(t) is the output of system B. The input of
system B is y(t), so that z(t) is obtained by feeding back an operation on the system
output, y(t). Thus the path from y(t) back to x ( t ) is called the feedback path, and the
path from e(t) to y(t) is called the feedforward path. The simple model of echoing
discussed in Section 1.6 is a special case of the feedback connection in which
systems A and B are composed of an ideal amplifier in tandem with a delay. In
that section, we saw that the time-domain techniques used there did not allow us to
obtain any deep understanding of feedback system properties because we were only
able to determine the response of that simple feedback system for rather simple
inputs. However, with the LTI theory we have developed, we now can analyze the
general case in which systems A and B are more complex LTI systems and under-
stand the effect of feedback on the feedback system.
It should be noted that many physical systems, not just echoing, can be modeled
as a feedback system. For example, the simple act of a person picking up a book can
be modeled as a feedback system. In this model, the input x ( t ) is the book position
(which is a constant if the book isn’t moving) and the feedback signal, z(t), is the
position of the person’s hand. The signal, e(t),is then the distance between the book
position and the hand position. In order to pick up the book, the person must cause
e(t) to be equal to zero. This is accomplished by the processing of e(t) by the eye,
which then sends signals to the brain, which then sends signals to the arm muscles,
which then causes the hand position, z(t),to change. If the model output is chosen to
be the hand position, where z(t) = y(t), then system A in Figure 10.1-3 is the system
composed of the eye, brain, muscle, and arm dynamics and system B is simply an
ideal amplifier with gain equal to one for which the output equals the input. The
difference signal is often called the error signal and denoted by e(t) because the
object in many feedback systems, as in this example, is to cause e(t) to be equal to
zero.
As another example, consider the thermostatic control of the temperature at your
chair in your living room on a cold day. In this model, the input x ( t ) is the desired
temperature at the thermostat which you set and the signal, z(t), is the actual
temperature at the thermostat. The signal, e(t), is then the difference between the
desired and the actual temperature at the thermostat. Nothing happens if e(t) < 0
because then the actual temperature at the thermostat is greater than the desired
temperature. However, if e(t) > 0 so that x ( t ) > z(t), then a switch is closed in the

Fig. 10.1-3 Feedback connection of two systems.

thermostat, which activates the room heater and thereby causes the room temperature
to rise. This temperature rise differs from point to point in the room because of the
heat flow in the room. Because we are interested in the temperature at your chair, we
choose the output, y(t), to be the temperature at your chair. For this model then,
system A in Fig. 10.1-3 is the system model of the thermostat, heater, the room heat
flow from the room heater to your chair, and the resulting temperature rise at your
chair. System B is then a model of the relation between the temperature at your chair
and that at the thermostat.
Before analyzing the feedback system, let us note the following general state-
ments that can be made about a feedback system:

1. If systems A and B are causal systems, then the feedback system is a causal
system. This result is easily seen because if system A is causal, then its output, y(t),
does not depend on the future of its input, e(t). If system B also is causal, then its
output does not depend on the future of its input, y(t). Thus, no operation within the
feedback system depends on the future of its input. Consequently, the feedback
system output, y(t), cannot depend on the future of its input, x(t) so that the feedback
system is causal. Note that this result is true whether the systems A and B are linear
or nonlinear and also whether they are time-invariant or time-varying.
2. If systems A and B are time-invariant, then the feedback system is time-
invariant because all feedback operations are then time-invariant operations. Note
that this result is true whether the systems A and B are linear or nonlinear.
3. If systems A and B are linear, then the feedback system is linear because all the
feedback operations are then linear operations and we have shown that the parallel
and tandem connection of linear operators is a linear operator. Note that this result is
true whether the systems A and B are time-invariant or time-varying.

We thus conclude from the above statements that if systems A and B are causal
LTI systems, then the feedback system is a causal LTI system. However, if systems
A and B are stable systems, we cannot conclude that the feedback is necessarily
stable. This observation can be noted from the analysis of the simple feedback model
in Section 1.6.
Because we are concerned with the analysis of physical LTI systems in this text,
we shall analyze the case for which systems A and B are causal LTI systems in the
next section.

10.2 ANALYSIS OF THE FEEDBACK SYSTEM

We shall analyze the feedback connection in which A and B in Fig. 10.2-1 are causal
LTI systems with the unit-impulse responses ha(t) and hb(t), respectively. As shown,
the feedback system input is x ( t ) and the feedback system output is y(t). The input of
system A is e(t) = x ( t ) - z(t), where z(t) is the output of system B. The input of
system B is y(t), so that z(t) is obtained by feeding back an LTI operation on the


Fig. 10.2-1 Feedback connection of two systems.

system output, y(t). The path that contains system A is called the feedforward path,
and the path that contains system B is called the feedback path. The loop formed by
the feedforward and feedback paths is called the feedback loop.
Because systems A and B are considered to be LTI systems, we have from our
discussion in the previous section that the feedback system is an LTI system. Thus
the feedback system can be characterized by a unit impulse response, h(t), so that its
output, y(t), can be expressed as

y(t) = h(t) * x(t)                                   (10.2-1)

where the asterisk indicates the convolution. From Fig. 10.2-1, the system relations
are

e(t) = x(t) - z(t)
y(t) = ha(t) * e(t)                                  (10.2-2)
z(t) = hb(t) * y(t)

To determine h(t) from these equations, we would have to obtain from them an
explicit expression for y(t) as equal to the convolution of some function with x(t).
That function would then be h(t). Unfortunately, such an expression cannot be
obtained from Eq. (10.2-2) because y(t) appears implicitly in the equations.
However, the Laplace transforms of these equations are algebraic equations
that can be solved algebraically to obtain an expression for the feedback system
function, H(s). Thus we will use Laplace transforms to determine an algebraic
expression for H ( s ) and also its RAC.
First, the RAC of H ( s ) is easily specified. Because systems A and B are consid-
ered to be causal, we have from the previous section that the feedback system must
be causal, so that h(t) = 0 for t < 0. Thus, from our discussion in Section 8.2, the
RAC must be to the right of all the poles of H(s).
To determine the expression for H(s), we take the bilateral Laplace transform of
Eqs. (10.2-2):

E(s) = X(s) - Z(s)
Y(s) = Ha(s)E(s)                                     (10.2-3)
Z(s) = Hb(s)Y(s)

Observe that Eqs. (10.2-2) have been transformed into algebraic ones that are easily
solved for Y(s) in terms of X(s). The solution is

Y(s) = [Ha(s) / (1 + Ha(s)Hb(s))] X(s)               (10.2-4)

Now, the Laplace transform of Eq. (10.2-1) is

Y(s) = H(s)X(s)                                      (10.2-5)

Thus, by comparing Eqs. (10.2-4) and (10.2-5), we note that

H(s) = Ha(s) / [1 + Ha(s)Hb(s)]                      (10.2-6)

and the RAC, as we concluded above, is to the right of all the poles of H(s).
A way to remember the expression for H(s) is to note that the numerator is the
feedforward system function. The denominator is one plus the product of the feed-
forward and feedback system functions. The plus in the denominator is due to the
minus sign in the expression for e(t). If instead, e(t) = x(t) + z(t), then the sign in the
denominator of Eq. (10.2-6) would be minus. Thus the sign in the denominator is
simply opposite the sign in the expression for e(t). The following examples illustrate
some applications and interpretations of this result.

Example 1  As our first example, we determine the system function, H(s), of a
feedback system with ha(t) = Ae^(-αt)u(t) in which α > 0 and with hb(t) = Bδ(t). The
corresponding system functions are

Ha(s) = A / (s + α),    σ > -α                       (10.2-7a)

and

Hb(s) = B                                            (10.2-7b)

From our discussion in Chapter 9, we observe that system A in our example is a
simple low-pass filter and that system B is an ideal amplifier with a gain equal to B.
Substituting these expressions into Eq. (10.2-6), we obtain

H(s) = A / (s + α + AB)                              (10.2-8)

In accordance with our discussion in Section 10.1, the feedback system is causal.
Thus the RAC must be to the right of the pole at s = -(α + AB). The ω axis must lie
in the RAC for the system to be BIBO-stable. Thus, the feedback system is stable
only if (α + AB) > 0. This inequality requires

B > -α/A                                             (10.2-9)

Thus the feedback system is stable only if the amplifier gain, B, in the feedback path
satisfies Eq. (10.2-9), whereas the feedback system is not stable if the amplifier gain,
B, in the feedback path does not satisfy Eq. (10.2-9).
We also can obtain these results from time-domain considerations. Note from
Eq. (10.2-8) that the unit-impulse response of the feedback system is

h(t) = Ae^(-(α+AB)t)u(t)                             (10.2-10)

If (α + AB) > 0, the impulse response is a decaying exponential, so that the area
under |h(t)| is finite, which we have shown to be a necessary and sufficient condition
for the system to be BIBO-stable. If (α + AB) ≤ 0, the area under |h(t)| is not finite,
so that the system is not BIBO-stable.
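A brief symbolic check of this example is sketched below using sympy; it is an illustration, and the numerical values α = 2, A = 3, B = 1 are arbitrary choices rather than values from the text. It forms Eq. (10.2-6) for Ha(s) = A/(s + α) and Hb(s) = B and confirms that the closed-loop pole is at s = -(α + AB).

```python
import sympy as sp

s = sp.symbols('s')
alpha, A, B = 2, 3, 1                   # arbitrary illustrative values
Ha = A / (s + alpha)                    # simple low-pass feedforward system
Hb = B                                  # ideal amplifier in the feedback path

H = sp.simplify(Ha / (1 + Ha * Hb))     # Eq. (10.2-6)
print(H)                                # 3/(s + 5)
print(sp.solve(sp.denom(H), s))         # [-5], that is, s = -(alpha + A*B)
```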

Example 2  As a second example, we determine H(s) for the case in which
ha(t) = Ae^(-αt)u(t) where α > 0 as in our previous example but with hb(t) = Be^(-βt)u(t)
where β > 0. The corresponding system functions are

Ha(s) = A / (s + α),    σ > -α                       (10.2-11a)

and

Hb(s) = B / (s + β),    σ > -β                       (10.2-11b)

Now both systems A and B in our example are simple low-pass filters. Substituting
these expressions into Eq. (10.2-6), we obtain

H(s) = A(s + β) / [(s + α)(s + β) + AB]              (10.2-12)

We note that there is one zero at s = -β and two poles at the roots of the quadratic in
the denominator. In accordance with our discussion in Section 10.1, the feedback
system is causal, so that the RAC must be to the right of both poles. As we discussed
in Section 8.2, the ω axis must lie in the RAC for the system to be BIBO-stable, so
that the feedback system is stable only if both poles lie in the left half of the s plane.

Thus we must factor the denominator quadratic to determine the conditions for
which the feedback system is stable.
To simplify our analysis, we'll determine the values of A and B for which the
feedback system is stable for the special case in which α = 2 and β = 4. Substituting
these values of α and β into our expression for H(s), we obtain

H(s) = A(s + 4) / [s² + 6s + (8 + AB)] = A(s + 4) / [(s + 3)² - (1 - AB)]          (10.2-13)

with the RAC to the right of both poles. To obtain the poles, we factor the denomi-
nator quadratic as

s² + 6s + (8 + AB) = (s + 3)² - (1 - AB) = (s + 3 + √(1 - AB))(s + 3 - √(1 - AB))          (10.2-14a)

Thus the two poles are at s = -3 ± √(1 - AB). The pole locations for several values
of AB are listed below:

Value of AB      Pole Location      Pole Location
   -99                7                 -13
   -24                2                  -8
    -8                0                  -6
     0               -2                  -4
     1               -3                  -3
     2             -3 + j              -3 - j
     5             -3 + j2             -3 - j2
    17             -3 + j4             -3 - j4
    37             -3 + j6             -3 - j6

Figure 10.2-2 is a plot of the pole locations in the s plane. Note that as AB
increases from a very large negative value to AB = 1, one pole moves from a
very large negative value of σ to σ = -3 while the other pole moves from a
very large positive value of σ to σ = -3. When AB = 1, there is a second-order
pole at σ = -3 because both poles are there. For AB > 1, the poles move parallel to
the ω axis at σ = -3; one pole moves up and the other moves down such that the
two pole locations are conjugates, as they must be because the coefficients of the
polynomial, Eq. (10.2-14a), are real. Techniques to simplify the plotting of the locus
of the poles in the s plane, such as in this example, have been developed. This is
called the root-locus technique, and computer software is available with which
root-locus plots can be generated.
Observe for our example that one pole is in the right half of the s plane if
AB < -8 and on the ω axis if AB = -8, so that the feedback system is not

Fig. 10.2-2 S-plane plot of the pole locations.

BIBO-stable if AB ≤ -8. However, both poles are in the left half of the s plane if
AB > -8. Thus we conclude that the feedback system is BIBO-stable only if
AB > -8.
Also, from our discussion of the bandpass system in Section 9.4, observe that for
large values of AB, the feedback system is a bandpass filter with a bandwidth of
approximately 6 rad/s and a center frequency of approximately √(AB - 1). By
connecting two low-pass filters in a feedback arrangement, we have realized a
bandpass filter with a given bandwidth and a center frequency which can be adjusted
simply by varying the value of the gain product, AB.
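The pole locations tabulated above can be reproduced with a few lines of Python (an illustrative sketch, not part of the text): for each value of AB, the roots of the closed-loop denominator s² + 6s + (8 + AB) of Eq. (10.2-13) are computed directly.

```python
import numpy as np

for AB in (-99, -24, -8, 0, 1, 2, 5, 17, 37):
    # Closed-loop denominator of Eq. (10.2-13): s^2 + 6s + (8 + AB)
    poles = np.roots([1.0, 6.0, 8.0 + AB])
    print(AB, np.round(poles, 4))       # matches -3 +/- sqrt(1 - AB)
```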

Example 3  As another example, we turn the problem around and, instead of
determining the location of the feedback system poles as we did in the last two
examples, we determine a system in the feedback loop required for the poles of the
feedback system to be at certain desired locations. For this example, let the unit-
impulse response of the system in the feedforward path be ha(t) = 4e^(-t)u(t), and we
choose the unit-impulse response of the system in the feedback path to be
hb(t) = Aδ(t) + Be^(-βt)u(t). The corresponding system functions are

Ha(s) = 4 / (s + 1),    σ > -1    and    Hb(s) = A + B / (s + β),    σ > -β          (10.2-15)

Substituting in Eq. (10.2-6), the expression for the feedback system function, H(s),
is

H(s) = 4(s + β) / [s² + (1 + β + 4A)s + (β + 4B + 4Aβ)]                              (10.2-16)

We shall determine the values of the constants A, B, and β in the expression for
hb(t) such that the zero of the feedback system function is at s = -9 and the two
poles of the feedback system are at s = -8 + j4 and s = -8 - j4. From
Eq. (10.2-16), we observe that the zero is at s = -β. Thus we require β = 9. To
determine the constants A and B, we note that for the desired conjugate pole loca-
tions, the denominator of H(s) must be

(s + 8 - j4)(s + 8 + j4) = (s + 8)² + (4)² = s² + 16s + 80                  (10.2-17)

This must be the denominator of Eq. (10.2-16). That is, we require

s² + (1 + β + 4A)s + (β + 4B + 4Aβ) = s² + 16s + 80                        (10.2-18)

This equation is true for all values of s only if the polynomial coefficients are equal.
Thus we have two equations that must be satisfied:

1 + β + 4A = 16    and    β + 4B + 4Aβ = 80                                (10.2-19)

We have already determined that β = 9 for the zero of H(s) to be at s = -9.
Substituting this value in Eq. (10.2-19), we have

10 + 4A = 16    and    9 + 4B + 36A = 80                                   (10.2-20)

The simultaneous solution of these two equations for A and B is A = 1.5 and
B = 4.25. Thus the feedback system function, H(s), will have the desired poles
and zero if the LTI system with the unit-impulse response

hb(t) = 1.5δ(t) + 4.25e^(-9t)u(t)                                          (10.2-21)

is in the feedback path.
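The coefficient-matching conditions of Eq. (10.2-19) can also be solved symbolically, as in the sketch below (an illustration using sympy, not a method prescribed by the text); it recovers β = 9, A = 3/2, and B = 17/4 from the desired pole and zero locations.

```python
import sympy as sp

s, A, B, beta = sp.symbols('s A B beta')

# Desired denominator: poles at s = -8 +/- j4, i.e., s^2 + 16s + 80
desired = sp.expand((s + 8 - 4*sp.I) * (s + 8 + 4*sp.I))

# Denominator of Eq. (10.2-16)
actual = s**2 + (1 + beta + 4*A)*s + (beta + 4*B + 4*A*beta)

coeff_conditions = sp.Poly(sp.expand(actual - desired), s).all_coeffs()
sol = sp.solve(coeff_conditions + [beta - 9], [A, B, beta])
print(sol)   # {A: 3/2, B: 17/4, beta: 9}, that is, A = 1.5 and B = 4.25
```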

Example 4 The system inverse of an LTI system is discussed in Section 8.2. The
system inverse can sometimes be realized in the form of a feedback system. As an
example, consider the problem of eliminating ghosts on a TV screen. Ghosts are the
result of multipath. The received TV signal may arrive from the transmitting antenna
via many paths due to it being reflected from various objects such as buildings and
mountains. This results in ghosts on the TV screen because the travel time to the
receiver is slightly different for each path. To illustrate the use of feedback to
eliminate ghosts, we first consider the case in which there is only one reflected wave
that is reflected without distortion. The received signal then is

y(t) = x(t) + Ax(t - T)                              (10.2-22)



where T is the time delay due to the extra path length and |A| < 1 because, from
energy considerations, the magnitude of the reflected wave is smaller than the
incident wave. With the use of Table 7.4-1, the Laplace transform of this expression
is

Y(s) = (1 + Ae^(-sT))X(s)                            (10.2-23)

with the region of convergence being that of x(t). Thus we can model the received
signal, y(t), as the response of an LTI system with the system function

G(s) = 1 + Ae^(-sT)                                  (10.2-24a)

Note that this system function is not a rational function of s. The algebraic expres-
sion for the system function of the system inverse is

H(s) = 1 / G(s) = 1 / (1 + Ae^(-sT))                 (10.2-24b)

By comparing this system function with that of a feedback system, Eq. (10.2-6), we
note that H(s) can be realized as a feedback system with

Ha(s) = 1    and    Hb(s) = Ae^(-sT)                 (10.2-25a)

for which

ha(t) = δ(t)    and    hb(t) = Aδ(t - T)             (10.2-25b)

The realization of this system, which will eliminate the ghost, is shown in Fig.
10.2-3.
For the feedback system to be causal, we choose the RAC to be to the right of all
the poles. Thus for stability we require all the poles to be in the left half of the s
plane. Note that the poles of H(s) are the zeros of G(s), which are those values of s
for which

1 + Ae^(-sT) = 0                                     (10.2-26)


Fig. 10.2-3 Feedback system realization of Eq. (10.2-25).



We determine the solutions of this equation by equating the real and the imaginary
parts of this equation as follows. First express s as s = σ + jω to obtain

e^(-σT)e^(-jωT) = -1/A                               (10.2-27)

Thus we require the values of σ and ω that satisfy this equation. For this, consider
the case for which A > 0. Equation (10.2-27) then requires

e^(-jωT) = -1                                        (10.2-28a)

and

e^(-σT) = 1/A                                        (10.2-28b)

The first equation requires ωT to be equal to an odd number of π radians:

ωT = (2n + 1)π    for n = 0, ±1, ±2, . . .

or

ω = (2n + 1)π/T                                      (10.2-29a)

The corresponding values of σ are the solutions of Eq. (10.2-28b), which are

σ = (1/T) ln A                                       (10.2-29b)

Thus we note that there are an infinite number of poles of H(s) located on a line
parallel to the ω axis at (1/T)ln A + j[(2n + 1)π/T] for n = 0, ±1, ±2, . . . . Note
that the determination of the poles required the solution of Eq. (10.2-26), which is a
transcendental equation with an infinite number of roots. In accordance with our
discussion in Section 8.2, the feedback system is causal and stable only if |A| < 1
because ln |A| < 0 only if |A| < 1. The ghost described by Eq. (10.2-22) would be
eliminated if the feedback system were connected at the input of the TV receiver.
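A simple discrete-time simulation (a sketch of the idea only; the text works in continuous time) makes the inverse-system behavior concrete: the echo y[k] = x[k] + A·x[k - D] is passed through the feedback structure of Fig. 10.2-3 in the form w[k] = y[k] - A·w[k - D], and the original signal is recovered when |A| < 1.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(200)        # "transmitted" test signal
A, D = 0.6, 7                       # echo strength and delay in samples

# Received signal with one echo: y[k] = x[k] + A*x[k - D]
y = x.copy()
y[D:] += A * x[:-D]

# Discrete form of the feedback canceller of Fig. 10.2-3: w[k] = y[k] - A*w[k - D]
w = np.zeros_like(y)
for k in range(len(y)):
    w[k] = y[k] - (A * w[k - D] if k >= D else 0.0)

print(np.max(np.abs(w - x)))        # essentially zero: the ghost has been removed
```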

Return now to Eq. (10.2-6), the general equation for the system function of a
feedback system. Let the system functions in the feedforward and feedback path be
rational functions of s. The system functions then can be expressed as

Ha(s) = A·Na(s)/Da(s)    and    Hb(s) = B·Nb(s)/Db(s)              (10.2-30a)

where N(s) is the numerator polynomial and D(s) is the denominator polynomial of
the system function. Thus the zeros of N(s) are the system function zeros, and the
zeros of D(s) are the system function poles. Substituting these expressions into
Eq. (10.2-6), we obtain

H(s) = A·Na(s)Db(s) / [Da(s)Db(s) + AB·Na(s)Nb(s)]

Observe from this expression that the zeros of H(s) are the zeros of Ha(s) and also
the poles of Hb(s). Note that the locations of the zeros of H(s) do not change as A and
B are varied. The locations of the poles of H(s) are a function of AB; they can be
determined by determining the roots of the denominator polynomial. For the
common case in which the total number of zeros of systems A and B does not
exceed the total number of their poles, the degree of the denominator polynomial is
observed to be equal to the sum of the degrees of the denominator polynomials of
systems A and B, so that the number of poles of the feedback system is equal to the
number of poles of system A plus the number of poles of system B.
It is not unusual for a system function to have three or more poles, so that we
expect many feedback systems to have at least several poles. It would be nice to
determine the locations of the feedback system poles as in our examples. However,
while formulas exist for the roots of second-, third-, and fourth-degree polynomials,
no general formulas exist for the roots of higher-degree polynomials. The reason for
the nonexistence of a general formula for a polynomial of degree higher than four is
not that no one has been sufficiently clever to determine it, but rather because it can
be shown that no such formula can exist. That is, even though, as discussed in
Appendix A, the fundamental theorem of algebra assures us that a polynomial of
degree n has exactly n roots, it can be shown that there can be no general formula for
the roots of a polynomial of degree higher than four. Of course, the feedback system
pole locations can be determined by using the computer to determine the polynomial
roots. One efficient procedure utilizes the root-locus technique mentioned above.
However, if we only need to know whether the feedback system is stable, we do not
need to know the exact locations of the feedback system poles. In accordance with
our discussion in Section 8.2, we only need to know whether all of the feedback
system poles lie in the left half of the s plane. If they do, then the causal feedback
system is stable; if not, then the causal feedback system is not stable. An efficient
procedure has been developed to determine whether all the poles lie in the left half of
the s plane. The procedure is called the Routh-Hurwitz algorithm, which we discuss
in the next section.

10.3 THE ROUTH-HURWITZ CRITERION

If we only desire to know whether a causal system is stable, then it is not necessary
to determine the specific system pole locations. In accordance with our discussion in
Section 8.2, we just need to determine whether all of the feedback system poles lie in
the left half of the s plane. The efficient procedure mentioned in the last section by

which one can determine whether all the poles lie in the left half of the s plane is
called the Routh-Hurwitz algorithm,¹ which is simple to implement on a computer.
With this algorithm, we can determine how many roots of a polynomial lie in the left
half of the s plane, how many lie on the ω axis, and how many lie in the right half of
the s plane. The various derivations of the algorithm do not contribute anything to
our discussion, so that we shall only discuss the algorithm.

10.3A Preliminary Polynomial Tests


To begin, express the nth-degree denominator polynomial of H(s) as

D(s) = a0·sⁿ + a1·sⁿ⁻¹ + a2·sⁿ⁻² + . . . + a(n-1)·s + an          (10.3-1)

in which all the coefficients are real numbers. Note that a0 ≠ 0 because otherwise D(s)
would not be a polynomial of degree n. The roots of D(s) are the poles of H(s). Thus
we desire to determine whether all the roots of D(s) lie in the left half of the s plane.
We shall use the Routh-Hurwitz criterion for this determination. However, before
using the Routh-Hurwitz algorithm described below, there are two simple necessary
(but not sufficient) conditions that the coefficients of D(s) must satisfy for all the
roots to lie in the left half of the s plane. They are as follows:

1. All the coefficients, a0, a1, . . . , an, must be nonzero.

2. All the coefficients, a0, a1, . . . , an, must be either positive or negative.

If either of these two conditions is not satisfied, then there is at least one root of D(s)
in the right half of the s plane or on the ω axis, so that the system is not stable. Note
that these two conditions are necessary but not sufficient, so that it is still possible
that the system is not stable even though the two conditions given above are satisfied.
However, because these two conditions are so simple, it is best to determine whether
they are satisfied before proceeding with any test. If the two conditions are satisfied,
then the Routh-Hurwitz criterion is used to determine whether all the roots are in the
left half of the s plane. For example, consider the following polynomials:

Da(s) = s² + 9                                       (10.3-2a)

Db(s) = s² + 2s - 3                                  (10.3-2b)

Dc(s) = s⁴ + 3s³ + 6s² + 38s + 60                    (10.3-2c)

All the roots of Da(s) do not lie in the left half of the s plane because a1 = 0, so that
condition 1 above is not satisfied (the roots of Da(s) are j3 and -j3). Similarly, all the
roots of Db(s) do not lie in the left half of the s plane because condition 2 above is
not satisfied (the roots of Db(s) are -3 and 1). However, all the roots of Dc(s) also do
not lie in the left half of the s plane even though both conditions above are satisfied

¹Equivalent algorithms were developed independently by E. J. Routh in 1877 and by A. Hurwitz in 1895.
We use both names in order to recognize the important contributions of both men. However, the specific
algorithm described above is the Routh form of the algorithm because it is easier for us to use.

(the roots of Dc(s) are -2, -3, 1 + j3, and 1 - j3). For polynomials such as Dc(s),
the Routh-Hurwitz criterion must be used to determine whether all the roots lie in
the left half of the s plane.
If condition 2 is satisfied, then all the coefficients of D(s) are either positive or
negative. If they are all negative, then the coefficients of -D(s) are all positive and
its roots are the same as those of D(s). Thus, we need only consider the case for
which all the coefficients of D(s) are positive. For this reason, we shall assume that
all the coefficients of D(s) are positive in our description of the Routh-Hurwitz
criterion.

10.3B The Routh-Hurwitz Algorithm


As discussed above, we consider all the coefficients of D(s) given by Eq. (10.3-1) to
be positive. To apply the algorithm, we first form the Routh array shown below:
sⁿ   :  a0   a2   a4   a6   a8   . . .      ← First row
sⁿ⁻¹ :  a1   a3   a5   a7   a9   . . .      ← Second row
sⁿ⁻² :  b1   b2   b3   b4   b5   . . .
sⁿ⁻³ :  c1   c2   c3   c4   c5   . . .
  ⋮      ⋮    ⋮    ⋮                                          (10.3-3)
s²   :  x1   x2   0    0    0
s¹   :  y1   0    0    0    0
s⁰   :  z1   0    0    0    0
 ↑       ↑
Row label   First column

The first and second rows (the rows labeled sⁿ and sⁿ⁻¹) of the Routh array are the
coefficients of D(s). Note that the coefficients with even-numbered subscripts are in
the first row, labeled sⁿ, and the coefficients with odd-numbered subscripts are in the
second row, labeled sⁿ⁻¹. The entries for coefficients with subscripts larger than n are
zero. The numbers in the third row, labeled sⁿ⁻², are determined from the following
determinants of the numbers in the first two rows:

b1 = (a1a2 - a0a3)/a1                    (10.3-4a)

b2 = (a1a4 - a0a5)/a1                    (10.3-4b)

b3 = (a1a6 - a0a7)/a1                    (10.3-4c)

b4 = (a1a8 - a0a9)/a1                    (10.3-4d)

The numbers in the fourth row, labeled sⁿ⁻³, are determined in the same manner
from the determinants of the numbers in the two rows above it. Thus

c1 = (b1a3 - a1b2)/b1                    (10.3-5a)

c2 = (b1a5 - a1b3)/b1                    (10.3-5b)

c3 = (b1a7 - a1b4)/b1                    (10.3-5c)

The numbers in each succeeding row are determined in the same manner from the
determinants of the numbers in the two rows above it until the last row, the (n + 1)th
row labeled s⁰, in which the only nonzero number is

z1 = (y1x2 - x1·0)/y1 = x2               (10.3-6)

The reason for the row labels will be discussed later. The shape of the Routh array
will be seen to be triangular. A useful fact in developing the Routh array is that any
row can be multiplied or divided by a positive number in order to simplify the
numerical calculation without altering the results of the Routh-Hurwitz criterion.

The criterion states that the number of roots of D(s) that lie in the right half of the s
plane is equal to the number of changes of sign of the numbers in the first column of the
array.

Thus, the necessary and sufficient condition that all the roots of D(s) lie in the left
half of the s plane is that all the numbers in the first column be positive. A special
case is one in which one of the calculated values in the first column is zero. Before
discussing this special case, we'll consider some examples that are not special cases.
As our first example, consider the polynomial Dc(s) given by Eq. (10.3-2c). The
Routh array for this polynomial is

s⁴ :    1      6    60    0    0
s³ :    3     38     0    0    0
s² : -20/3    60     0    0    0
s¹ :   65      0     0    0    0
s⁰ :   60      0     0    0    0

The numbers in the row labeled s² are obtained from Eq. (10.3-4) as follows:

b1 = (3·6 - 1·38)/3 = -20/3    and    b2 = (3·60 - 1·0)/3 = 60

and the number in the row labeled s¹ is obtained from Eq. (10.3-5) as follows:

c1 = [(-20/3)·38 - 3·60]/(-20/3) = 65

There are two sign changes in the first column of the array; one from 3 to -20/3 and
the other from -20/3 to 65. Thus, in accordance with the Routh-Hurwitz criterion,
there are two right-half-plane roots of Dc(s). Because there are a total of four roots,
the other two roots must lie in the left half of the s plane. As stated above, the roots
of Dc(s) are -2, -3, 1 + j3, and 1 - j3, which agrees with the result obtained using
the Routh-Hurwitz criterion.
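The array construction and sign-change count are easy to automate. The sketch below is an illustrative Python implementation (the function name and layout are not from the text) for the regular case only; it does not handle the special cases of a zero first entry or an all-zero row, which are discussed next. Applied to Dc(s), it reports the two right-half-plane roots found above.

```python
import numpy as np

def routh_rhp_count(coeffs):
    """Count sign changes in the first column of the Routh array.

    coeffs holds a0, a1, ..., an of Eq. (10.3-1).  Regular case only:
    no zero may appear in the first column."""
    n = len(coeffs) - 1
    width = n // 2 + 1
    rows = [np.zeros(width), np.zeros(width)]
    rows[0][:len(coeffs[0::2])] = coeffs[0::2]      # a0, a2, a4, ...
    rows[1][:len(coeffs[1::2])] = coeffs[1::2]      # a1, a3, a5, ...
    first_col = [rows[0][0], rows[1][0]]
    for _ in range(n - 1):
        prev2, prev1 = rows[-2], rows[-1]
        new = np.zeros(width)
        for j in range(width - 1):
            # Eqs. (10.3-4) and (10.3-5): 2-by-2 determinant divided by prev1[0]
            new[j] = (prev1[0] * prev2[j + 1] - prev2[0] * prev1[j + 1]) / prev1[0]
        rows.append(new)
        first_col.append(new[0])
    signs = np.sign(first_col)
    return int(np.sum(signs[:-1] != signs[1:]))

print(routh_rhp_count([1, 3, 6, 38, 60]))   # 2 right-half-plane roots, as found above
```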
As a second example, we'll determine the conditions that are necessary and
sufficient that all the roots of a cubic polynomial lie in the left half of the s plane.
For this, we consider the polynomial

D(s) = a0s³ + a1s² + a2s + a3                        (10.3-7)

The Routh array for this polynomial is

s³ :      a0          a2
s² :      a1          a3
s¹ : a1a2 - a0a3       0
s⁰ :      a3           0

In determining this Routh array, the third row was multiplied by the positive number
a1, which, as stated above, does not alter the results of the Routh-Hurwitz criterion.
Because all the coefficients are assumed positive, we note that all the entries in the
first column are positive if a0a3 < a1a2. This then is a necessary and sufficient
condition that all the roots of a cubic lie in the left half of the s plane.
As a third example, consider the sixth-degree polynomial

D(s) = s⁶ + 6s⁵ + 5s⁴ + 4s³ + 3s² + 2s + 1           (10.3-8)

The first two rows of the Routh array for this polynomial are

s⁶ : 1  5  3  1  0
s⁵ : 6  4  2  0  0

The next row is

s⁴ : 26/6   16/6   1   0   0

However, this row can be multiplied by a positive number which, as stated above,
does not alter the results of the Routh-Hurwitz criterion. Thus we multiply this row
by 3 to obtain

s⁴ : 13   8   3   0   0

We then continue determining the Routh array to obtain for the first four rows:

s⁶ :   1     5     3   1   0
s⁵ :   6     4     2   0   0
s⁴ :  13     8     3   0   0
s³ : 4/13  8/13    0   0   0

Again, without altering the results of the Routh-Hurwitz criterion, the numbers in
the fourth row can be made more convenient for calculations by multiplying them by
the positive number 13/4 to obtain

s⁶ :  1   5   3   1   0
s⁵ :  6   4   2   0   0
s⁴ : 13   8   3   0   0
s³ :  1   2   0   0   0

Continuing, we obtain the Routh array

s⁶ :   1   5   3   1   0
s⁵ :   6   4   2   0   0
s⁴ :  13   8   3   0   0
s³ :   1   2   0   0   0
s² : -18   3   0   0   0
s¹ :  13   0   0   0   0
s⁰ :   3   0   0   0   0

Again, for convenience without altering the results of the Routh-Hurwitz criterion,
the row labeled s¹ was multiplied by the positive number 6.
There are two sign changes in the first column; one from 1 to -18 and the other
from -18 to 13. Thus, in accordance with the Routh-Hurwitz criterion, there are two
right-half-plane roots of D(s). Because there are a total of six roots, the other four
roots must lie in the left half of the s plane. This agrees with the actual root locations,
which are -5.1623, -0.7025, -0.3786 ± j0.5978, and +0.3110 ± j0.6738.

We now consider the special case in which a zero occurs in the first column of the
Routh array. There are two types of this case to consider:

1. The first entry of a row is zero, and at least one of the other entries of that row
is nonzero.
2. All the entries of a row are zero.

For the first special case, the procedure is to replace the zero in the first column by
ε, which has a small positive value, and let ε → 0+. For example, consider the
polynomial

D(s) = s⁴ + s³ + s² + s + 2                          (10.3-9)

The first three rows of the Routh array for this polynomial are

s⁴ : 1   1   2   0
s³ : 1   1   0   0
s² : 0   2   0   0

Because the first entry of the third row is zero and not all values of the row are zero,
we replace the zero in the first column by ε, which has an arbitrarily small positive
value, and proceed to compute the rest of the Routh array, which is

s⁴ :    1        1   2   0
s³ :    1        1   0   0
s² :    ε        2   0   0
s¹ : (ε - 2)/ε   0   0   0
s⁰ :    2        0   0   0

For very small positive values of ε, the value of the first entry in the fourth row is
negative, for which there are two sign changes in the first column: one from ε to
(ε - 2)/ε and the other from (ε - 2)/ε to 2. Thus the polynomial has two roots in the
right half of the s plane and, because there are a total of four roots, the other two
roots are in the left half of the s plane, which agrees with the following root
locations: -0.9734 ± j0.7873 and +0.4734 ± j1.0256.
The second special case is the one in which all the entries of a row are zero. This
occurs only when there are roots that are symmetric relative to the origin. A pair of
roots are symmetric relative to the origin if one is at s = s0 = σ0 + jω0 and the other
is at s = -s0 = -σ0 - jω0. Because the complex roots of a polynomial with real
coefficients must occur in conjugate pairs, this means that if a root at s = s0 is
symmetric relative to the origin, then the polynomial would have roots at
s = ±σ0 ± jω0, so that, if σ0 ≠ 0 and ω0 ≠ 0, the roots would be symmetric relative
to the σ axis and also the ω axis. Such a case would be unusual. The more usual case

of symmetric roots for a polynomial with positive coefficients is one for which
go = 0, so that there are roots on the o axis at s = fjo,. As we shall see, a
great deal of information about the roots can be obtained when all the entries of a
row are zero.
To discuss this special case, we shall consider a specific example and generalize
our discussion. For this, consider the polynomial

D(s) = s⁴ + 3s³ + 6s² + 12s + 8        (10.3-10)

The first two rows of the Routh array for this polynomial are

s⁴ :   1    6     8    0
s³ :   3    12    0    0

Before proceeding, we divide the second row by 3 to simplify the numbers and
continue:

s⁴ :   1    6    8    0
s³ :   1    4    0    0
s² :   2    8    0    0

We again simplify the numbers by dividing the third row by 2 and then continue:

s⁴ :   1    6    8    0
s³ :   1    4    0    0
s² :   1    4    0    0
s¹ :   0    0    0    0

All the entries in the row labeled s¹ are zero. We would have obtained this same result
without simplifying the numbers in the table. When all the entries in a row are zero,
we form a polynomial, called the auxiliary polynomial, from the entries in the row
above the row of zeros. The degree of the auxiliary polynomial is given by the row
label, the polynomial contains only every other power of s, and the entries in the row
are the polynomial coefficients. For our case, the row is the one labeled s². Thus the
auxiliary polynomial for our case is p(s) = s² + 4. An important property of the
auxiliary polynomial is that its roots are the symmetric roots that caused the row
below it to contain all zeros. The roots of the auxiliary polynomial for our case are
s = ±j2, so that D(s) has these symmetric roots. We now can complete the Routh
array to determine if any of the other roots lie in the right half of the s plane. For this,
the row of zeros is replaced by the coefficients of the derivative of the auxiliary
polynomial. The derivative of the auxiliary polynomial for our case is p′(s) = 2s, so
the completed Routh array is

s⁴ :   1    6    8    0
s³ :   1    4    0    0
s² :   1    4    0    0
s¹ :   2    0    0    0
s⁰ :   4    0    0    0

Because all the entries in the first column are positive, we conclude that there are no
roots in the right half of the s plane. Our result agrees with the actual root locations
of D(s), which are -1, -2, and ±j2.
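As a quick numerical cross-check of the auxiliary-polynomial property (my own side calculation, not part of the text), numpy can verify that p(s) = s² + 4 divides D(s) of Eq. (10.3-10) exactly and that its roots ±j2 are indeed among the roots of D(s).

```python
import numpy as np

D = [1, 3, 6, 12, 8]        # D(s) = s^4 + 3s^3 + 6s^2 + 12s + 8
p = [1, 0, 4]               # auxiliary polynomial p(s) = s^2 + 4

print(np.roots(p))          # the symmetric roots +/- 2j
print(np.roots(D))          # contains +/- 2j together with -1 and -2

# p(s) divides D(s) exactly: the quotient is s^2 + 3s + 2 and the
# remainder is (numerically) zero.
quotient, remainder = np.polydiv(D, p)
print(quotient, remainder)
```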
Except for the special case of symmetric roots, the Routh-Hurwitz criterion does
not directly enable us to determine the exact root locations of a given polynomial.
The criterion, however, can be used to determine the real part of each root, which is
its distance from the ω axis, to any desired degree of accuracy.² For this, replace s
with s + a in D(s), the general nth-order polynomial given by Eq. (10.3-1). If
D(s) = 0 for s = p, then clearly D(s + a) = 0 for s + a = p or, equivalently, for
s = -a + p. Thus we observe that the roots of the resulting polynomial, D(s + a),
will then be the roots of D(s) shifted to the left by a units (or to the right by -a
units). The procedure, which is simple to implement on a computer, then is to
determine the shift a for which the roots of the resulting polynomial lie on the ω
axis. This is accomplished by using the Routh-Hurwitz criterion to determine the
number of roots in the right half of the s plane of the polynomial D(s + a). Now,

D(s + a) = βₙsⁿ + βₙ₋₁sⁿ⁻¹ + . . . + β₁s + β₀        (10.3-11)

The polynomial coefficients of D(s + a), βₘ, in terms of the coefficients of D(s), aₘ,
of Eq. (10.3-1) are

βₘ = Σ (from k = 0 to n - m) aₘ₊ₖ [(m + k)!/(m! k!)] aᵏ,    m = 0, 1, . . . , n        (10.3-12)

To illustrate my procedure with an example that can easily be checked by hand,
consider the second-degree polynomial

D(s) = s² + 4s + 16

²It is important to know more than how to utilize a specific procedure. An understanding of the concepts
involved lends deeper insight into the procedure and sometimes can be utilized to generalize and to obtain
more information than previously conceived. I've developed this procedure mainly to illustrate this.

The roots of this polynomial are s = -2 ± j2√3, so that it has no roots in the right half
of the s plane. However, we'll determine this by the technique described above. Now

D(s + a) = β₂s² + β₁s + β₀

in which, from Eq. (10.3-12),

β₂ = Σ (from k = 0 to 0) a₂₊ₖ [(2 + k)!/(2! k!)] aᵏ = (a₂)(1)(a⁰) = a₂ = 1

β₁ = Σ (from k = 0 to 1) a₁₊ₖ [(1 + k)!/(1! k!)] aᵏ = (a₁)(1)(a⁰) + (a₂)(2)(a) = 4 + 2a

and

β₀ = Σ (from k = 0 to 2) a₀₊ₖ [(0 + k)!/(0! k!)] aᵏ = (a₀)(1)(a⁰) + (a₁)(1)(a) + (a₂)(1)(a²) = 16 + 4a + a²

We first try a = -5 to shift the roots to the right by +5 units. Then, from the
Routh-Hurwitz criterion, we determine that D(s - 5) = s² - 6s + 21 has two roots
in the right half of the s plane. Thus we know that the real part of the roots of D(s)
must be between -5 and 0. Next, choose a = -5/2 in order to halve the range. Using
the Routh-Hurwitz criterion, we determine that D(s - 5/2) = s² - s + 12.25 again
has two roots in the right half of the s plane. The real part of the roots of D(s) thus
must lie between -5/2 and 0. We next halve the possible interval by choosing
a = -5/4. Using the Routh-Hurwitz criterion, we determine that D(s - 5/4) =
s² + 1.5s + 12.5625 has no roots in the right half of the s plane. Thus the real
part of the roots of D(s) must lie between -5/2 and -5/4. We again halve the
possible interval by choosing a = -15/8. Using the Routh-Hurwitz criterion, we
determine that there are no roots of D(s - 15/8) in the right half of the s plane.
Consequently, the real part of the roots of D(s) must lie between -5/2 and -15/8.
Observe that the range in which the real part of the roots can lie is reduced by a
factor of 2 each time. By continuing in this manner, the real part of the roots can be
determined to any degree of accuracy. With a computer, this is simply a DO-loop.
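The "DO-loop" the text refers to might look like the following Python sketch, which is my own illustration; for brevity it uses numpy's root finder as the right-half-plane test in place of a coded Routh array (the rhp_roots sketch given earlier could be substituted), and it implements the shift of Eq. (10.3-12) with binomial coefficients. Starting from the interval (-5, 0) established above, the bisection closes in on the real part -2.

```python
import numpy as np
from math import comb

def shifted_coeffs(a, shift):
    """Coefficients of D(s + shift) from those of D(s) via Eq. (10.3-12).
    'a' lists a_0, a_1, ..., a_n (constant term first)."""
    n = len(a) - 1
    return [sum(a[m + k] * comb(m + k, m) * shift ** k for k in range(n - m + 1))
            for m in range(n + 1)]

def has_rhp_roots(a):
    """Stand-in for the Routh-Hurwitz test of the text: True when some root
    of the polynomial lies strictly in the right half of the s plane."""
    return any(root.real > 0 for root in np.roots(a[::-1]))

def rightmost_real_part(a, lo, hi, passes=40):
    """Bisection: D(s + c) has right-half-plane roots exactly when the
    largest root real part of D(s) exceeds c."""
    for _ in range(passes):
        mid = 0.5 * (lo + hi)
        if has_rhp_roots(shifted_coeffs(a, mid)):
            lo = mid            # largest real part is greater than mid
        else:
            hi = mid            # largest real part is at most mid
    return lo, hi

# D(s) = s^2 + 4s + 16, written with the constant term first.
print(rightmost_real_part([16, 4, 1], lo=-5.0, hi=0.0))
# prints an interval that closes in on -2.0, the real part of the roots
```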
As our last example, the application of the Routh-Hurwitz criterion to determining
the stability of a feedback system will be illustrated. For this, consider the
feedback system shown in Fig. 10.2-1 in which

Ha(s) = 1/[s²(s + 5)],    σ > 0        (10.3-13a)

and

Hb(s) = K (s + 1)/(s + 2),    σ > -2        (10.3-13b)

Both systems are causal because the RAC is to the right of all the poles for each
system. Thus, in accordance with our discussion in Section 10.2, the feedback
system is causal. However, note that system A is not stable because the ω axis
does not lie in the RAC. We desire to know the values of K, if any, for which the
feedback system is stable. From Eq. (10.2-6), the feedback system function is

H(s) = {1/[s²(s + 5)]} / {1 + [1/(s²(s + 5))] [K(s + 1)/(s + 2)]}        (10.3-14a)

Clearing fractions, we obtain

H(s) = (s + 2)/(s⁴ + 7s³ + 10s² + Ks + K)        (10.3-14b)
The RAC for H ( s ) is to the right of all its poles because the feedback system is
causal. For it also to be stable, the ω axis must lie in the RAC. Thus, for the system
to be both causal and stable, all the poles must lie in the left half of the s plane in
accordance with our discussion in Section 8.2. We use the Routh-Hurwitz criterion
for this determination. The feedback system poles are the roots of the denominator
polynomial

D(s) = s⁴ + 7s³ + 10s² + Ks + K        (10.3-15)

The first three rows of the Routh array for this polynomial are

s⁴ :   1             10    K    0
s³ :   7             K     0    0
s² :   (70 - K)/7    K     0    0
We multiply the third row by 7 in order to simplify calculations and continue:

s⁴ :   1                              10    K    0
s³ :   7                              K     0    0
s² :   70 - K                         7K    0    0
s¹ :   [(70 - K)K - 49K]/(70 - K)     0     0    0
s⁰ :   7K                             0     0    0

The fourth row was not simplified by multiplying it by (70 - K). The reason is that
we can only multiply a row by a positive constant without altering the results of the
Routh-Hurwitz criterion. Because (70 - K) would not be positive if K > 70, we
would only be able to apply the criterion for values of K less than 70. Now, all the roots
of D(s) lie in the left half of the s plane if and only if all the entries in the first column
of the Routh array are positive. From the array above, we thus require 70 - K > 0,
(70 - K)K - 49K = (21 - K)K > 0, and K > 0. All these conditions are met only
if 0 < K < 21. Thus we conclude that the feedback system is stable only if
0 < K < 21.
Note from the Routh array that all the entries of the row labeled s⁰ are zero if
K = 0. From the above discussion of the second special case, this means that there
are symmetric roots of D(s) if K = 0. Of course, Hb(s) = 0 for K = 0 so that, from
Eq. (10.2-6), H(s) = Ha(s). This also can be seen by observing from Fig. 10.2-1 that
if Hb(s) = 0, then z(t) = 0, so that there is no feedback and H(s) = Ha(s). The
symmetric root for K = 0 is the double pole of Ha(s) at s = 0.
We also expect roots on the ω axis for K = 21 because all the roots are in the left
half of the s plane for K < 21 and there are roots in the right half of the s plane for
K > 21. This means that some roots crossed the ω axis, so that there must be roots
on the ω axis for K = 21. The first four rows of the Routh array for K = 21 are

s⁴ :   1     10     21    0
s³ :   7     21     0     0
s² :   49    147    0     0
s¹ :   0     0      0     0

The auxiliary polynomial is p(s) = 49s² + 147 = 49(s² + 3). The roots of this polynomial
are s = ±j√3. These then are the ω-axis roots of D(s) for K = 21 (the other two roots are
at -5.7913 and -1.2087); thus, as K increases, the locus of the roots crosses the ω
axis at s = ±j√3.
In our example, the values of K for which the feedback system is stable were
easily determined. For some feedback systems with more poles
and zeros, the determination of the values of K for which the system is stable may
not be as simple. However, the Routh-Hurwitz criterion is easily programmed on a
computer, so that the system stability can be determined quickly for a large number
of values of K. From this, the range (or ranges) of K for which the system is stable
can be obtained.
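The closing remark is easy to act on. A minimal sketch of such a program (mine; it uses numpy's root finder rather than a coded Routh array) sweeps a grid of gains and reports the stable range for the denominator of Eq. (10.3-14b).

```python
import numpy as np

def stable_for(K):
    """The feedback-system poles are the roots of Eq. (10.3-15),
    D(s) = s^4 + 7s^3 + 10s^2 + K s + K."""
    poles = np.roots([1.0, 7.0, 10.0, K, K])
    return bool(np.all(poles.real < 0))

Ks = np.arange(0.25, 30.0, 0.5)            # grid of gains to test
stable = [K for K in Ks if stable_for(K)]
print(min(stable), max(stable))            # about 0.25 and 20.75, consistent with 0 < K < 21
```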

10.4 SYSTEM BLOCK DIAGRAMS

The block diagram model of a physical system often is the interconnection of several
subsystems. For example, consider the physical system shown in Fig. 10.4-1. As
shown, the system consists of three blocks with masses M₁, M₂, and M₃. The blocks
are connected by springs with spring constants K₁ and K₂. Also, there is friction
between the blocks and the surface. The friction is assumed to be sliding friction, so

Fig. 10.4-1 Physical system for the example.

that the frictional force is proportional to the block velocity. The positions of the
blocks are x₁(t), x₂(t), and x₃(t) as shown on the figure. The system input is the
external force, f ( t ) , and the system output is the position of the third block, x3(t).
The force equation for the first block is

The force equation for the second block is

and the force equation for the third block is

To obtain a block diagram of the given system, we take the Laplace transform of the
above system equations. We work with the Laplace transform of the differential
equations since they are algebraic equations which can be manipulated easily. The
transformed equations are

and

A block diagram representation of Eq. (10.4-2a) is shown in Fig. 10.4-2a. To


complete this diagram, we need X₂(s). This is obtained from the second equation,
Eq. (10.4-2b), by solving it for X₂(s) as

(10.4-3a)


Fig. 10.4-2 Block diagrams for Eqs. 10.4-2.

A block diagram for this equation is shown in Fig. 10.4-2b. The input, X₁(s), is
available as the output of the system shown in Fig. 10.4-2a. However, to complete
this diagram, we need X₃(s). This is obtained from the third equation, Eq. (10.4-2c),
by solving it for X₃(s) as

(10.4-3b)

A block diagram for this equation is shown in Fig. 10.4-2c.


We now can combine the three block diagrams to obtain the system block
diagram shown in Fig. 10.4-3. Note that the RAC for the system function in each
block, as well as for the system function of the complete system, is to the right of all
the system function poles because the physical system is causal.
The system block diagram contains two feedback paths. The position of the
second block, x₂(t), affects the position of the first block, x₁(t), because of the
spring connecting the two blocks. The model of this effect is the feedback path
Fig. 10.4-3 Block diagram of the physical system.



with the gain K₁. Furthermore, the position of the third block, x₃(t), affects the
position of the second block, x₂(t), because of the spring connecting the two
blocks. The model of this effect is the feedback path with the gain K₂. Thus this
model allows us to determine the specific effect that one block of the system has on
another. Therefore we can obtain a great deal of insight concerning the system
behavior from its block diagram. Note that, in general, if some system components
interact with other system components, as in our example, then the system model will
contain embedded feedback loops.
To determine the system output for a given input of a system with embedded
feedback loops, we could solve the system equations simultaneously. For our
example, we could solve the system equations, Eqs. (10.4-2), simultaneously by
eliminating X₁(s) and X₂(s) and thus obtain an expression for X₃(s) as

in which H ( s ) is the system function. The RAC for H(s) is to the right of all its poles
because the system is causal. However, we can obtain H(s) directly from the system
block diagram by a method called block diagram reduction.

10.4A Block Diagram Reduction


Block diagram reduction is a technique for reducing a given block diagram to an
equivalent one for which the system function is easily determined. The technique
makes use of the equivalence of certain block diagram modifications.
First, we can move a pick-off point backward by noting the equivalence of the two
diagrams shown in Fig. 10.4-4. The equivalence of these two systems is established
by noting for each diagram that

Also, we can move a pick-off point forward by noting the equivalence of the two
diagrams shown in Fig. 10.4-5. In the diagram, Hb⁻¹(s) is the inverse of system B, so
that Hb⁻¹(s) = 1/Hb(s) with a RAC which overlaps that of Hb(s). If system B is
causal, then its RAC is to the right of all its poles. The inverse of system B then can
also be chosen to be causal by choosing its RAC to be to the right of all the poles of
Hb⁻¹(s). Because Hb⁻¹(s) = 1/Hb(s), these poles are the zeros of Hb(s), so that the
overlap of the RACs is to the right of all the poles and zeros of Hb(s). Again, the

Fig. 10.4-4

Fig. 10.4-5

equivalence of these two systems is established by noting that Eq. (10.4-5) is satis-
fied for each diagram in Fig. 10.4-5.
We also can move summation points. A summation point can be moved forward
by noting the equivalence of the two diagrams shown in Fig. 10.4-6. The equivalence
of these two systems is established by noting for each diagram that

Also, we can move a summation point backward by noting the equivalence of the
two diagrams shown in Fig. 10.4-7. In the diagram, H⁻¹(s) is the inverse of the
system, so that H⁻¹(s) = 1/H(s) with a RAC which overlaps that of H(s). If the
system is causal, then, in accordance with our discussion concerning Fig. 10.4-5, the
RAC of its causal inverse is to the right of all the zeros of H(s). The equivalence of
the two systems shown in Fig. 10.4-7 is established by noting that Eq. (10.4-7) is
satisfied for each diagram.

Y(s) = H(s)X(s) + Z(s)        (10.4-7)

Block diagrams are reduced by using these four system equivalents. As an exam-
ple, consider the system with the block diagram shown in Fig. 10.4-8 in which each
subsystem is causal. The overall system function, H(s), will be determined by block
diagram reduction. We shall do this by two different methods. Note that because
each subsystem is causal, the overall system is causal so that the RAC of H(s) is to
the right of all its poles.

Fig. 10.4-6

Fig. 10.4-7

The first method is obtained by noting that the summer in the block diagram is
equivalent to two summers as shown in Fig. 10.4-9 because, in each diagram,

Y(s) = X(s) - Z₁(s) - Z₂(s)        (10.4-8)

Consequently, a block diagram that is equivalent to that of Fig. 10.4-8 is as shown in


Fig. 10.4-10.
The forward path of the equivalent system is seen to have a feedback system with
the system function

(10.4-9)

in which the RAC is to the right of all the poles of He(s) since the system is causal.
In consequence, the block diagram of Fig. 10.4-10 can be reduced to that shown in
Fig. 10.4-11. This reduced block diagram is seen to be a feedback system. The


Fig. 10.4-10

Fig. 10.4-11

system in the forward path is the tandem connection of two systems, so that its
system function is

(10.4-10)

as shown in Fig. 10.4-12. System f is causal because systems e and b are causal.
Thus the RAC of Hf(s) is to the right of all its poles, which, from Eq. (10.4-10), are
the poles of Hb(s) and He(s). Another way of determining the RAC is to note that the
RAC of Hf(s) is the overlap of the RACs of He(s) and Hb(s). The RACs of systems b
and e are to the right of their respective poles because they are causal. Thus the
overlap of the RACs is to the right of the poles of Hb(s) and He(s), which are the
poles of Hf(s). Thus the feedback system of Fig. 10.4-12 is causal with the system
function

(10.4-11)

and RAC to the right of all its poles. This also is the system function of the original
system of Fig. 10.4-8 because the block diagram of Fig. 10.4-12 is equivalent to that
of Fig. 10.4-8. To obtain the system function in terms of the subsystems of Fig.
10.4-8, we substitute Eqs. (10.4-9) and (10.4-10) in Eq. (10.4-11) to obtain

(10.4-12)

Fig. 10.4-12

Fig. 10.4-13

A second method of reducing the block diagram of Fig. 10.4-8 to obtain H(s) is
to begin by using the equivalence relation shown in Fig. 10.4-4 to move the input
pick-off point for system d from the output to the input of system b. The resulting
equivalent block diagram is shown in Fig. 10.4-13. Observe that the two feedback
paths are in parallel, so that, in accordance with our discussion in Section 10.1A, an
equivalent form of this block diagram is as shown in Fig. 10.4-14 in which

with the RAC of Hg(s) being to the right of all its poles because system g is causal.
This equivalent block diagram is observed to be the tandem connection of a feedback
system with system b. The system function of the feedback system, Hk(s), is

(10.4-14)

with the RAC being to the right of all its poles because system k is causal. Thus the
system function of the equivalent block diagram is

(10.4-15)

We now obtain the system function in terms of the subsystems of Fig. 10.4-8 by
substituting the expression for Hg(s) from Eq. (10.4-13):

(10.4-16)

Fig. 10.4-14

with the RAC being to the right of all its poles because the system is causal. This
result is observed to be the same as that obtained previously, Eq. (10.4-12).
Whatever method of block diagram reduction is used, the same final result must
be obtained. However, the different methods do result in different sets of equivalent
block diagrams. For example, the first method described resulted in the equivalent
block diagrams of Figs. 10.4-10, 10.4-11, and 10.4-12. Each of these is equivalent
to the original block diagram of Fig. 10.4-8. The various equivalent block diagrams
lead to different ways of viewing the original system and thus can result in more
insightful views of the effect of certain subsystems on the overall system. For
example, the equivalent block diagram of Fig. 10.4-12 can be used to obtain a
better understanding of the effect of system d on the overall system.
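Reduction algebra of this kind is routine but error-prone, so it is worth checking symbolically. The sympy sketch below is my own illustration and assumes the topology suggested by the two reduction sequences described above: a forward path of systems a and b, a local feedback path c around system a, and an overall feedback path d taken from the output. This assumed structure may differ in detail from Fig. 10.4-8, but with it both reduction orders give the same overall system function, as they must.

```python
import sympy as sp

Ha, Hb, Hc, Hd = sp.symbols('H_a H_b H_c H_d')

# Method 1: close the inner loop around a first, then the outer loop.
He = Ha / (1 + Ha * Hc)          # inner feedback loop (a with feedback c)
Hf = He * Hb                     # tandem connection with b
H1 = Hf / (1 + Hf * Hd)          # outer feedback loop (feedback d)

# Method 2: move the pick-off point for d to the input of b, so the two
# feedback paths (c, and b followed by d) appear in parallel.
Hg = Hc + Hb * Hd                # parallel feedback paths
Hk = Ha / (1 + Ha * Hg)          # single feedback loop around a
H2 = Hk * Hb                     # tandem connection with b

print(sp.simplify(H1 - H2))      # 0: the two reductions agree
print(sp.simplify(H1))           # equivalent to H_a*H_b/(1 + H_a*H_c + H_a*H_b*H_d)
```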

10.5 MODEL CONSISTENCY

As discussed in the introduction of this text, models are the substance of science
while system theory is the theory of models. Thus, system theory is basic to all
science. A model is used to understand the essence of the phenomenon modeled, and
it is also used to predict certain experimental results. However, approximations are
made to construct the model, so that the predicted experimental results differ from
those obtained with the physical system that was modeled. A smaller approximation
error can be obtained with a more complex model. Thus there is a trade-off between
model complexity and modeling accuracy. Often, the essence of a phenomenon can
be studied with a rather simple model. An example is the model of echoing
discussed in Chapter 1. Another example is that of the Earth’s orbit about the sun
which can be studied by a model consisting of a stationary spherical sun and an
orbiting spherical earth. From such a model, the basic properties of the Earth’s
elliptical orbit can be determined. However, to determine the Earth’s orbit with
greater accuracy, the model would have to be made more complex by including
the effects of such things as the Earth’s moon, other planetary bodies, and the
nonsphericity of the Earth and sun. The desired accuracy would determine which
to include in the model.
As in the examples above, approximate models are often constructed by idealiz-
ing the components of the model. The use of idealized elements in the model often
imposes constraints on the model. It is important to recognize these constraints.
Nonsensical results can be obtained if they are not recognized and used. The analysis
of some practical examples illustrating these ideas is presented below.
We begin with a very simple example of the circuit shown in Fig. 10.5-1. In the
figure, a 4-V ideal battery is connected to a 2-V ideal battery by a switch, S. What
happens when the switch is closed? The surprising answer is that the question cannot
be asked because the switch cannot be closed! Why? Because, by definition, the
voltage across an ideal battery is fixed irrespective of what is connected across it. If
the switch were closed, we would be saying that 4 = 2! Thus the switch cannot be
closed in this model, and we cannot ask what happens when the switch is closed. Of

Fig. 10.5-1

course you clearly could physically connect two real batteries as in the figure and
close the switch. The difference then is that you used real batteries and not ideal
ones. Real batteries have some internal resistance, so that a good model of a real
battery is an ideal battery in series with an ideal resistor. The model of Fig. 10.5-1
would then be modified by the inclusion of an ideal resistor connected in series with
each ideal battery whose value is equal to the internal resistance of the real battery;
also included would be a resistance with a value equal to the resistance of the wire
connecting the batteries. The switch then can be closed because the voltage differ-
ence across the switch is made zero by the voltage drop across these resistors.
A more interesting illustration of the above example is the circuit shown in Fig.
10.5-2, in which an ideal capacitor with a capacitance of C₁ and charged to a voltage
V₁ is connected via a switch, S, to an ideal capacitor with a capacitance of C₂ and
charged to a voltage V₂. What happens when the switch is closed? As in the
preceding example, the surprising answer is that the question cannot be asked
because the switch in this model also cannot be closed! Why? Because, at the instant
the switch is closed, the model requires that the voltage across the ideal capacitors be
the same. If the switch were closed, we would be saying that, at the instant of
closing, V₁ = V₂; thus the inconsistency is similar to that of our previous example
with the batteries.
The problem is that we have considered all resistance to be zero, as in our previous
example. What we really mean by zero resistance is the limit as the value of the
resistance approaches zero, which physically means that the value of the resistance is
exceedingly small. Let us then analyze this circuit by including only the resistance of
the wires connecting the capacitors. Let the value of this resistance be R. The model
then is as shown in Fig. 10.5-3. To analyze this circuit, let t = 0 be the time at which
the switch S is closed. Also let v₁(t) and v₂(t) be the respective voltages across the
capacitors C₁ and C₂. Because the sum of the voltages about any circuit loop must
be zero, we have the following for t > 0:

v₁(t) - v₂(t) - Ri(t) = 0        (10.5-1)

Fig. 10.5-2

Fig. 10.5-3

Now, v₁(t) = q₁(t)/C₁ and v₂(t) = q₂(t)/C₂, where q₁(t) and q₂(t) are the respective
charges on the capacitors C₁ and C₂. Substituting these relations in Eq. (10.5-1), we
obtain

(1/C₁)q₁(t) - (1/C₂)q₂(t) - Ri(t) = 0        (10.5-2)

Because current is the rate of change of charge, we have that i(t) = q₂′(t) = -q₁′(t).
Thus, by differentiating Eq. (10.5-2) and substituting these relations, we obtain the
following for t > 0:

Ri′(t) + [1/C₁ + 1/C₂] i(t) = 0

or

i′(t) + [(C₁ + C₂)/(RC₁C₂)] i(t) = 0        (10.5-3)

This is a differential equation of the type discussed in Section 8.5. For its solution,
we require the initial condition, i(0⁺). For t > 0, we have from Eq. (10.5-1)

i(t) = [v₁(t) - v₂(t)]/R        (10.5-4a)

Because the voltage across the ideal capacitors at t = 0⁺ must be the same as at the
instant before the switch was closed, we then have from Eq. (10.5-4a)

i(0⁺) = (V₁ - V₂)/R        (10.5-4b)

We now use Laplace transforms as discussed in Section 8.5 to solve Eq. (10.5-3)
with the initial condition given by Eq. (10.5-4b):

sI(s) - i(0⁺) + [(C₁ + C₂)/(RC₁C₂)] I(s) = 0        (10.5-5)

The solution of this equation for I(s) is

I(s) = i(0⁺)/(s + 1/τ),    σ > -1/τ        (10.5-6a)

where i(0⁺) is given by Eq. (10.5-4b) and

τ = RC₁C₂/(C₁ + C₂)        (10.5-6b)

The inverse Laplace transform of Eq. (10.5-6a) is

i(t) = i(0⁺)e^(-t/τ)u(t)        (10.5-7)

The charge on each capacitor now can be determined as

q₁(t) = C₁V₁ - ∫_0^t i(σ) dσ,    t > 0        (10.5-8a)

and

q₂(t) = C₂V₂ + ∫_0^t i(σ) dσ,    t > 0        (10.5-8b)

Without evaluating these integrals, note that the sum of the charges on the two
capacitors is

q₁(t) + q₂(t) = C₁V₁ + C₂V₂

Thus we observe that charge is conserved, as it must be in any circuit, because the sum is
a constant equal to the total initial charge on the capacitors.
Evaluating Eqs. (10.5-8) with the use of Eq. (10.5-7), we obtain

q₁(t) = C₁V₁ - i(0⁺)τ[1 - e^(-t/τ)],    t > 0        (10.5-9a)

and

q₂(t) = C₂V₂ + i(0⁺)τ[1 - e^(-t/τ)],    t > 0        (10.5-9b)

Substituting the value of i(0⁺) from Eq. (10.5-4b), we then have

q₁(t) = C₁V₁ - (V₁ - V₂)[C₁C₂/(C₁ + C₂)][1 - e^(-t/τ)],    t > 0        (10.5-10a)

and

q₂(t) = C₂V₂ + (V₁ - V₂)[C₁C₂/(C₁ + C₂)][1 - e^(-t/τ)],    t > 0        (10.5-10b)

Now, the total energy dissipated in the resistor is

E = R ∫_{-∞}^{∞} i²(t) dt        (10.5-11a)

Substituting the expression for i(t) from Eq. (10.5-7), we obtain

E = Ri²(0⁺) ∫_0^∞ e^(-2t/τ) dt = Ri²(0⁺)τ/2        (10.5-11b)

We now substitute the values of τ and i(0⁺) from Eqs. (10.5-6b) and (10.5-4b):

E = (1/2)[RC₁C₂/(C₁ + C₂)][(V₁ - V₂)²/R²] = (1/2)[C₁C₂/(C₁ + C₂)](V₁ - V₂)²        (10.5-11c)

The very interesting observation from this result is that the total energy dissipated in
the resistor is independent of its value. Thus, as R approaches zero, we have a
solution for which charge is conserved, but there is a loss of energy equal to the
amount given by Eq. (10.5-11c).
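The independence of E from R invites a quick numerical check. The sketch below is mine, with arbitrary component values; it integrates Ri²(t) numerically for several resistances and compares the result with Eq. (10.5-11c).

```python
import numpy as np

C1, C2, V1, V2 = 3e-6, 6e-6, 10.0, 4.0                     # arbitrary illustrative values

E_formula = 0.5 * (C1 * C2 / (C1 + C2)) * (V1 - V2) ** 2   # Eq. (10.5-11c)

for R in (10.0, 1.0, 0.01):
    tau = R * C1 * C2 / (C1 + C2)                  # Eq. (10.5-6b)
    i0 = (V1 - V2) / R                             # Eq. (10.5-4b)
    t = np.linspace(0.0, 20.0 * tau, 200001)
    i = i0 * np.exp(-t / tau)                      # Eq. (10.5-7)
    E = R * np.sum(i ** 2) * (t[1] - t[0])         # numerical form of Eq. (10.5-11a)
    print(R, E, E_formula)                         # E stays near E_formula for every R
```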
Thus it would seem at first glance that energy is not conserved in the zero
resistance case for which R = 0. This seeming paradox is often resolved by
saying that the energy is lost via radiation. Yes, there is some radiation loss in the
physical system. However, this "explanation" cannot be used because radiation loss
was not included in the model of Fig. 10.5-2. We can only use phenomena that are
actually included in the model to understand the results obtained from the model.
The correct explanation is that by R = 0 we mean the limit as R approaches zero,
which is equivalent to saying that R is exceedingly small and that the lost energy, E ,
is dissipated in the small resistance.
Another way of explaining why the model of Fig. 10.5-2 with zero resistance is
not valid is to observe from Eq. (10.5-7) that the current, i(t), is an exponential pulse
with the area

Q = ∫_0^∞ i(0⁺)e^(-t/τ) dt = i(0⁺)τ        (10.5-12a)

Physically, this is the total amount of charge transferred from one capacitor to the
other. Using Eqs. (10.5-4b) and (10.5-6b), its specific value is

Q = [C₁C₂/(C₁ + C₂)][V₁ - V₂]        (10.5-12b)

which is independent of R. Thus, as R → 0, the area of the current pulse, which is
equal to Q, doesn't change, but the width of the current pulse approaches zero
because, from Eq. (10.5-6b), τ → 0 as R → 0. Thus, in accordance with our discussion
in Section 3.3, the current pulse becomes an impulse with area Q as R → 0. But
note from our discussion in Section 3.3 that the impulse width is infinitesimal but not
zero. Consequently, the charge Q is not transferred from one capacitor to the other in
note from our discussion in Section 3.3 that the impulse width is infinitesimal but not
zero. Consequently, the charge Q is not transferred from one capacitor to the other in
zero time, but in an infinitesimal amount of time. This means that the capacitor
voltages do not change from their initial values to their final values in zero time but
rather in an infinitesimal amount of time because the value of the time constant, z, is
not zero but an infinitesimal. This is equivalent to saying that the value of R is an
infinitesimal. Note then that the source of the difficulty with the zero resistance
model of Fig. 10.5-2 is that it requires an impulse with zero width instead of the
correct view of one with infinitesimal width as we discussed in Sections 2.3 and 3.3.
This example is one illustration of how one can obtain incorrect results or paradoxes
by considering the width of the impulse to be zero. The nonzero width of the impulse
models the physical fact that, in accordance with relativity, nothing happens instan-
taneously in nature.
The above example illustrates one type of “paradox” that can occur. The model
of a feedback system offers an example of another type of “paradox” that can arise if
a physical system is not properly modeled. For this example, consider the model of a
feedback system as shown in Fig. 10.5-4. The equations of this model are

y(t) = x(t) + z(t)        (10.5-13a)

and

z(t) = Ky(t)        (10.5-13b)

Eliminating z(t), we obtain

y(t) = x(t)/(1 - K)        (10.5-14)
One would obtain this same result by using the feedback equation, Eq. (10.2-5). For
the special case in which K = 2, we have from this result that for a unit-step input
x(t) = u(t), the response is y(t) = -u(t). Although this indeed is a solution of the
system equations of the model, Eqs. (10.5-14), it is one that the physical system
never exhibits! The reason is that there always is some delay in any physical system
due to the fact that the size, d , of any component of the physical system is greater
than zero. Because no wave can travel faster than the speed of light, c, there must be
a delay of at least d l c seconds between the input and the output of any physical
component. Thus a proper model of any physical system must include some delay.
This delay could be exceedingly small but not zero. For example, if the component
size is about 3 cm, there must be a delay of about 1OW’’ s = 0.1 ns. Thus we must be
careful when modeling a component with zero delay because the solution of the
system model then may not be consistent with the solution of the physical one. The

Fig. 10.5-4

solution of a system model with zero delay must always be obtained as the limit as
the component delays go to zero or, equivalently, with the component delays being
infinitesimally small.
To understand the effect of a small delay, let us examine the model of Fig. 10.5-4
in which a small delay, t₀, is inserted in the feedback path as shown in Fig. 10.5-5. In
accordance with our analysis in Section 1.6, the response, y(t), of this system for
K = 2 and the input x(t) = u(t) is a staircase function with the value 2ⁿ - 1 in the
time interval (n - 1)t₀ < t < nt₀. With this result, we can examine the solution as
the delay time, t₀, tends to zero. The table below is a list of values of the system
output at t = 1/3 μs for various values of t₀.

t₀ (seconds)      t/t₀            n        y(1/3 μs) = 2ⁿ - 1
10⁻⁶              1/3             1        1.0
10⁻⁷              (1/3) × 10      4        1.5 × 10
10⁻⁸              (1/3) × 10²     34       1.7 × 10¹⁰
10⁻⁹              (1/3) × 10³     334      3.5 × 10¹⁰⁰
10⁻¹⁰             (1/3) × 10⁴     3334     4.3 × 10¹⁰⁰³

The output amplitude at t = 1/3 μs is seen to become arbitrarily large as the delay
time, t₀, becomes arbitrarily small. In fact, you should note that this is true for any
given value of t because, for any given value of t, t/t₀ increases as t₀ decreases. Thus
we observe that the proper solution for zero delay is that y(t) is infinite. Generally,
the solution of any system model with zero delay must always be obtained as the
limit as the component delays go to zero or, equivalently, with the component delays
being infinitesimally small.
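The table is easy to regenerate. The short sketch below is my own; it evaluates the staircase value 2ⁿ - 1 at t = 1/3 μs for decreasing delays t₀, using exact integer arithmetic so that the enormous values do not overflow.

```python
from math import ceil

t = 1.0e-6 / 3.0                         # observation time: 1/3 microsecond

for t0 in (1e-6, 1e-7, 1e-8, 1e-9, 1e-10):
    n = ceil(t / t0)                     # y(t) = 2**n - 1 for (n - 1)t0 < t < n*t0
    y = 2 ** n - 1                       # exact integer arithmetic, no overflow
    digits = len(str(y))                 # size of y in decimal digits
    print(f"t0 = {t0:.0e}   n = {n}   y has {digits} decimal digits")
```

The printed values of n (1, 4, 34, 334, 3334) and the rapidly growing digit counts reproduce the behavior shown in the table.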
The examples in this section illustrate the care that must be taken in modeling a
physical system when certain components are idealized or some component values
of the physical system are modeled with zero value. We always must interpret zero as
the limit of the solution as the component value tends to zero or, equivalently, the

>
=
solution obtained by using an infinitesimal value for the component value. It is this

-
(t to)

Fig. 10.55
10.6 THE STATE-SPACE APPROACH 343

approach that was used to define the impulse in this text which prevents any para-
doxes involving the use of the impulse.

10.6 THE STATE-SPACE APPROACH

In this text, the basic relation used to express the output, y(t), of an LTI system in
terms of its input, x(t), is the convolution integral

y(t) = h(t)*x(t) = ∫_{-∞}^{∞} h(τ)x(t - τ) dτ        (10.6-1)

All our analyses of LTI systems derived from this basic relation. A relation of this
type is called a functional relation. A functional is a generalization of a function. As


we discussed in Chapter 1, a function is a mapping of numbers into numbers. That
is, when we write the function f(t), we mean that f(t) is a number that is obtained by
applying the rule f(·) to the number t. A functional is a mapping of functions into
numbers. For example, the convolution integral is a functional because we determine
the output value at any given time t₀ as

y(t₀) = ∫_{-∞}^{∞} h(τ)x(t₀ - τ) dτ        (10.6-2)

This integral involves the whole function x(t), so that it is a mapping of the function
x(t) into the number y(t₀). Because our analyses of LTI systems were developed from
x(t) into the number y(to).Because our analyses of LTI systems were developed from
the convolution integral, our formulation of LTI system analysis is referred to as a
functional theory of LTI systems.
As you might expect, there are many different methods of LTI system analysis
that can be used. Each method has advantages for certain types of applications. For
some applications, one method may be used for one part of the problem while
another method may be used for another part of the problem. We have discussed
a number of applications in this text for which the functional theory is very useful
and lends useful insight to the problem. Another method is the state-space method.
This method is especially useful for certain types of control problems such as the
control of a satellite in an orbit.
The state-space method will be described so that you can understand the essence
of the state-variable theory of systems. However, we shall not discuss the state-space
method in detail because its discussion requires a development of a subject called
linear vector spaces, which is the mathematics required for a visualization and
understanding of the operations performed. There are many texts available which
present a good development of this approach.³

³See, for example, (1) Chen, Chi-Tsong, Linear System Theory and Design, Holt, Rinehart and Winston,
1984; (2) Friedland, Bernard, Control System Design: An Introduction to State-Space Methods, McGraw-
Hill, 1986; and (3) DeCarlo, Raymond, Linear Systems: A State Variable Approach with Numerical
Implementation, Prentice-Hall, Englewood Cliffs, NJ, 1989.

State-space theory is developed from a system description in terms of the state of


the system.
The state of a system is that information about the system which, at any given time,
t = t₀, together with the input for t ≥ t₀, enables one to determine the unique system
output for t ≥ t₀.

Thus the state of an LTI system is the set of the system initial conditions. For
example, the state of an electric circuit can be simply the set of capacitor voltages
and the inductor currents because their values at any given time, t = t₀, together with
the circuit input for t ≥ t₀, enable one to determine the unique circuit response for
t ≥ t₀. The capacitor voltages and the inductor currents are then called state variables
of the circuit. It is quite immaterial how the capacitor voltages and inductor currents
were attained because that has no bearing on the future behavior of the circuit. Thus,
the state of a system is a set of variables that completely characterize the effect of the
past history of the system on its future behavior. Note then that the state variable
method describes only causal systems. Similarly, for a mechanical system, the state
variables are the positions and velocities of its masses. For example, for the state
variable description of a satellite in its orbit, the state variables of the satellite would
be its three coordinates of position in space, its three angular positions (its roll, pitch,
and yaw), and the associated velocities, so that there would be 12 state variables.
The set of variables that qualify as the state of a system is not unique. For
example, in a mechanical system, the positions and momenta of the masses also
would qualify as the state variables of the system. Observe that the system must be a
lumped parameter system for there to be a finite number of state variables. Because
this is the usual case, the systems mostly analyzed by the state-space approach are
lumped parameter causal systems.
To illustrate these concepts, the series R-L-C circuit shown in Fig. 10.6-1 will be
analyzed using the state variable approach. As discussed above, the state variables
can be chosen to be the capacitor voltage, v(t), and the inductor current, i(t). All the
circuit variables can be determined in terms of these two state variables. For example,
the voltage across the resistor is Ri(t) and the voltage across the inductor is
e(t) - Ri(t) - v(t). The equations governing the state variables are called the state
equations. For our example, the state equations are

i(t) = Cv′(t)        (10.6-3a)
e(t) = Ri(t) + Li′(t) + v(t)        (10.6-3b)

Fig. 10.6-1

Each of these equations involves the first derivative of a state variable. For the state
variable approach, each of these two equations is solved for this first derivative as

v′(t) = (1/C)i(t)        (10.6-4a)

i′(t) = -(1/L)v(t) - (R/L)i(t) + (1/L)e(t)        (10.6-4b)

These two equations are called the state equations of the system. They can be
expressed in the form of a matrix equation as

[v′(t)]   [   0      1/C ] [v(t)]   [  0  ]
[i′(t)] = [ -1/L    -R/L ] [i(t)] + [ 1/L ] e(t)        (10.6-5)

This is the matrix form of the state equation. In general, the state equation for any
lumped parameter LTI system in which there are n state variables and p inputs can be
expressed as a matrix equation in the form

x′(t) = Ax(t) + Bu(t)        (10.6-6)

where x(t) is a column vector of the n state variables called the state vector:

x(t) = [x₁(t)  x₂(t)  . . .  xₙ(t)]ᵀ        (10.6-6a)

In accordance with our discussion of the state of a system, the state vector can be
viewed as a running collection of initial conditions. The symbol A in Eq. (10.6-6),
called the state transition matrix, is an n by n matrix:

     [ a₁₁  a₁₂  . . .  a₁ₙ ]
A =  [ a₂₁  a₂₂  . . .  a₂ₙ ]        (10.6-6b)
     [  .    .           .  ]
     [ aₙ₁  aₙ₂  . . .  aₙₙ ]

Also, u(t) is a column vector of the p inputs called the input vector:

u(t) = [u₁(t)  u₂(t)  . . .  uₚ(t)]ᵀ        (10.6-6c)

Note then that the state-space description easily incorporates the description of
systems with several inputs. The last symbol in Eq. (10.6-6), B, is an n by p matrix:

     [ b₁₁  . . .  b₁ₚ ]
B =  [  .           .  ]        (10.6-6d)
     [ bₙ₁  . . .  bₙₚ ]

Because the state vector has n components, the state of the system at any given
time can be viewed as a point in an n-dimensional space. The system behavior then
can be analyzed in terms of the position of this point as a function of time. The
mathematical theory of linear vector spaces is used in this analysis.
For our circuit example above, n = 2 and p = 1 with x₁(t) = v(t), x₂(t) = i(t), and
u₁(t) = e(t). The state equation, Eq. (10.6-6), can be solved for the state vector, x(t),
using matrix methods. Because matrix operations can be performed efficiently on a
computer, a computer can be used to perform the required calculations. This is one
attraction of the state variable approach.
To illustrate these concepts, we shall determine the state vector for our circuit
example. In the state-space method, Eq. (10.6-5) would be solved using matrix
theory. However, we shall determine v(t) and i(t) by solving Eqs. (10.6-4). For
this, we can obtain the Laplace transform of the two equations as discussed in
Section 8.5 and solve the resulting two algebraic equations simultaneously. Alter-
natively, by differentiating Eqs. (10.6-4) we can obtain one equation involving only
v(t) and a second equation involving only i(t). Using the latter method, we obtain

v″(t) + (R/L)v′(t) + (1/LC)v(t) = (1/LC)e(t)        (10.6-7a)

i″(t) + (R/L)i′(t) + (1/LC)i(t) = (1/L)e′(t)        (10.6-7b)

These two equations can be solved using Laplace transforms as discussed in Section
8.5. For the case in which we choose t₀ = 0, we obtain

V(s) = [(1/LC)E(s) + (s + R/L)v(0) + v′(0)] / [s² + (R/L)s + 1/LC]        (10.6-8a)

I(s) = [(1/L)sE(s) - (1/L)e(0) + (s + R/L)i(0) + i′(0)] / [s² + (R/L)s + 1/LC]        (10.6-8b)

To express the solution only in terms of the state variables, we need to express v′(0)
and i′(0) in terms of v(0) and i(0).⁴ For this we have from Eqs. (10.6-4) that

v′(0) = (1/C)i(0)        (10.6-9a)

i′(0) = -(1/L)v(0) - (R/L)i(0) + (1/L)e(0)        (10.6-9b)

Substituting these relations into the solution for V(s) and I(s), we then have

V(s) = [(1/LC)E(s) + (s + R/L)v(0) + (1/C)i(0)] / [s² + (R/L)s + 1/LC]        (10.6-10a)

and

I(s) = [(1/L)sE(s) + si(0) - (1/L)v(0)] / [s² + (R/L)s + 1/LC]        (10.6-10b)

We obtain explicit expressions for the time functions by first completing the
square to express the denominator polynomial in the form

D(s) = s² + (R/L)s + 1/LC = (s + α)² + ω₀²        (10.6-11a)

where

α = R/(2L)    and    ω₀² = 1/(LC) - (R/2L)²        (10.6-11b)

For our illustration, we shall assume α > 0 and ω₀² > 0. Then, Eqs. (10.6-10) can be
expressed as

V(s) = (1/LC)[E(s)/D(s)] + v(0)[(s + α)/D(s)] + {[αv(0) + (1/C)i(0)]/ω₀}[ω₀/D(s)]        (10.6-12a)

⁴This step would not be required if the Laplace transform of Eqs. (10.6-4) were obtained and the resulting
two algebraic equations were solved simultaneously.

and

I(s) = (1/L)[sE(s)/D(s)] + i(0)[(s + α)/D(s)] + {[αi(0) - (1/L)v(0)]/ω₀}[ω₀/D(s)]        (10.6-12b)

The reason for expressing the equations in this form is that we then can use entries
no. 4 in Table 7.4-1 directly without having to go through the route of obtaining
partial fraction expansions. Thus we obtain from entries no. 4 of Table 7.4-1 that the
initial condition response, the solution for e(t) = 0, so that E(s) = 0, is

v(t) = { v(0)cos(ω₀t) + [αv(0) + (1/C)i(0)](1/ω₀)sin(ω₀t) } e^(-αt)u(t)        (10.6-13a)

and

i(t) = { i(0)cos(ω₀t) + [αi(0) - (1/L)v(0)](1/ω₀)sin(ω₀t) } e^(-αt)u(t)        (10.6-13b)

In accordance with our discussion, the state of the circuit at any given time can be
considered to be a point in a two-dimensional space with coordinates v(t) and i(t). To
illustrate a graph of the position of this point as a function of t, we consider the case
for which R = 10 Ω, L = 10⁻³ H, and C = 10⁻⁶ F. With these values, α = 5 × 10³
and ω₀ = 31.225 × 10³, and thus for t ≥ 0 we obtain

v(t) = { v(0)cos(31.225 × 10³t) + [5v(0) + 1000i(0)](1/31.225)sin(31.225 × 10³t) } e^(-5×10³t)        (10.6-14a)

and

i(t) = { i(0)cos(31.225 × 10³t) + [5i(0) - v(0)](1/31.225)sin(31.225 × 10³t) } e^(-5×10³t)        (10.6-14b)

A graph of the position of this point as a function of t for the case in which
v(0) = 10 and i(0) = -2 is shown in Fig. 10.6-2.
As t increases, the circuit state moves in a spiral from the point (10, -2) to the
point (0, 0). This is an example of the asymptotic stability mentioned in Section 3.6
because, with zero input, the system state approaches the origin asymptotically. Of
course, the motion of the point would be entirely different if the input were not zero.


Fig. 10.6-2 Graph of v(t) versus i(t). (Arrows point in the direction of increasing t ) .
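The spiral of Fig. 10.6-2 can be reproduced by integrating the matrix state equation, Eq. (10.6-5), directly. The sketch below is my own; it uses a simple fixed-step fourth-order Runge-Kutta loop rather than any particular library integrator, with the component values and initial state of the example and zero input.

```python
import numpy as np

R, L, C = 10.0, 1e-3, 1e-6
A = np.array([[0.0,      1.0 / C],
              [-1.0 / L, -R / L]])          # Eq. (10.6-5) with e(t) = 0

x = np.array([10.0, -2.0])                  # initial state: v(0) = 10, i(0) = -2
dt = 1e-7                                   # step small compared with 1/omega_0
trajectory = [x.copy()]

for _ in range(20000):                      # about 2 ms of the zero-input response
    # classical fourth-order Runge-Kutta step for x' = A x
    k1 = A @ x
    k2 = A @ (x + 0.5 * dt * k1)
    k3 = A @ (x + 0.5 * dt * k2)
    k4 = A @ (x + dt * k3)
    x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    trajectory.append(x.copy())

v, i = np.array(trajectory).T
print(v[0], i[0])        # 10.0  -2.0
print(v[-1], i[-1])      # both essentially zero: the state spirals in to the origin
# Plotting v against i (for example, with matplotlib: plt.plot(i, v))
# reproduces the spiral of Fig. 10.6-2.
```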

For the general case in which there are n states, the state of a system given by
Eq. (10.6-6a) can be considered to be a point moving in an n-dimensional space
with coordinates x₁, x₂, . . . , xₙ. In this manner, the dynamic behavior of a system
can be visualized in terms of a point moving in an n-dimensional state-space. The
mathematical theory used for this study is called linear vector spaces, which makes
extensive use of matrix theory. With this formulation, the study of a dynamic system
can be viewed in terms of a study of its associated state space. Thus, the system is
asymptotically stable if, with zero input, the system state approaches the origin
asymptotically for any initial state. It is important to realize that it is possible for
a system to be asymptotically stable but not be BIBO-stable.
Control is a major application of state-space theory. The state-space theory of
control can be visualized in terms of the control of the state of a satellite in its orbit.
We saw above that there are 12 state variables for this system. “To control the
satellite” means to change its six position variables and six velocity variables
from one set of values to another. The control is accomplished by some jets attached
to the satellite. The forces exerted by these jets are the system inputs and constitute
the input vector. In state-space terms, this can be viewed as the input vector moving
the state of the system from one point in the state space to another. To control the
satellite, the input vector must be able to move it from any given point in the state
space to any other desired point in the state space within a finite amount of time. If
this can be accomplished, the satellite is said to be totally controllable. If only some
of the state variables can be controlled, then the satellite is only partially controllable.
In state-space terms, the controllable state variables are the coordinates of a subspace
of the state space. This subspace is called the controllable subspace. A controller
then can be designed to control the system in this subspace.
However, to control the state of the satellite, we first must know its present state.
The reason for this is that if we do not know where the satellite is in the state space,
how can we determine the path in the state space that should be taken to bring the
satellite to the desired point in the state space? That is, how can I determine the

direction I should walk to go home if I don't know where I presently am located? For
this, we must observe some of the state variables of the system. For example, could
we determine the values of the satellite state variables if we just observe its position
in space? Clearly not, because we would not have sufficient information to deter-
mine, for example, its angular coordinates. The vector of observed variables is called
the output vector. If knowledge of the output vector and the input vector is sufficient
to exactly determine the values of all the system state variables, the system is said to
be totally observable. If the values of only some of the state variables can be
determined, then the satellite is only partially observable. In state-space terms, the
state variables that can be observed are the coordinates of a subspace of the state
space. This subspace is called the observable subspace. An observer then can be
designed to observe the system in this subspace.
Meaningful control can be accomplished only for those state variables that are
both observable and controllable. These state variables lie in the intersection (or the
overlap) of the controllable and observable subspaces. Thus, one problem in the
state-space approach is to determine the intersection of the controllable and obser-
vable subspaces. An observer and a controller for that subspace is then designed.
This is the essence of the state approach to control. As indicated above, a theo-
retical advantage of viewing a system as a moving point in an n-dimensional state
space lends a great deal of insight into the control problem. A practical advantage of
the state-space approach is that the theory easily incorporates several inputs and
several outputs. Also, because the mathematical operations are matrix ones, the
required calculations can be performed efficiently on a computer.
In control theory, the problem of determining an observer and a controller is often
complicated by the imposition of additional design criteria. One common criterion is
that only certain paths are allowed in transferring the system from one point to
another in the state space. For our satellite example, there would be a constraint
on the allowable acceleration in order to limit the forces on the satellite. Another
complication is the ever-present problem of noise. For this, a design is determined
for which the effect of the noise is minimized. Additionally, the system parameters
often are not known exactly. For example, the exact satellite mass and the exact
thrust of the control jets may not be known. For this, robust control theory is used. In
robust control, the control is designed to be insensitive to the slight errors in the
values of the system parameters used.

PROBLEMS

10-1 In the feedback system shown below, ha(t) = δ(t) - 3e^(-3t)u(t) and
hb(t) = u(t).
(a) Determine the feedback system function, H(s).
(b) For what values of the amplifier gain, K, is the feedback system stable?
(a) Determine the feedback system function, H(s).
(b) For what values of the amplifier gain, K , is the feedback system stable?


Prob. 10.1

10-2 In the feedback system shown below, e(t) = x ( t ) - z(t). Also,

Ha(s) = 4/(s + 1),    σ > 0    and    Hb(s) = K₀ + K₁/(s + 2),    σ > 0

(a) Determine the values of K₀ and K₁ required for the poles of the feedback
system to be at p₁ = -4 + j0 and p₂ = -12 + j0.
(b) Determine the location of the poles and zeros of Hb(s) for the values
determined in part a.

Prob. 10.2

10-3 Consider the model of an echoing system shown in Fig. 1.6-3 with K > 0.
(a) Determine the system function, H(s), of the feedback system.
(b) Determine the pole locations. How many poles are there?
(c) For what values of K is the system stable?
(d) Let K = 0.9. For low frequencies, show that the system can be modeled
as a bandpass filter and determine its center frequency, 3-dB bandwidth,
and Q.

10-4 Determine whether any of the roots of p(s) = s⁵ + 5s⁴ + 11s³ + 14s² + 10s + 4
lie in the right half of the s plane (RHP).

10-5 Determine whether any of the roots of s⁵ + 2s⁴ + 2s³ + 4s² + s + 1 lie in the
right half of the s plane (RHP).

10-6 The system function of a causal feedback system is

H(s) = A/(s³ + 4s² + 4s + K)

Determine the values of K for which the system is stable.



10-7 The component systems of the system below are causal LTI systems.
(a) Determine the system function, H(s), of the system with the input x(t)
and output y(t).
(b) How is the RAC for H(s) determined? Your reason must be given.

Prob. 10.7

10-8 The component systems of the system below are causal LTI systems with
the system functions Ha(s), Hb(s), Hc(s), Hd(s), and He(s).
(a) Determine the system function, H(s), of the system with the input x(t)
and output y(t).
Prob. 10.8

10-9 The consistency problems associated with models discussed in Section 10.5
resulted from idealizations made without a concomitant analysis of their
implications relative to the questions to be asked of the model. In fact, we saw
that certain questions, although grammatically meaningful, are logically
inconsistent and thus could not be asked. I call these “meaningless ques-
tions.” Analyze the following two questions to determine whether they are
meaningless and, if so, why.
(a) What happens if the irresistible force meets the immovable object?
(b) Is there a sound generated if a tree falls in a forest and no one hears it?
A PRIMER ON COMPLEX NUMBERS
AND ALGEBRA

A.l INTRODUCTION

A complex number is a quantity z = x + jy in which x and y are real numbers and
j = √-1. The number x is called the real part of z, and the real number y is called
the imaginary part of z. We often express this as x = Re{z} and y = Im{z}. It is
important to note that Im{z} is a real number. Observe that z is a real number if
y = 0. Consequently, real numbers are special cases of complex numbers in which
the imaginary part is equal to zero. If x = 0, then the complex number z is said to be
an imaginary number. Two complex numbers z₁ and z₂ are defined to be equal only if
Re{z₁} = Re{z₂} and also Im{z₁} = Im{z₂}.
Complex numbers were developed because many polynomial equations of the
form

xⁿ + aₙ₋₁xⁿ⁻¹ + . . . + a₁x + a₀ = 0        (A-1)

do not have solutions if the solutions are restricted to being real numbers. The
simplest example is the quadratic equation x² + 1 = 0. This equation does not
have a real solution. However, if complex numbers are allowed, then the equation
z² + 1 = 0 has two solutions, z = j and z = -j. Complex numbers and complex
algebra have had a long development by many individuals over the centuries. With
their use, it turns out that a polynomial equation of the form

zⁿ + cₙ₋₁zⁿ⁻¹ + . . . + c₁z + c₀ = 0        (A-2)

in which the coefficients c₀, c₁, . . . , cₙ₋₁ are complex numbers always has a solution.
This surprising result is called the fundamental theorem of algebra.¹ Note that

¹This theorem was first proved by Gauss in 1799 as part of his doctoral thesis.

this theorem immediately implies that Eq. (A-2) has exactly n solutions because, by
the theorem, it must have at least one solution. Let the solution be z = z₁. This
solution can be factored out as

The resulting polynomial of degree (n - 1) must have at least one solution in


accordance with the fundamental theorem of algebra. This solution can be factored
out to obtain a polynomial of degree (n - 2) to which the fundamental theorem of
algebra can be applied again. This process can be continued until a constant is
obtained, which is a polynomial of degree zero. The constant is equal to one for
our case because we have chosen the coefficient of the highest power of z to be equal
to one. At this point exactly n roots of the nth-degree polynomial have been
obtained. By repeated use of the fundamental theorem of algebra in this manner,
we have shown that every polynomial of degree n has exactly n roots. This is the
reason the theorem is called the fundamental theorem of algebra since by it, we have
shown that in algebra, the complex numbers are sufficiently general so that a number
system more general than complex numbers is not required.

A.2 THE COMPLEX PLANE

Because two numbers, x and y, are required to represent a complex number,


z = x +jy, we can consider a complex number to be a point, P, in a plane as
shown in Fig. A-1. This type of representation is called an Argand diagram,* and
the plane used in this fashion is called the complex plane.
Another equivalent way of representing the complex number z is as the vector
OP from the origin to the point P as shown in Fig. A-1. The length of the vector
OP is called the absolute value or magnitude of the complex number z and is
denoted by |z|. The angle from the positive real axis to the vector OP is called
either the argument of z, denoted by Arg{z}, or the angle of z, denoted by ∠z.
In the diagram, |z| = r and ∠z = θ. The quantities r and θ are the polar coordinates
of the point with the rectangular coordinates x and y. From the diagram, we note that
the conversion from the polar representation to the rectangular representation is

x = r cos θ    and    y = r sin θ        (A-4)

²Named for the French mathematician J. R. Argand, who published an essay on the geometric
representation of complex numbers in 1806. However, Gauss had already discussed this representation
in 1799, and the Norwegian mathematician Caspar Wessel published a discussion of it in 1797.

Fig. A-1 The complex plane.

so that the complex number z = x + jy also can be expressed in the trigonometric
form

z = r[cos θ + j sin θ]        (A-5)

To convert from the rectangular representation to the polar representation, we restrict
the angle to its principal value, which is -π < θ ≤ π. Then

θ = tan⁻¹(y/x)          if x > 0
θ = π + tan⁻¹(y/x)      if x < 0 and y ≥ 0
θ = -π + tan⁻¹(y/x)     if x < 0 and y < 0        (A-6)

and with the use of the Pythagorean theorem for right triangles we obtain

r = √(x² + y²)        (A-7)

The representation of a real number, x, in rectangular form is z = x + j0. In polar
form, the representation of the real number, x, is

r = |x|    and    θ = 0 if x > 0,  θ = π if x < 0        (A-8)
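For readers who want to experiment with these conversions, Python's built-in complex type and the cmath module can be used directly; the short illustration below is mine. The phase returned by cmath is the principal angle, matching Eq. (A-6).

```python
import cmath
import math

z = 3.0 - 4.0j

# Rectangular -> polar, Eqs. (A-6) and (A-7)
r = abs(z)                    # sqrt(x^2 + y^2) = 5.0
theta = cmath.phase(z)        # principal angle, about -0.9273 rad
print(r, theta)
print(cmath.polar(z))         # the same pair (r, theta) in one call

# Polar -> rectangular, Eq. (A-4): x = r cos(theta), y = r sin(theta)
x, y = r * math.cos(theta), r * math.sin(theta)
print(x, y)                   # recovers 3.0 and -4.0 (to rounding)
print(cmath.rect(r, theta))   # the same result as a complex number
```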

Two complex numbers that differ only in the sign of their imaginary parts are said
to be conjugates. Thus the conjugate of z = x +jy is z* = x -jy. The conjugate of z
is denoted by z*. From Eqs. (A-6) and (A-7), we have

|z*| = |z|    and    ∠z* = -∠z        (A-9)

Note that the conjugate of any expression involving complex numbers can be
obtained by replacing every j in the expression with -j. Observe that z is a real
number if and only if z = z*.

A.3 THE EXPONENTIAL

The exponential can be defined in terms of its power series

O0 u" u2 u3 u4
eu=&= l+u+-+-+-+... (A-10)
n=O 2! 3! 4!

which converges for all values of u. For the case in which u = jθ, we obtain

    e^{jθ} = (1 − θ²/2! + θ⁴/4! − ···) + j(θ − θ³/3! + θ⁵/5! − ···)       (A-11)

in which use was made of the fact that j² = −1, j³ = j²j = −j, j⁴ = j²j² = 1, and so on. The series in the first parentheses is the power series expansion of cos θ, and the series in the second parentheses is the power series expansion of sin θ, so that we have obtained the important Euler formula³

    e^{jθ} = cos θ + j sin θ                                 (A-12)

and its obvious companion

    e^{−jθ} = cos(−θ) + j sin(−θ)
            = cos θ − j sin θ                                (A-13)

Thus e^{jθ} and e^{−jθ} are conjugates. It immediately follows from Eqs. (A-12) and (A-13) that

    cos θ = ½ [e^{jθ} + e^{−jθ}]                             (A-14)

and

    sin θ = (1/2j) [e^{jθ} − e^{−jθ}]                        (A-15)
With the use of the Euler formula, Eq. (A-12), the trigonometric form of a complex number, Eq. (A-5), can be expressed in the concise polar form

    z = r e^{jθ}                                             (A-16a)

or, equivalently,

    z = |z| e^{j∠z}                                          (A-16b)

³Named for the Swiss mathematician Leonhard Euler (1707–1783).
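
Euler's formula and the polar form are easy to verify numerically. A minimal Python sketch (the angle and magnitude values are arbitrary choices):

```python
import cmath
import math

theta = 0.7                                   # any angle in radians

# Euler's formula, Eq. (A-12): e^{j theta} = cos(theta) + j sin(theta)
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
print(cmath.isclose(lhs, rhs))                # True

# Eqs. (A-14) and (A-15): cosine and sine recovered from the exponentials
cos_from_exp = (cmath.exp(1j * theta) + cmath.exp(-1j * theta)) / 2
sin_from_exp = (cmath.exp(1j * theta) - cmath.exp(-1j * theta)) / (2j)
print(cmath.isclose(cos_from_exp, math.cos(theta)),
      cmath.isclose(sin_from_exp, math.sin(theta)))

# Polar form, Eq. (A-16a): z = r e^{j theta}
z = 2.5 * cmath.exp(1j * theta)
print(abs(z), cmath.phase(z))                 # 2.5 and 0.7
```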



A.4 COMPLEX ALGEBRA

The algebraic operations with complex numbers are defined with the same rules used for real numbers. This ensures that any algebraic operation applied to a complex number z gives, in the special case z = x + j0, the same value that would be obtained using the real number x. Thus, if z₁ = x₁ + jy₁ and z₂ = x₂ + jy₂, the sum is defined as

    z₁ + z₂ = (x₁ + x₂) + j(y₁ + y₂)                         (A-17)

and the difference is defined as

    z₁ − z₂ = (x₁ − x₂) + j(y₁ − y₂)                         (A-18)

Note that for the special case in which z₂ = z₁* we have

    z₁ + z₁* = 2x₁ = 2 Re{z₁}   and   z₁ − z₁* = 2jy₁ = 2j Im{z₁}

from which we have

    Re{z₁} = ½ [z₁ + z₁*]                                    (A-19)

and

    Im{z₁} = (1/2j) [z₁ − z₁*]                               (A-20)
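
A small Python check of the conjugate relations (A-19) and (A-20); the value of z₁ is an arbitrary choice:

```python
z1 = 2.0 - 5.0j

re_part = (z1 + z1.conjugate()) / 2        # Eq. (A-19)
im_part = (z1 - z1.conjugate()) / (2j)     # Eq. (A-20)

print(re_part)                             # (2+0j)  -> Re{z1} = 2
print(im_part)                             # (-5+0j) -> Im{z1} = -5
print(z1.real, z1.imag)                    # 2.0 -5.0, for comparison
```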

The sum of two complex numbers can be viewed geometrically as shown in Fig. A-2. The addition is obtained geometrically by forming the parallelogram shown. Because the opposite sides of a parallelogram are equal in length and parallel, the length and angle of the vector from 0 to z₁ are equal to the length and angle of the vector from z₂ to (z₁ + z₂); similarly for the vectors from 0 to z₂ and from z₁ to (z₁ + z₂). From our discussion relative to Fig. A-1, we have that |z₁| is equal to the length of the vector from 0 to z₁, |z₂| is equal to the length of the vector from 0 to z₂, and |z₁ + z₂| is equal to the length of the vector from 0 to (z₁ + z₂).

Fig. A-2 The addition of two complex numbers.

Now, because the length of any side of a triangle must be equal to or
less than the sum of the lengths of its other two sides, we have from our geometric
view of addition that

    |z₁ + z₂| ≤ |z₁| + |z₂|                                  (A-21)

This inequality is called the triangle inequality. It can be extended to the sum of three complex numbers as follows:

    |z₁ + z₂ + z₃| = |(z₁ + z₂) + z₃| ≤ |z₁ + z₂| + |z₃| ≤ |z₁| + |z₂| + |z₃|        (A-22)

This can be extended to the sum of n terms by continuing as above to obtain the triangle inequality for the sum of n terms:

    |Σ_{k=1}^{n} z_k| ≤ Σ_{k=1}^{n} |z_k|                    (A-23)
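
The triangle inequality for a sum of n terms, Eq. (A-23), can be spot-checked numerically; a short sketch using numpy (the sample size and random seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
# A few random complex numbers z_1, ..., z_n
z = rng.standard_normal(6) + 1j * rng.standard_normal(6)

lhs = abs(z.sum())            # |z_1 + ... + z_n|
rhs = np.abs(z).sum()         # |z_1| + ... + |z_n|
print(lhs <= rhs)             # True, Eq. (A-23)
```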

The product of two complex numbers is

    z₁z₂ = (x₁ + jy₁)(x₂ + jy₂) = (x₁x₂ − y₁y₂) + j(x₁y₂ + x₂y₁)         (A-24)

which is obtained by using the fact that j² = −1. A nicer form for the product is obtained by using the polar form for the complex numbers:

    z₁z₂ = r₁e^{jθ₁} r₂e^{jθ₂} = r₁r₂ e^{j(θ₁ + θ₂)}                     (A-25)

From this result we have

    |z₁z₂| = r₁r₂ = |z₁||z₂|                                 (A-26)

and

    ∠z₁z₂ = θ₁ + θ₂ = ∠z₁ + ∠z₂                              (A-27)

That is, the magnitude of the product of two complex numbers is equal to the product of their magnitudes, and the angle of the product of two complex numbers is equal to the sum of their angles. Note that for the special case in which z₂ = z₁*, we obtain

    z₁z₁* = r₁r₁ = |z₁|² = x₁² + y₁²                         (A-28)

Thus the square of the magnitude of any complex number can be obtained by
multiplying that complex number by its conjugate. Remember that the conjugate
of any expression involving complex numbers is easily obtained by replacing every j
in that expression by -j.
Another case of interest is the case in which z₂ = j. For this case it is easily seen from its complex plane representation, Fig. A-1, that |j| = 1 and ∠j = π/2. Thus its polar form representation is

    j = e^{jπ/2}                                             (A-29)

Consequently,

    jz₁ = e^{jπ/2} r₁e^{jθ₁} = r₁ e^{j(θ₁ + π/2)}            (A-30)

Thus we note that multiplication by j corresponds, in the complex plane, to a simple rotation of the vector for z₁ in the counterclockwise direction by π/2 rad (which is 90°).
For z₂ ≠ 0, the division of z₁ by z₂ is

    z₁/z₂ = (x₁ + jy₁)/(x₂ + jy₂)                            (A-31)

This can be expressed in rectangular form by multiplying the numerator and denominator by the conjugate of z₂ to obtain

    z₁/z₂ = (x₁ + jy₁)(x₂ − jy₂)/[(x₂ + jy₂)(x₂ − jy₂)]
          = [(x₁x₂ + y₁y₂) + j(x₂y₁ − x₁y₂)]/(x₂² + y₂²)     (A-32)

A nicer form for the ratio of two complex numbers is obtained by expressing them in polar form:

    z₁/z₂ = r₁e^{jθ₁}/(r₂e^{jθ₂}) = (r₁/r₂) e^{j(θ₁ − θ₂)}   (A-33)

From this result we have that

    |z₁/z₂| = r₁/r₂ = |z₁|/|z₂|                              (A-34)

and

    ∠(z₁/z₂) = θ₁ − θ₂ = ∠z₁ − ∠z₂                           (A-35)

That is, the magnitude of the ratio of two complex numbers is equal to the ratio of
their magnitudes, and the angle of the ratio of two complex numbers is equal to the
angle of the numerator minus the angle of the denominator.
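
The product and ratio rules, Eqs. (A-26), (A-27), (A-34), and (A-35), together with the rotation-by-j property of Eq. (A-30), can be confirmed numerically. A brief Python sketch (the two complex numbers are arbitrary; their angles are chosen small enough that no 2π adjustment of the principal value is needed):

```python
import cmath
import math

z1 = 3.0 * cmath.exp(1j * 0.4)      # r1 = 3, theta1 = 0.4
z2 = 2.0 * cmath.exp(1j * 1.1)      # r2 = 2, theta2 = 1.1

# Product: magnitudes multiply, angles add
p = z1 * z2
print(math.isclose(abs(p), abs(z1) * abs(z2)))                            # Eq. (A-26)
print(math.isclose(cmath.phase(p), cmath.phase(z1) + cmath.phase(z2)))    # Eq. (A-27)

# Quotient: magnitudes divide, angles subtract
q = z1 / z2
print(math.isclose(abs(q), abs(z1) / abs(z2)))                            # Eq. (A-34)
print(math.isclose(cmath.phase(q), cmath.phase(z1) - cmath.phase(z2)))    # Eq. (A-35)

# Multiplication by j rotates the vector counterclockwise by pi/2, Eq. (A-30)
print(math.isclose(cmath.phase(1j * z1), cmath.phase(z1) + math.pi / 2))
```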

A.5 POWERS OF COMPLEX NUMBERS

Let n be a positive integer. Then, as in the case of real numbers, zⁿ is the nth power of z; that is, zⁿ is the product of z with itself n times. Thus

    z² = (x + jy)² = (x² − y²) + j2xy                        (A-36)

and

    z³ = (x + jy)³ = (x³ − 3xy²) + j(3x²y − y³)              (A-37)

The polar form of zⁿ results in some important relations. In polar form,

    zⁿ = (re^{jθ})ⁿ = rⁿ e^{jnθ}                             (A-38)

Thus we observe that

    |zⁿ| = |z|ⁿ   and   ∠zⁿ = n(∠z)                          (A-39)

Also, by expressing Eq. (A-38) in trigonometric form, we obtain for the special case in which r = 1

    (cos θ + j sin θ)ⁿ = cos nθ + j sin nθ                   (A-40)

This relation is known as DeMoivre's theorem.⁴ Many useful trigonometric identities can be obtained from this result. For example, we obtain for n = 2

    cos²θ − sin²θ + j2 cos θ sin θ = cos 2θ + j sin 2θ       (A-41)

⁴Named for the French mathematician Abraham DeMoivre (1667–1754). An equivalent form had been obtained earlier by the English mathematician Roger Cotes (1682–1716).

A single equation such as this is, in reality, a pair of equations because two complex numbers are equal only if their real parts are equal and their imaginary parts are equal. We thus have from Eq. (A-41)

    cos 2θ = cos²θ − sin²θ   and   sin 2θ = 2 cos θ sin θ    (A-42)

These are the two double-angle formulas that are contained in most trigonometric tables. In this manner we can obtain, for each value of n, an expression for cos nθ and for sin nθ in terms of powers of only sin θ and cos θ.
For the same algebraic reasons as in the case of real numbers, we define

    z⁰ = 1                                                   (A-43)
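
DeMoivre's theorem and the identities that follow from it are easy to verify numerically; a short Python sketch (the angle and the value of n are arbitrary choices):

```python
import cmath
import math

theta, n = 0.6, 5

# DeMoivre's theorem, Eq. (A-40): (cos t + j sin t)^n = cos(nt) + j sin(nt)
lhs = complex(math.cos(theta), math.sin(theta)) ** n
rhs = complex(math.cos(n * theta), math.sin(n * theta))
print(cmath.isclose(lhs, rhs))                       # True

# The n = 2 case gives the double-angle formulas, Eq. (A-42)
print(math.isclose(math.cos(2 * theta), math.cos(theta) ** 2 - math.sin(theta) ** 2))
print(math.isclose(math.sin(2 * theta), 2 * math.cos(theta) * math.sin(theta)))

# Eq. (A-39): |z^n| = |z|^n and angle(z^n) = n * angle(z) (within the principal range here)
z = 1.2 * cmath.exp(1j * 0.3)
print(math.isclose(abs(z ** n), abs(z) ** n))
print(math.isclose(cmath.phase(z ** n), n * 0.3))    # 1.5 rad
```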

A.6 ROOTS OF COMPLEX NUMBERS

Let n be a positive integer; then the root z^{1/n} is a number that, raised to the nth power, is equal to the number z. The polar form will be used for this determination. First observe that if k is an integer, then

    e^{j2πk} = cos 2πk + j sin 2πk = 1 + j0                  (A-44)

Consequently,

    e^{jθ} = e^{j(θ + 2πk)}                                  (A-45)

and a complex number can be written in the equivalent form

    z = r e^{j(θ + 2πk)}                                     (A-46)

where k is an integer. With this form of expression for the complex number z, we have

    z^{1/n} = r^{1/n} e^{j(θ + 2πk)/n}      for k = 0, ±1, ±2, ±3, ...               (A-47)
            = r^{1/n} [cos((θ + 2πk)/n) + j sin((θ + 2πk)/n)]                        (A-48)

This results in n distinct values of z^{1/n} for k = 0, 1, 2, ..., (n − 1). Other integer values of k result in values that are one of these n values. Consequently, there are exactly n distinct values for z^{1/n}. For example, consider the case for which z = 1. For this case, r = 1 and θ = 0, so that the n roots of 1 are

    1^{1/n} = e^{j2πk/n}                                     (A-49)

for k = 0, 1, ..., (n − 1). These values are the n distinct solutions of the equation zⁿ = 1, for which there must be n distinct solutions in accordance with the fundamental theorem of algebra discussed in Section A.1. From the polar form given by Eq. (A-47), we note that, in the complex plane, the n roots of z are equally spaced on a circle of radius r^{1/n}. The spacing is 2π/n rad (360°/n), with the root for k = 0 at an angle of θ/n rad (for θ expressed in radians).
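
The n roots given by Eq. (A-48) can be generated directly from the polar form. A small Python sketch (the choices z = 1 and n = 6 are arbitrary):

```python
import cmath
import math

def nth_roots(z, n):
    """Return the n distinct values of z**(1/n) given by Eq. (A-48)."""
    r, theta = cmath.polar(z)
    return [r ** (1.0 / n) * cmath.exp(1j * (theta + 2 * math.pi * k) / n)
            for k in range(n)]

roots = nth_roots(1.0, 6)                          # the six 6th roots of unity
for w in roots:
    # every root, raised to the 6th power, returns (numerically) to 1
    print(w, cmath.isclose(w ** 6, 1.0, abs_tol=1e-12))

# All roots lie on a circle of radius r**(1/n) and are spaced 2*pi/n rad apart (mod 2*pi).
print([round(abs(w), 12) for w in roots])          # all equal to 1 here
print([round(cmath.phase(w), 4) for w in roots])   # angles step by pi/3
```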
With the use of Eq. (A-48), the rational power of a complex number now can be defined as

    z^{m/n} = (z^{1/n})^m = r^{m/n} e^{jm(θ + 2πk)/n}
            = r^{m/n} [cos(m(θ + 2πk)/n) + j sin(m(θ + 2πk)/n)]          (A-50)

The last form is obtained using DeMoivre's theorem, Eq. (A-40).


APPENDIX B

ENERGY DISTRIBUTION IN TRANSIENT FUNCTIONS

Let y(t) be the response of an LTI system with the input x(t), unit-impulse response h(t), and system function H(s). As discussed in Section 5.9, the total energy of y(t) is

    E = ∫_{−∞}^{∞} |y(t)|² dt                                (B-1)

We assume |y(t)| < ∞ and that E, the total energy of y(t), is finite. Note that this implies that

    lim_{t→±∞} y(t) = 0                                      (B-2)

Define the partial energy of y(t) as

    E(T) = ∫_{−∞}^{T} |y(t)|² dt                             (B-3)

The partial energy, E(T), is seen to be the energy of y(t) up to the time t = T.
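
Numerically, the partial energy is just a running integral of |y(t)|². A minimal Python sketch of Eq. (B-3), using a simple decaying transient as an example (the signal and time grid are arbitrary choices, not from the text):

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 10.0, dt)
y = np.exp(-t) * np.sin(2 * np.pi * t)        # an example transient, taken as zero for t < 0

# E(T) = integral of |y(t)|^2 up to time T, approximated by a cumulative sum
partial_energy = np.cumsum(np.abs(y) ** 2) * dt

total_energy = partial_energy[-1]
T_index = np.searchsorted(t, 2.0)             # E(T) at T = 2 s
print(partial_energy[T_index], total_energy)  # E(2) is already most of the total energy
```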
Now let there be a zero of H(s) at s = z so that we can express H(s) as

    H(s) = (s − z)G(s)                                       (B-4)

In this appendix, we determine the effect of moving the zero parallel to the σ axis upon the partial energy of the system output.
The Laplace transform of y(t) is

    Y(s) = H(s)X(s) = (s − z)G(s)X(s)                        (B-5)



The RAC of y(t) lies in the overlap of the RACs of h(t) and x(t). For convenience, define

    U(s) = G(s)X(s)                                          (B-6)

so that we can express Y(s) as

    Y(s) = (s − z)U(s)                                       (B-7)

This equation can be expressed in the time domain as

    y(t) = u′(t) − zu(t)                                     (B-8)

Observe that

    lim_{t→±∞} u(t) = 0                                      (B-9)

as a consequence of Eqs. (B-2) and (B-8). Using Eq. (B-8), the square of the magnitude of y(t) is

    |y(t)|² = y(t)y*(t)
            = |u′(t)|² + |z|²|u(t)|² − z[u(t)][u′(t)]* − z*[u*(t)][u′(t)]
            = |u′(t)|² + |z|²|u(t)|² − 2 Re{z[u(t)][u′(t)]*}             (B-10)

With the zero at z = z₁, call the output y₁(t); and with the zero at z = z₂, call the output y₂(t). We shall determine the difference of the partial energies of y₁(t) and y₂(t) to determine the effect of moving the zero upon the partial energy of the system output. With the zero at z = z₁, we have from Eq. (B-10)

    |y₁(t)|² = |u′(t)|² + |z₁|²|u(t)|² − 2 Re{z₁[u(t)][u′(t)]*}          (B-11)

and with the zero at z = z₂ we have

    |y₂(t)|² = |u′(t)|² + |z₂|²|u(t)|² − 2 Re{z₂[u(t)][u′(t)]*}          (B-12)

Thus their difference is

    |y₁(t)|² − |y₂(t)|² = [|z₁|² − |z₂|²]|u(t)|² − 2 Re{(z₁ − z₂)[u(t)][u′(t)]*}      (B-13)

If the zero is moved parallel to the σ axis, then z₁ − z₂ = σ₁ − σ₂ and |z₁|² − |z₂|² = σ₁² − σ₂², so that, from Eq. (B-13),

    |y₁(t)|² − |y₂(t)|² = [σ₁² − σ₂²]|u(t)|² − (σ₁ − σ₂) 2 Re{[u(t)][u′(t)]*}         (B-14)

Now observe that

    (d/dt)|u(t)|² = (d/dt)[u(t)u*(t)] = [u(t)][u′(t)]* + [u′(t)][u(t)]* = 2 Re{[u(t)][u′(t)]*}      (B-15)

so that Eq. (B-14) can be expressed as

    |y₁(t)|² − |y₂(t)|² = [σ₁² − σ₂²]|u(t)|² − (σ₁ − σ₂)(d/dt)|u(t)|²                (B-16)

Thus the difference of the partial energies of y₁(t) and y₂(t) is

    E₁(T) − E₂(T) = [σ₁² − σ₂²] ∫_{−∞}^{T} |u(t)|² dt − (σ₁ − σ₂) ∫_{−∞}^{T} (d/dt)|u(t)|² dt      (B-17)

This equation can be expressed as

    ΔE(T) = E₁(T) − E₂(T) = [σ₁² − σ₂²]A − (σ₁ − σ₂)B                                (B-18)

where A is the partial energy of u(t):

    A = ∫_{−∞}^{T} |u(t)|² dt                                (B-19a)

Because, from Eq. (B-9), u(−∞) = 0, we have

    B = ∫_{−∞}^{T} (d/dt)|u(t)|² dt = |u(T)|²                (B-19b)

For a given value of σ₁, the graph of ΔE(T) versus σ₂ is a parabola that crosses the σ₂ axis at σ₂ = σ₁ and at σ₂ = B/A − σ₁. The graph of ΔE(T) is positive between the two zero crossings. The maximum of ΔE(T) occurs halfway between the two zero crossings at σ₂ = B/2A ≥ 0, at which ΔE(T) = A[B/2A − σ₁]².
Thus we observe that E₂(T), the partial energy of y₂(t), is a minimum for σ₂ = B/2A. Now the total energy of y₂(t) is E₂(∞). For T = ∞ we have from Eqs. (B-9) and (B-19b) that B = 0, so that the total energy of y₂(t) is a minimum for σ₂ = 0. This result implies that if the s-plane zeros of a transform, Y(s), are moved parallel to the σ axis, then, of all the corresponding transient functions, the one with the smallest total energy is the one whose transform has every one of its zeros on the ω axis.

We now consider the special case for which the RAC of y(t) includes the ω axis and

    σ₂ = −σ₁ > 0                                             (B-20a)

For this case, |Y₁(jω)| = |Y₂(jω)|, and thus the energy-density spectrum of y₁(t), |Y₁(jω)|², is identical to that of y₂(t), |Y₂(jω)|². Their only difference is that the zero of Y₂(s) is in the right-half plane at z = σ₂ + jω₂, while that of Y₁(s) is in the left-half plane at z = −σ₂ + jω₂. From Eq. (B-18), the difference of their partial energies is

    E₁(T) − E₂(T) = −(σ₁ − σ₂)B = 2σ₂|u(T)|²                 (B-21)

The difference is not negative because, from Eq. (B-20a), σ₁ < 0. Thus we observe that E₁(T) ≥ E₂(T), so that, for any value of T, the partial energy of y₂(t) is no greater than that of y₁(t). Because the total energy of both functions is the same, we can say that the energy of y₂(t) has been delayed relative to that of y₁(t). This result also implies that, of all transient functions with the same energy-density spectrum, the one with the smallest partial energy is the one whose transform has all its zeros in the right half of the s plane, and the one with the greatest partial energy is the one whose transform has all its zeros in the left half of the s plane. Because the gain of an all-pass system is one at all frequencies, the energy-density spectra of its input, x(t), and output, y(t), are identical. Thus, in general, the partial energy of the all-pass input, E_x(T), is greater than that of its output, E_y(T), for all values of T, so that the output of an all-pass system is, in effect, a distorted and delayed version of its input.
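
This concluding remark, that an all-pass system preserves the energy-density spectrum but delays the energy, can be illustrated with a discrete-time analog. The sketch below (an illustrative assumption, not from the text) passes a transient through the first-order digital all-pass H(z) = (a + z^-1)/(1 + a z^-1), whose gain is one at all frequencies for real |a| < 1, and compares the running (partial) energies of input and output:

```python
import numpy as np

def allpass_first_order(x, a):
    """First-order digital all-pass: y[n] = a*x[n] + x[n-1] - a*y[n-1], with |a| < 1."""
    y = np.zeros_like(x)
    x_prev = y_prev = 0.0
    for n, xn in enumerate(x):
        y[n] = a * xn + x_prev - a * y_prev
        x_prev, y_prev = xn, y[n]
    return y

x = np.exp(-0.05 * np.arange(400)) * np.cos(0.2 * np.arange(400))   # an example transient
y = allpass_first_order(x, a=0.7)

Ex = np.cumsum(x ** 2)        # partial energy of the input
Ey = np.cumsum(y ** 2)        # partial energy of the output

print(np.isclose(Ex[-1], Ey[-1], rtol=1e-3))   # total energies are (nearly) equal
print(np.all(Ex + 1e-12 >= Ey))                # input partial energy never falls below output's
```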
Index


A number in parentheses refers to a problem number.

A
Absolute convergence, region of (RAC) 178
Absolute value 13 354
Algebra, commutative 96 (2)
complex 357
fundamental theorem of 353
Algorithm, Routh-Hurwitz 317
All-pass filter 278
All-pass system function 282
Amplifier, audio 301 (11)
ideal 9 49
intermediate frequency 298
Amplitude density 125
Amplitude modulation 148
Angle of a complex number 354
Approximation, Padé 244
piece-wise constant 41 58 (14)
philosophy of 286
Argand diagram 354
Argand J.R. 354
Argument of a complex number 354
Asymptotic behavior, of Fourier transforms 151
Asymptotic stability 85 348


Auxiliary polynomial 324

B
Bandpass filter, Butterworth 298
Bandpass system 286
Bandreject filter 290
Bandwidth 122 152 288
BIBO stability 85
Bilateral Laplace transform 177
Block diagrams 23
determination of 328
equivalent 331
reduction 331
Bounded waveform 85
Butterworth filter design 294
bandpass 298
lowpass 291

C
Cascade connection (see tandem connection)
Causal system, RAC of system function 190
Causality 81
of tandem connected systems 100 (22) 118
of passive systems 247
Causality, s-plane view of 237
Characteristic function 105
value 105
vector 105
Circuit, linear 36
time varying 8


Commutative algebra 96 (2)


Commutative property of LTI systems 66
Complex algebra 357
Complex number 353
absolute value of 354
angle of 354
argument of 354
conjugate of 355
magnitude of 354
trigonometric form of 355
powers of 360
roots of 361
Complex plane 354
Conjugate 355
Connection, tandem 63
Constraints, transfer function 161
Continuous systems 42 90
Control, robust 350
Controllable subspace 349
Convolution integral 41 43
asterisk notation for 65
commutative property of 66
Convolution property of Laplace transform 196
of Fourier transforms 144 146
Convolution 48 77
Co-ordinates, polar 354
rectangular 354


D
DC 110
DC response 130 (14)
Decomposition of system function 297
Decrement, logarithmic 301 (12)
Definition, operational 5
Delay system 18 283
approximation 244
DeMoivre’s theorem 360
DeMoivre, Abraham 360
Density, amplitude 125
mass 124
Diagram, Argand 354
Differential equation, fundamental
solution of 80
homogeneous solution of 80
Differentiator, ideal 90
Discontinuous function 47
derivative of 76
Distributed parameter system 242
Distribution theory 49
Dominant poles and zeros 290
Doublet 59 (16)

E
Echoing, model of 22
Eigenvalue 105
Eigenvector 105
Electrostatics 94


Energy density spectrum 155


Energy theorem 155
Energy, partial 283
Equation, integral 236
Equivalent block diagrams 331
Error, integral-square 157
Euler’s formula 112 356
Euler, Leonhard 356
Even function 115 136
Expansion, partial fraction 216
Exponential 356

F
Faltung 53
Feedback system 24 307
pole placement in 313
poles and zeros of 317
stability of 326
system function of 310
Filter, all-pass 278
Bandreject 290
Butterworth 291
Chebychev 299
Elliptic 299
ideal low-pass 155
low-pass 113 122
notch 290
one-pole high-pass 270
one-pole low-pass 267
Fourier analysis 118 133


Fourier series 27
Fourier transform 128 134
transform pair 134
properties 139 172 (13)
Fourier, Joseph 133
Free will 81 169
Frequency differentiation property, of
Fourier transforms 172 (13)
of Laplace transform 200
Frequency domain 110
Frequency-shift property, of Fourier
transforms 144
of Laplace transforms 195
Full-wave rectifier 13
Function 8 17 19 46 343
Function L1 119
Functions, admittance 249
bounded variation 127
characteristic 106
discontinuity of 47
even 115 136
Green’s 95
impedance 248
minimum phase 241 250
odd 136
positive real (PR) 251
rational 216
rectangle 18
signum 14
system 177

transfer 178
unit-step 10
weighting 176
Functional 343
Fundamental period 10
Fundamental theorem of algebra 353

G
Gain 15
dc 110 130 (14)
of tandem connection 121
s-plane view of 265
system restriction of 167
system 113
Gain and phase-shift, geometric view of 267
Gain constant 266
Gauss, Karl Friedrich 353
Generalized superposition 35
Ghosts, TV 314
Green’s function 95

H
Half-power bandwidth 288
Half-wave rectifier 9
Hard limiter 14
High-pass filter, one pole 270
Hilbert transform 161 166
Hilbert, David 161
Homogeneous property 35


Hume, David 81
Hurwitz, A. 318

I
Ideal amplifier 9 49
Ideal differentiator 90
Ideal integrator 18 50
Impedance function 248
Impulse 44 341
Inequality, triangle 358
Initial conditions 252
Input vector 345
Integral equation 236
Integral-square error 157
Integral, approximation of 44
evaluation of 160
Integral, Riemann 124
Integration by parts 198
Integrator, ideal 50
Interconnection of LTI systems 305
Intermediate frequency amplifier 298
Interval property of the RAC 188
Invalid models 336
Inverse mapping 125
Inverse system 4 282 314

L
L1 function 119
Laplace transform, bilateral 177
inverse 209

of system response 231
properties of 192
table of transform pairs 224
table of transform properties 224
transform pair 211
unilateral 252
Laplace, Pierre Simon 175
Left-hand limit 43
Lebesgue, Henri Leon 119
Limit concept, physical meaning of 342
Limit 43
left-hand 43
right-hand 43
Limiter, hard 14
soft 15
Linear circuit 36
Linear system 36
Linear systems 33
superposition property of 35
tandem 64
Linear vector space 105 349
Linearity property, of Laplace transform 191
of Fourier transforms 140
of inverse Laplace transform 213
Logarithmic decrement 301 (12)
Low-pass filter 113
one pole 267
Butterworth 291
ideal 168


LTI systems 37
BIBO stability criterion for 86
causality 82
characteristic function of 107
characteristic value of 107
commutative property of 66
continuous 90
fundamental property of 38
noncausal system approximation 102 (31)
sinusoidal response of 109
transfer function of 107
LTV systems 37
Lumped parameter system 241
Lyapunov, Alexander M. 85

M
Magnitude of a complex number 354
Mapping, many-to-one 3 8
one-to-one 4
Mass density 124
Matrix theory 105
Matrix, state transition 345
Meaningless questions 352 (9)
Minimum-phase functions 241 250
Minimum-phase system 281
Model consistency 336
Models ix
of echoing 22
relation between theoretical and physical ix 5 336


Modulation theorem 148


Multipath 314

N
No-memory systems 8
Notch filter 290
Numbers, complex 353

O
Observable subspace 350
Odd functions 136
Open-circuit transient response 248
Operational definition 5
Operator 3

P
Padé approximation 244
Paley-Wiener criterion 167
Paradoxes 341
Parallel connection 305
Parseval relation 155
Parseval-deschênes, Marc-Antoine 154
Partial energy 283
distribution in transient functions 363
Partial fraction expansion 216
Passive systems 245
Passivity 246
Period, fundamental 10
Periodic waveform 10
Phase constant 266


Phase-shift 11
of a delay system 283
system 113
of an all-pass system 282
of tandem connection 121
s-plane view of 265
Phasor 106
Physical system, modeling philosophy 5
Plane, complex 354
Poisson equation 94
Polar co-ordinates 354
Polar form 112
Pole placement in feedback system 313
Pole-zero pair 272
Poles and zeros, dominant 290
Poles of the system function 185
Polynomial, roots of 354
formulas for roots 317
roots determination by Routh-Hurwitz
algorithm 325
roots, first-order 218
roots, second-order 218
roots, simple 218
Positive real (PR) functions 251
Potential integral 93
Power density spectrum 169
Prediction 169
Proper rational function 216
Pulse compression 26
Pulse, normal 76


Pythagorean theorem 355

Q
Quality factor (Q) of a filter 288
Questions, meaningless 352 (9)

R
RAC (region of absolute convergence) 178
RAC, properties of 187 204 (9)
Rational functions 216
proper 216
strictly proper 216
Reality x
Rectangle function 18
Rectangular co-ordinates 354
Rectifier, full-wave 13
half-wave 9
Response, unit-impulse 44
unit-step 21
Riemann integration 124
Riemann-Lebesgue lemma 131 (20)
Riemann, Georg Friedrich Bernhard 124
Right-hand limit 43
Robust control 350
Root locus 312
Routh-Hurwitz, algorithm 319
root determination by 325
auxiliary polynomial 324
Routh, E.J. 318


S
Sampler 56 (5)
Scaling property, of Laplace transform 193
of Fourier transforms 142 152
Series, Fourier 27 (7)
Shift invariance 94
Short-circuit transient response 249
Sifting property of the impulse 48
Signum function 14
Sinusoidal waveform 10
phase-shift of 11
time-shift of 11
Soft limiter 15
Spectrum, energy density 155
power density 169
s-plane 180
view of gain and phase-shift 265
Square-law device 34
Stability, of feedback system 326
of tandem connection 100 (22) 119
asymptotic 85 348
BIBO 85
i.s.l. 85
of feedback systems 311
of tandem connected systems 100
s-plane view of 238
RAC of system function 191
State, of a system 344
transition matrix 345

variable 344
vector 345
State space analysis 343
Steady-state response 117 259
Subspace, controllable 349
observable 350
Subsystem 20
Subterfuge for unstable systems 92
Summer 20 23
Superposition property 35
Superposition generalized 73
Symmetry property of Fourier transforms 140
Synthesis, system 283
System ix 1
System analysis 4
System classification 5
System connection, feedback 307
parallel 305
tandem 306
System function 82 177 231 243
algebraic determination of 291
all-pass 282
bandpass 286
decomposition of 297
RAC for causal system 190
RAC for stable system 191
System gain 113
System inverse 4 240 282
System memory 84


System output 23
System realization, approximation 206 (19)
System response, steady-state 117 259
transient 117 259
unit-step 69
zero initial condition 256
zero input 257
System, autonomous 1
continuous 42
delay 18 283
distributed parameter 242
feedback 24 308
ideal integrator 18
homogeneous 35
inverse 314
linear 33 36
LTI 37
LTV 37
lumped parameter 241
memory 17
minimum phase 281
no-memory 8
non-autonomous 1
passive 245 250
phase-shift 113
square-law 34
state of 344
synthesis 283
two-terminal 246


T
Table of, Laplace transform pairs 224
Laplace transform properties 224
Tandem connection 63 118 306
Tandem systems, causality of 118
gain of 121
linear 64
phase-shift of 121
stability of 119
time invariance (TI) of 64
transfer function of 120
Time differentiation property, of Laplace
transform 198
of Fourier transforms 145
Time domain 110
Time invariance (TI) 5 8
of tandem systems 64
Time- and Frequency-shift property
of Fourier transforms 147
Time-differentiation property of Fourier
transforms 149
Time-shift property, of Laplace transform 194
of the RAC 187
of Fourier transforms 143
Transfer characteristic 8
Transfer function 107 178
constraints of 161
of tandem connection 120
relation of real and imaginary parts 163


Transform, Fourier 134


Hilbert 161 166
Laplace 175
Transformation, lowpass to bandpass 290
Transient functions, energy distribution in 363
Transient response 117 259
open-circuit 248
short-circuit 249
Triangle inequality 358
Two-terminal systems 246

U
Unilateral Laplace transform 252
Unit point-charge 95
Unit-impulse response 44
Unit-impulse 44 97 (7)
defining property of 74
Fourier transform of 136
sifting property of 45
Unit-step function 10
Unit-step response 21 69
Unstable systems, subterfuge for 92

V
Value, characteristic 105
Vector, characteristic 105

W
Waveform, representation 123
bounded 85



periodic 10
sinusoidal 10
Weighting function 176
Wiener, Norbert 133

Z
Zero initial condition response 256
Zero input response 257
Zeros of the system function 185
