


1.1. Example: A Dice-Rolling Problem. In board games like Monopoly, Clue, Parcheesi, and so on, players take turns rolling a pair of dice and then moving tokens on a board according to rules that depend on the dice rolls. The rules of these games are all somewhat complicated, but in most of the games, in most circumstances, the number of spaces a player moves his/her token is the sum of the numbers showing on the dice.

Consider a simplified board game in which, on each turn, a player rolls a single fair die and then moves his/her token ahead $X$ spaces, where $X$ is the number showing on the die. Assume that the board has $K$ spaces arranged in a cycle, so that after a token is moved ahead a total of $K$ spaces it returns to its original position. Assign the spaces labels $0, 1, 2, \dots, K-1$ in the order they are arranged around the cycle, with label 0 assigned to the space where the tokens are placed at the beginning of the game (in Monopoly, this would be the space marked "Go"). Then the movement of a player's token may be described as follows: after $n$ turns the token will be on the space labelled $S_n \bmod K$, where

$$S_n = \sum_{i=1}^{n} X_i \qquad (1)$$

and $X_1, X_2, \dots$ are the dice rolls obtained on successive turns.

Problem 0: What is the probability that on the tenth time around the board a player's token lands on the space labelled $K-1$?

Anyone who has played Monopoly will know that this particular space is "Boardwalk", and that if another player owns it then it is a bad thing to land on it, especially late in the game. Of course, the answer to Problem 0 isn't especially relevant to Monopoly, because the rules of movement aren't the simple rules we have imposed above; in fact, the problem is really more interesting in Monopoly, because there it turns out that the probability of landing on Boardwalk isn't quite the same as that of landing on (say) Park Place (space $K-3$). But we have to start somewhere, so Problem 0 is what we'll consider. Here is a more general problem, formulated in a more convenient way:

Problem 1: Fix an integer $x \geq 0$. What is the probability $u_x$ that $S_n = x$ for some $n$?

Observe that when $x = 10K - 1$, this reduces to Problem 0.
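Before developing any theory, it is worth checking what simulation suggests the answer should be. The following sketch (the function names are ours, not part of the notes) estimates $u_x$ by Monte Carlo for the single-die walk:

```python
import random

def hits(x, rng):
    """Run one game: does the running total S_n ever equal x?"""
    s = 0
    while s < x:
        s += rng.randint(1, 6)  # one fair-die roll
    return s == x

def estimate_u(x, trials=100_000, seed=1):
    """Monte Carlo estimate of u_x = P{S_n = x for some n}."""
    rng = random.Random(seed)
    return sum(hits(x, rng) for _ in range(trials)) / trials

# u_1 = 1/6 exactly (the first roll must be a 1); for large x the
# estimate hovers near 2/7, a value the renewal theorem of section 2.1
# will explain.
print(estimate_u(1), estimate_u(100))
```

The estimate for large $x$ stabilizes near $0.286$, which foreshadows the limit $1/\mu = 2/7$ derived below.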

There is a related issue in Monopoly that is of some interest: when you pass Go, there is a block of spaces (Baltic and Mediterranean Avenues) that you might want to avoid if you don’t own them. So what is the probability that on the tenth trip around the board you manage to jump over them altogether (that is, the chance that you don’t land on any of the spaces labelled 1,2,3)? Here is a more general formulation:




Problem 2: Fix $t \geq 0$, and define the first-passage and overshoot random variables by

$$\tau(t) = \min\{n \geq 0 : S_n > t\} \quad \text{and} \quad R(t) = S_{\tau(t)} - t. \qquad (2)$$

What is the probability distribution of $R(t)$?


1.2. Ordinary Renewal Processes. Problems 1 and 2 above are prototypical problems in renewal theory. In general, a renewal process (or more precisely, an ordinary renewal process) is the increasing sequence of random nonnegative numbers $0, S_1, S_2, \dots$ gotten by adding i.i.d. nonnegative random variables $X_1, X_2, \dots$, as in equation (1). The individual terms $S_n$ of this sequence are called renewals, or sometimes occurrences. With each renewal process is associated a renewal counting process $N(t)$ that tracks the total number of renewals (not including the initial occurrence) to date: the random variable $N(t)$ is defined by

$$N(t) = \max\{n : S_n \leq t\} = \tau(t) - 1,$$

where $\tau(t)$ is the first passage time defined by (2) above. NOTE: Some authors, including ROSS, use the term renewal process to denote the counting process $N(t)$.

In many instances (but not the dice-rolling example) the renewals have a natural interpretation as the times at which some irregularly occurring event takes place. (For example, imagine that $X_1, X_2, \dots$ are the lifetimes of replaceable components in a system, such as light bulbs in a light socket. If the first component is inserted at time 0, then the renewals $S_n$ are the times at which components must be replaced.) Thus, it is traditional in the subject to refer to the random variables $X_i$ as interoccurrence times, and their common distribution as the interoccurrence time distribution. The random variables

$$A(t) = t - S_{N(t)}, \qquad R(t) = S_{\tau(t)} - t, \qquad L(t) = S_{\tau(t)} - S_{N(t)} \qquad (5)$$

are usually called the age, residual lifetime, and total lifetime random variables (although in certain applications it is also common to refer to $R(t)$ as the overshoot). Note that $L(t) = A(t) + R(t)$.
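To make the bookkeeping concrete, here is a small sketch (the function name and example path are ours, purely for illustration) that computes $N(t)$, $A(t)$, $R(t)$, and $L(t)$ from a finite list of interoccurrence times, assuming the list is long enough that the partial sums pass $t$:

```python
def lifecycle_stats(interoccurrence_times, t):
    """Return (N(t), A(t), R(t), L(t)) for one sample path.
    Assumes the partial sums of the given times eventually exceed t."""
    s, renewals = 0, [0]                # S_0 = 0
    for x in interoccurrence_times:
        s += x
        renewals.append(s)
        if s > t:
            break
    S_tau = renewals[-1]                # S_{tau(t)}: first renewal after t
    S_N = renewals[-2]                  # S_{N(t)}: last renewal at or before t
    N = len(renewals) - 2               # renewals in (0, t]
    return N, t - S_N, S_tau - t, S_tau - S_N

# Renewals at 2, 5, 6, 10; at t = 7 the component in use was installed
# at time 6 and will be replaced at time 10.
print(lifecycle_stats([2, 3, 1, 4], 7))   # -> (3, 1, 3, 4)
```

The example path confirms $L(t) = A(t) + R(t)$: here $4 = 1 + 3$.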

1.3. Delayed Renewal Processes. It isn't always reasonable to insist that the first renewal occurs at time $S_0 = 0$. For instance, in applications where the occurrences are at times when a component of a system must be replaced, one might well be interested in situations where there is already a working component in place at time 0. For this reason, we define a delayed renewal process to be a sequence $S_0, S_1, S_2, \dots$ with

$$S_n = S_0 + \sum_{j=1}^{n} X_j,$$

where (i) the interoccurrence times $X_1, X_2, \dots$ are positive and i.i.d., as in an ordinary renewal process, and (ii) the initial delay $S_0 \geq 0$ is independent of the interoccurrence times $X_i$. Notice that the distribution of the initial delay random variable $S_0$ is not required to be the same as that of the interoccurrence time random variables $X_i$. The age, residual lifetime, and total lifetime random variables for a delayed renewal process are defined in exactly the same manner as for ordinary renewal processes (equations (5) above).



1.4. Poisson and Bernoulli Processes. Two special cases are especially important. When the interoccurrence time distribution is the exponential distribution with mean $1/\lambda$, the corresponding renewal process is the Poisson process with intensity $\lambda$. When the interoccurrence time distribution is the geometric distribution with parameter $p$, that is,

$$P\{X_i = k\} = p(1-p)^{k-1} \quad \text{for } k = 1, 2, \dots,$$

the corresponding renewal process is the Bernoulli process with success parameter $p$. Many interesting problems that are quite difficult for renewal processes in general have simple solutions for Poisson and Bernoulli processes. For instance, in general the distribution of the count random variable $N(t)$ does not have a simple closed form, but for a Poisson process it has the Poisson distribution with mean $\lambda t$. Similarly, the age and residual lifetime random variables $A(t)$ and $R(t)$ generally have distributions that do not depend on the interoccurrence time distribution in a simple way, but for the Poisson process the residual lifetime $R(t)$ is, for any fixed $t$, exponential with mean $1/\lambda$ and is independent of the age $A(t)$. These properties follow easily from the memoryless property of the exponential distribution. Renewal processes are important because in many systems the times between successive renewals do not have memoryless distributions.
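For the Bernoulli process the renewal measure can be computed exactly. The sketch below (our own illustration) uses the renewal recursion $u_m = \sum_{k=1}^{m} f_k u_{m-k}$ for $m \geq 1$, which is derived later in these notes, and confirms in exact arithmetic that with geometric interoccurrence times $u_m = p$ for every $m \geq 1$ (consistent with the limit $1/\mu = p$, since $\mu = 1/p$):

```python
from fractions import Fraction

def renewal_sequence(f, n_terms):
    """u_0 = 1 and u_m = sum_{k=1}^m f_k u_{m-k} for m >= 1,
    where f maps k to f_k."""
    u = [Fraction(1)]
    for m in range(1, n_terms):
        u.append(sum(f.get(k, Fraction(0)) * u[m - k]
                     for k in range(1, m + 1)))
    return u

# Geometric interoccurrence distribution with p = 1/3.
p = Fraction(1, 3)
f = {k: p * (1 - p) ** (k - 1) for k in range(1, 30)}
u = renewal_sequence(f, 25)
print(u[1:6])   # every entry is exactly 1/3
```

For this distribution the convergence $u_m \to 1/\mu$ is not just asymptotic but exact from $m = 1$ on, another face of memorylessness.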

Exercise: What are the distributions of $A(t)$, $R(t)$, and $L(t)$ for a Bernoulli process?

1.5. Renewal Processes: Arithmetic and Nonarithmetic. Renewal theory breaks into two parallel streams, one for discrete time, the other for continuous time. These are called the arithmetic and nonarithmetic cases, respectively. A renewal process is said to be nonarithmetic if the distribution of the interoccurrence time random variables is not fully supported by an arithmetic progression $h, 2h, 3h, \dots$. Otherwise, it is said to be arithmetic, and the maximal value of $h > 0$ such that the interoccurrence time distribution is fully supported by $h, 2h, 3h, \dots$ is called the span. For simplicity, assume henceforth that in the arithmetic case the span is $h = 1$, and also, for delayed arithmetic renewal processes, that the initial delay random variable $S_0$ is integer-valued.

2.1. Renewal Theorem: Statement. Consider now an arithmetic renewal process $\{S_n\}$. Let $\{u_m\}_{m \geq 0}$ be the associated renewal measure, that is, $u_m$ is the probability that the random walk $\{S_n\}$ ever visits the point $m$. Since the random walk makes only positive steps, it can visit a point $m$ at most once, so by the law of total probability,

$$u_m = \sum_{n=0}^{\infty} P\{S_n = m\}.$$

The cornerstone of renewal theory is the following theorem, due to FELLER, ERDÖS, & POLLARD, which asserts that the sequence $u_m$ converges to a constant.

Feller-Erdös-Pollard Theorem. Assume that the step distribution of the renewal process is not fully concentrated on any proper additive subgroup of the integers.^1 Then

$$\lim_{m \to \infty} u_m = \frac{1}{\mu}, \qquad (8)$$

where $\mu$ is the mean of the step distribution.

^1 Equivalently, there is no integer $h \geq 2$ such that the step distribution is concentrated on the set $h\mathbb{Z}$ of integer multiples of $h$.



This gives a partial answer to the dice-rolling problem (Problem 1) of section 1.1: it implies that if $x$ is large, the chance that $S_n = x$ for some $n$ is about $1/\mu = 1/(3.5) = 2/7$. Later we will see how to get much more precise numerical approximations. We'll give two proofs of the Feller-Erdös-Pollard theorem later, one (in section 3 below) based on generating functions (this for the special case where the step distribution has finite support), and another based on purely "probabilistic" considerations. We will also see, later in the course, that the Feller-Erdös-Pollard theorem is (essentially) equivalent to Kolmogorov's convergence theorem for positive recurrent Markov chains.
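The limit is easy to test numerically. A short sketch (ours, not from the notes) computes $u_m$ for the fair die from the recursion $u_m = \frac{1}{6}(u_{m-1} + \cdots + u_{m-6})$ for $m \geq 1$, with $u_0 = 1$ and $u_m = 0$ for $m < 0$, and watches it settle at $2/7 \approx 0.2857$:

```python
def die_renewal_sequence(n_terms):
    """u_m = P{some partial sum of fair-die rolls equals m}."""
    u = [1.0]
    for m in range(1, n_terms):
        u.append(sum(u[m - k] for k in range(1, 7) if m - k >= 0) / 6)
    return u

u = die_renewal_sequence(60)
print(u[1])    # 1/6: only a first roll of 1 hits the point 1
print(u[2])    # 7/36: a first roll of 2, or two consecutive 1s
print(u[59])   # already very close to 2/7 = 0.285714...
```

The early terms oscillate, but the oscillations die off quickly; the exponential rate of this convergence is explained in section 3.4.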

2.2. The Renewal Equation. The usefulness of the Feller-Erdös-Pollard theorem derives partly from its connection with another basic theorem called the Key Renewal Theorem (see below), which describes the asymptotic behavior of solutions to the Renewal Equation. The Renewal Equation is a convolution equation relating bounded sequences $\{z(m)\}_{m \geq 0}$ and $\{b(m)\}_{m \geq 0}$ of real numbers:

Renewal Equation, First Form:

$$z(m) = b(m) + \sum_{k=1}^{m} f(k) z(m-k). \qquad (10)$$

Here $f(k) = f_k$ is the interoccurrence time distribution for the renewal process. There is an equivalent way of writing the Renewal Equation that is more suggestive of how it actually arises in practice. Set $z(m) = b(m) = 0$ for $m < 0$; then the upper limit $k = m$ in the sum in the Renewal Equation may be changed to $k = \infty$ without affecting its value. The Renewal Equation may now be written as follows, with $X_1$ representing the first interoccurrence time:

Renewal Equation, Second Form:

$$z(m) = b(m) + E z(m - X_1).$$

As you will eventually come to appreciate, Renewal Equations crop up all over the place. In many circumstances, the sequence $z(m)$ is some scalar function of time whose behavior is of some interest; the renewal equation is gotten by conditioning on the value of the first interoccurrence time. In carrying out this conditioning, it is crucial to realize that the sequence $S'_1, S'_2, \dots$ defined by

$$S'_n = S_n - X_1 = \sum_{j=2}^{n} X_j$$

is itself a renewal process, independent of $X_1$, and with the same interoccurrence time distribution $f(x)$.

Example 1: The Renewal Measure. The renewal measure is the sequence $u(k) =$ probability that there is a renewal at time $k$. If $k = 0$, then $u(k) = 1$, because in an ordinary renewal process there is always a renewal at time 0. Suppose that $k \geq 1$: then in order that there be a renewal at $k$, on the event that $X_1 = m$, it must be the case that the renewal process $S'_1, S'_2, \dots$ has an occurrence at time $k - m$. Thus, the renewal measure satisfies the Renewal Equation

$$u(k) = \delta_0(k) + E u(k - X_1),$$

where $\delta_0(\cdot)$ is the Kronecker delta function. Here is a formal proof:

$$\begin{aligned}
u(k) &= \sum_{n=0}^{\infty} P\{S_n = k\} \\
&= P\{S_0 = k\} + \sum_{n=1}^{\infty} P\{S_n = k\} \\
&= \delta_0(k) + \sum_{n=1}^{\infty} \sum_{x=1}^{\infty} P\{X_1 = x \text{ and } S'_n = k - x\} \\
&= \delta_0(k) + \sum_{n=1}^{\infty} \sum_{x=1}^{\infty} P\{X_1 = x\} P\{S'_n = k - x\} \\
&= \delta_0(k) + \sum_{x=1}^{\infty} \sum_{n=1}^{\infty} P\{X_1 = x\} P\{S'_n = k - x\} \\
&= \delta_0(k) + \sum_{x=1}^{\infty} f(x) u(k - x) \\
&= \delta_0(k) + E u(k - X_1).
\end{aligned}$$

The change in the order of summation on the fifth line is justified because all the terms of the double sum are nonnegative.

Example 2: The Total Lifetime Distribution. Let $L(m)$ be the total lifetime of the component in use at time $m$. Fix $r \geq 1$, and set $z(m) = P\{L(m) = r\}$. Then $z$ satisfies the Renewal Equation (10) with

$$b(m) = f(r) \quad \text{for } m = 0, 1, 2, \dots, r-1, \qquad b(m) = 0 \quad \text{for } m \geq r. \qquad (12)$$

EXERCISE: Derive this.

2.3. Solution of the Renewal Equation. Consider the Renewal Equation in its second form $z(m) = b(m) + E z(m - X_1)$, where by convention $z(m) = 0$ for all negative values of $m$. Since the function $z(\cdot)$ appears on the right side as well as on the left, it is possible to resubstitute on the right. This leads to a sequence of equivalent equations:

$$\begin{aligned}
z(m) &= b(m) + E z(m - X_1) \\
&= b(m) + E b(m - X_1) + E z(m - X_1 - X_2) \\
&= b(m) + E b(m - X_1) + E b(m - X_1 - X_2) + E z(m - X_1 - X_2 - X_3)
\end{aligned}$$

and so on. After $m$ iterations, there is no further change (because $S_{m+1} > m$ and $z(l) = 0$ for all negative integers $l$), and the right side no longer involves $z$. Thus, it is possible to solve for $z$ in terms of the sequences $b$ and $f$:

$$\begin{aligned}
z(m) &= \sum_{n=0}^{m} E b(m - S_n) \\
&= \sum_{n=0}^{\infty} \sum_{x=0}^{\infty} b(m-x) P\{S_n = x\} \\
&= \sum_{x=0}^{\infty} \sum_{n=0}^{\infty} b(m-x) P\{S_n = x\} \\
&= \sum_{x=0}^{\infty} b(m-x) u(x).
\end{aligned}$$


Note that only finitely many terms in the series are nonzero, so the interchange of summations is justified. Thus, the solution to the Renewal Equation is the convolution of the sequence $b(m)$ with the renewal measure:


$$z(m) = \sum_{x=0}^{m} b(m-x) u(x). \qquad (13)$$
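The convolution formula can be checked against the recursion it came from. In the sketch below (the choices of $f$ and $b$ are ours, purely for illustration), both routes produce the same sequence $z(m)$:

```python
def solve_renewal(f, b, n):
    """Solve z(m) = b(m) + sum_k f_k z(m-k) two ways: by the recursion
    itself, and by the convolution formula z(m) = sum_x b(m-x) u(x)."""
    u = [1.0]                                       # renewal measure
    for m in range(1, n):
        u.append(sum(f.get(k, 0.0) * u[m - k] for k in range(1, m + 1)))
    conv = [sum(b(m - x) * u[x] for x in range(m + 1)) for m in range(n)]
    rec = []
    for m in range(n):
        rec.append(b(m) + sum(f.get(k, 0.0) * rec[m - k]
                              for k in range(1, m + 1)))
    return conv, rec

f = {1: 0.5, 2: 0.5}                 # interoccurrence distribution
b = lambda m: 0.25 ** m              # an arbitrary summable sequence
conv, rec = solve_renewal(f, b, 15)
print(max(abs(c - r) for c, r in zip(conv, rec)))   # zero up to rounding
```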

2.4. The Key Renewal Theorem. The formula (13) and the Feller-Erdös-Pollard theorem now combine to give the asymptotic behavior (as $m \to \infty$) of the solution $z$.

Key Renewal Theorem (Lattice Case). Let $z(m)$ be the solution to the Renewal Equation (10). If the sequence $b(m)$ is absolutely summable, then

$$\lim_{m \to \infty} z(m) = \mu^{-1} \sum_{k=0}^{\infty} b(k). \qquad (14)$$

Proof. The formula (13) may be rewritten as

$$z(m) = \sum_{k=0}^{\infty} b(k) u(m-k). \qquad (15)$$

For each fixed $k$, $u(m-k) \to \mu^{-1}$ as $m \to \infty$, by the Feller-Erdös-Pollard theorem. Thus, as $m \to \infty$, the $k$th term of the series (15) converges to $\mu^{-1} b(k)$. Moreover, because $u(m-k) \leq 1$, the $k$th term is bounded in absolute value by $|b(k)|$. By hypothesis, this sequence is summable, so the Dominated Convergence Theorem implies that the series converges as $m \to \infty$ to the right side of (14).

Example 4: Limiting Residual Lifetime Distribution. Recall that the sequence $z(m) = P\{R(m) = r\}$ satisfies the Renewal Equation (??), which reduces to (10) with $b(m) = f(m+r)$. The sequence $b(m)$ is summable, because $\mu = E X_1 < \infty$ (explain this). Therefore, the Key Renewal Theorem implies that for each $r = 1, 2, 3, \dots$,

$$\lim_{m \to \infty} P\{R(m) = r\} = \mu^{-1} \sum_{k=0}^{\infty} f(k + r) = \mu^{-1} P\{X_1 \geq r\}.$$

Notice that this is exactly the same limiting distribution as for $A(m) + 1$, where $A(m)$ is the age of the component in use at time $m$ (compare equation (??)).
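This limit can be verified numerically for the fair die of section 1.1, where $\mu = 7/2$ and $P\{X_1 \geq r\} = (7-r)/6$, so the limit is $(7-r)/21$. The sketch below (ours) evaluates the convolution solution $P\{R(m) = r\} = \sum_x f(m - x + r)\, u(x)$ at a moderately large $m$:

```python
def residual_probability(f, r, m):
    """P{R(m) = r} via the convolution solution of the renewal
    equation with b(m) = f(m + r)."""
    u = [1.0]                          # renewal measure u(0), ..., u(m)
    for j in range(1, m + 1):
        u.append(sum(f.get(k, 0.0) * u[j - k] for k in range(1, j + 1)))
    return sum(f.get(m - x + r, 0.0) * u[x] for x in range(m + 1))

die = {k: 1 / 6 for k in range(1, 7)}
for r in range(1, 7):
    print(r, residual_probability(die, r, 200), (7 - r) / 21)
```

The computed values at $m = 200$ agree with $(7-r)/21$ to several decimal places, and they sum to 1 over $r = 1, \dots, 6$, as a probability distribution must.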



Example 5: Limiting Total Lifetime Distribution. Recall (Example 2 above) that the sequence $z(m) = P\{L(m) = r\}$ satisfies the Renewal Equation (10) with $b(m)$ defined by (12). Only finitely many terms of the sequence $b(m)$ are nonzero, and so the summability hypothesis of the Key Renewal Theorem is satisfied. Since $\sum_{k \geq 0} b(k) = r f(r)$, it follows from (14) that

Corollary 1.

$$\lim_{m \to \infty} P\{L(m) = r\} = \frac{r f(r)}{\mu}.$$




3.1. Generating Functions. The Renewal Theorem tells us how the age and residual lifetime distributions behave at large times, and similarly the Key Renewal Theorem tells us how the solution of a renewal equation behaves at infinity. In certain cases, exact calculations are possible; these are usually carried out using generating functions.

Definition 1. The generating function of a sequence $\{a_n\}_{n \geq 0}$ of real (or complex) numbers is the function $A(z)$ defined by the power series

$$A(z) = \sum_{n=0}^{\infty} a_n z^n. \qquad (18)$$

Observe that for an arbitrary sequence $\{a_n\}$ the series (18) need not converge for all complex values of the argument $z$. In fact, for some sequences the series (18) diverges for every $z$ except $z = 0$: this is the case, for instance, if $a_n = n^n$. But for many sequences of interest, there will exist a positive number $R$ such that, for all complex numbers $z$ with $|z| < R$, the series (18) converges absolutely. In such cases, the generating function $A(z)$ is said to have positive radius of convergence. The generating functions in all of the problems considered in these notes will have positive radius of convergence. Notice that if the entries of the sequence $a_n$ are probabilities, that is, if $0 \leq a_n \leq 1$ for all $n$, then the series (18) converges absolutely for all $z$ such that $|z| < 1$.

If the generating function $A(z)$ has positive radius of convergence then, at least in principle, all information about the sequence $\{a_n\}$ is encapsulated in the generating function $A(z)$. In particular, each coefficient $a_n$ can be recovered from the function $A(z)$, since $n!\, a_n$ is the $n$th derivative of $A(z)$ at $z = 0$. Other information may also be recovered from the generating function: for example, if the sequence $\{a_n\}$ is a discrete probability density, then its mean may be obtained by evaluating the derivative $A'(z)$ at $z = 1$, and all of the higher moments may be recovered from the higher derivatives of $A(z)$ at $z = 1$.

A crucially important property of generating functions is the multiplication law: the generating function of the convolution of two sequences is the product of their generating functions. This is the basis of most uses of generating functions in random walk theory, and all of the examples considered below. The calculation that establishes this property is spelled out in section ??. Note that for probability generating functions, this fact is a consequence of the multiplication law for expectations of independent random variables: if $X$ and $Y$ are independent, nonnegative-integer valued random variables, then

$$E z^{X+Y} = E z^X \, E z^Y.$$
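The multiplication law is just the familiar rule for multiplying power series: the coefficients of the product are the convolution of the coefficient sequences. A quick sketch (ours) using the two-dice setting of section 1.1:

```python
def convolve(a, b):
    """Coefficients of A(z) * B(z): c_m = sum_k a_k b_{m-k}."""
    return [sum(a[k] * b[m - k]
                for k in range(len(a)) if 0 <= m - k < len(b))
            for m in range(len(a) + len(b) - 1)]

die = [0] + [1 / 6] * 6          # coefficients of the pgf of one fair die
two_dice = convolve(die, die)    # pgf coefficients of the sum of two dice
print(two_dice[7])               # P{X + Y = 7} = 6/36
print(sum(two_dice))             # a probability distribution: sums to 1
```

Convolving the single-die coefficient list with itself gives exactly the distribution of the sum of two independent dice, as the multiplication law promises.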



3.2. The Renewal Equation. Let $\{f_k\}_{k \geq 1}$ be a probability distribution on the positive integers with finite mean $\mu = \sum_k k f_k$, and let $X_1, X_2, \dots$ be a sequence of independent, identically distributed random variables with common discrete distribution $\{f_k\}$. Define the renewal sequence associated to the distribution $\{f_k\}$ to be the sequence

$$u_m = P\{S_n = m \text{ for some } n \geq 0\} = \sum_{n=0}^{\infty} P\{S_n = m\},$$

where $S_n = X_1 + X_2 + \cdots + X_n$ (and $S_0 = 0$). Note that $u_0 = 1$. A simple linear recursion for the sequence $u_m$, called the renewal equation, may be obtained by conditioning on the first step $S_1 = X_1$:


$$u_m = \sum_{k=1}^{m} f_k u_{m-k} + \delta_0(m), \qquad (21)$$

where $\delta_0$ is the Kronecker delta (1 at 0, and 0 elsewhere).

The renewal equation is a particularly simple kind of recursive relation: the right side is just the convolution of the sequences $f_k$ and $u_m$. The appearance of a convolution should always suggest the use of some kind of generating function or transform (Fourier or Laplace), because these always convert convolution to multiplication. Let's try it: define generating functions


$$U(z) = \sum_{m=0}^{\infty} u_m z^m \quad \text{and} \quad F(z) = \sum_{k=1}^{\infty} f_k z^k.$$



Observe that if you multiply the renewal equation (21) by $z^m$ and sum over $m$ then the left side becomes $U(z)$, so


$$\begin{aligned}
U(z) &= 1 + \sum_{m=0}^{\infty} \sum_{k=1}^{m} f_k u_{m-k} z^m \\
&= 1 + \sum_{k=1}^{\infty} \sum_{m=k}^{\infty} f_k z^k \, u_{m-k} z^{m-k} \\
&= 1 + F(z) U(z).
\end{aligned}$$

Thus, we have a simple functional equation relating the generating functions $U(z)$ and $F(z)$. It may be solved for $U(z)$ in terms of $F(z)$:

$$U(z) = \frac{1}{1 - F(z)}. \qquad (25)$$
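Equation (25) can be used directly to compute $u_m$: expanding $1/(1 - F(z))$ as a power series by long division reproduces the renewal sequence. A sketch (ours), in exact arithmetic, for the distribution $f_1 = f_2 = 1/2$ treated in the next section:

```python
from fractions import Fraction

def series_inverse(c, n):
    """First n coefficients of 1 / C(z), where C(z) = sum_k c_k z^k
    and c_0 != 0 (power-series long division)."""
    inv = [Fraction(1) / c[0]]
    for m in range(1, n):
        s = sum(c[k] * inv[m - k] for k in range(1, min(len(c), m + 1)))
        inv.append(-s / c[0])
    return inv

# 1 - F(z) = 1 - z/2 - z^2/2 for the distribution f_1 = f_2 = 1/2.
one_minus_F = [Fraction(1), Fraction(-1, 2), Fraction(-1, 2)]
u = series_inverse(one_minus_F, 8)
print(u)   # 1, 1/2, 3/4, 5/8, 11/16, ... approaching 2/3 = 1/mu
```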

3.3. Partial Fraction Decompositions. Formula (25) tells us how the generating function of the renewal sequence is related to the probability generating function of the steps $X_j$. Extracting useful information from this relation is, in general, a difficult analytical task. However, in the special case where the probability distribution $\{f_k\}$ has finite support, the method of partial fraction decomposition provides an effective method for recovering the terms $u_m$ of the renewal sequence.



Observe that when the probability distribution $\{f_k\}$ has finite support, its generating function $F(z)$ is a polynomial, and so in this case the generating function $U(z)$ is a rational function.^2

The strategy behind the method of partial fraction decomposition rests on the fact that a simple pole may be expanded as a geometric series: in particular, for $|z| < 1$,

$$(1 - z)^{-1} = \sum_{n=0}^{\infty} z^n. \qquad (26)$$

Differentiating with respect to $z$ repeatedly gives a formula for a pole of order $k + 1$:

$$(1 - z)^{-k-1} = \sum_{n=k}^{\infty} \binom{n}{k} z^{n-k}. \qquad (27)$$

Suppose now that we could write the generating function $U(z)$ as a sum of poles $C/(1 - (z/\zeta))^{k+1}$ (such a sum is called a partial fraction decomposition). Then each of the poles could be expanded in a series of type (26) or (27), and so the coefficients of $U(z)$ could be obtained by adding the corresponding coefficients in the series expansions for the poles.

Example: Consider the probability distribution $f_1 = f_2 = 1/2$. The generating function $F$ is given by $F(z) = (z + z^2)/2$. The problem is to obtain a partial fraction decomposition for $(1 - F(z))^{-1}$. To do this, observe that at every pole $z = \zeta$ the function $1 - F(z)$ must take the value 0. Thus, we look for potential poles at the zeros of the polynomial $1 - F(z)$. In the case under consideration, the polynomial is quadratic, with roots $\zeta_1 = 1$ and $\zeta_2 = -2$. Since each of these is a simple root, both poles should be simple; thus, we should try

$$\frac{1}{1 - (z + z^2)/2} = \frac{C_1}{1 - z} + \frac{C_2}{1 + (z/2)}.$$

The values of $C_1$ and $C_2$ can be gotten either by adding the fractions and seeing what works, or by differentiating both sides and seeing what happens at each of the two poles. The upshot is that $C_1 = 2/3$ and $C_2 = 1/3$. Thus,


$$U(z) = \frac{1}{1 - F(z)} = \frac{2/3}{1 - z} + \frac{1/3}{1 + (z/2)}.$$

We can now read off the renewal sequence $u_m$ by expanding the two poles in geometric series:

$$u_m = \frac{2}{3} + \frac{1}{3} (-2)^{-m}.$$

There are several things worth noting. First, the renewal sequence $u_m$ has limit $2/3$. This equals $1/\mu$, where $\mu = 3/2$ is the mean of the distribution $\{f_k\}$. We should be reassured by this, because it is what the Feller-Erdös-Pollard Renewal Theorem predicts the limit should be. Second, the remainder term $(1/3)(-2)^{-m}$ decays exponentially in $m$. As we shall see, this is always the case for distributions $\{f_k\}$ with finite support. It is not always the case for arbitrary distributions $\{f_k\}$, however.
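A sanity check (our own): compare the closed form against the renewal recursion $u_m = \frac{1}{2}u_{m-1} + \frac{1}{2}u_{m-2}$ in exact arithmetic, using $(-2)^{-m} = (-1/2)^m$:

```python
from fractions import Fraction

def u_recursive(n):
    """Renewal sequence for f_1 = f_2 = 1/2 from the recursion (21)."""
    u = [Fraction(1)]
    for m in range(1, n):
        term = Fraction(1, 2) * u[m - 1]
        if m >= 2:
            term += Fraction(1, 2) * u[m - 2]
        u.append(term)
    return u

closed = lambda m: Fraction(2, 3) + Fraction(1, 3) * Fraction(-1, 2) ** m
u = u_recursive(12)
print(all(u[m] == closed(m) for m in range(12)))   # True
```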

Problem 1. The Fibonacci sequence 1, 1, 2, 3, 5, 8, … is the sequence $a_n$ such that $a_1 = a_2 = 1$ and

$$a_{m+2} = a_m + a_{m+1}.$$

(A) Find a functional equation for the generating function of the Fibonacci sequence. (B) Use the method of partial fractions to deduce a formula for the terms of the Fibonacci sequence. NOTE:

2 A rational function is the ratio of two polynomials.



Your answer should involve the so-called golden ratio (the larger root of the equation $x^2 - x - 1 = 0$).



3.4. Step Distributions with Finite Support. Assume now that the step distribution $\{f_k\}$ has finite support, is nontrivial (that is, does not assign probability 1 to a single point), and is nonlattice (that is, it does not give probability 1 to a proper arithmetic progression). Then the generating function $F(z) = \sum_k f_k z^k$ is a polynomial of degree at least two. By the Fundamental Theorem of Algebra, $1 - F(z)$ may be written as a product of linear factors:

$$1 - F(z) = C \prod_{j=1}^{D} (1 - z/\zeta_j), \qquad (30)$$

where $D$ is the degree of the polynomial $F$.

The numbers $\zeta_j$ in this expansion are the (possibly complex) roots of the polynomial equation $F(z) = 1$. Since the coefficients $f_k$ of $F(z)$ are real, the roots of $F(z) = 1$ come in conjugate pairs; thus, it is only necessary to find the roots in the upper half plane (that is, those with nonnegative imaginary part). In practice, it is usually necessary to solve for these roots numerically. The following lemma states that none of the roots is inside the unit circle in the complex plane.

Lemma 2. If the step distribution $\{f_k\}$ is nontrivial, nonlattice, and has finite support, then the polynomial $1 - F(z)$ has a simple root at $\zeta_1 = 1$, and all other roots $\zeta_j$ satisfy the inequality $|\zeta_j| > 1$.

Proof. It is clear that $\zeta_1 = 1$ is a root, since $F(z)$ is a probability generating function. To see that $\zeta_1 = 1$ is a simple root (that is, occurs only once in the product (30)), note that if it were a multiple root then it would have to be a root of the derivative $F'(z)$ (since the factor $(1 - z)$ would occur at least twice in the product (30)). If this were the case, then $F'(1) = 0$; but $F'(1)$ is the mean of the probability distribution $\{f_k\}$, and since this distribution has support contained in $\{1, 2, 3, \dots\}$, its mean is at least 1.

In order that $\zeta$ be a root of $1 - F(z)$ it must be the case that $F(\zeta) = 1$. Since $F(z)$ is a probability generating function, this can only happen if $|\zeta| \geq 1$. Thus, to complete the proof we must show that there are no roots of modulus one other than $\zeta = 1$. Suppose, then, that $\zeta = e^{i\theta}$ is such that $F(\zeta) = 1$, equivalently,

$$\sum_k f_k e^{i\theta k} = 1.$$

Then for every $k$ such that $f_k > 0$ it must be that $e^{i\theta k} = 1$. This implies that $\theta$ is an integer multiple of $2\pi/k$, and that this is true for every $k$ such that $f_k > 0$. Since the distribution $\{f_k\}$ is nonlattice, the greatest common divisor of the integers $k$ such that $f_k > 0$ is 1. Hence, $\theta$ is an integer multiple of $2\pi$, and so $\zeta = 1$.
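Lemma 2 can be probed numerically: on the unit circle, $|F(e^{i\theta}) - 1|$ should be bounded away from 0 except at $\theta = 0$. A sketch (ours) for the fair-die distribution:

```python
import cmath

def F(z, f):
    """Probability generating function sum_k f_k z^k."""
    return sum(fk * z ** k for k, fk in f.items())

die = {k: 1 / 6 for k in range(1, 7)}
# Sample the unit circle at 399 points, skipping theta = 0.
gaps = [abs(F(cmath.exp(2j * cmath.pi * m / 400), die) - 1)
        for m in range(1, 400)]
print(min(gaps))   # bounded away from 0: zeta = 1 is the only root on |z| = 1
```

Near $\theta = 0$ the gap behaves like $\mu |\theta|$, consistent with $\zeta_1 = 1$ being a simple root.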

Corollary 3. If the step distribution $\{f_k\}$ is nontrivial, nonlattice, and has finite support, then

$$\frac{1}{1 - F(z)} = \frac{1}{\mu(1 - z)} + \sum_{r=1}^{R} \frac{C_r}{(1 - z/\zeta_r)^{k_r}}, \qquad (31)$$

where $\mu$ is the mean of the distribution $\{f_k\}$ and the poles $\zeta_r$ are all of modulus strictly greater than 1.

Proof. The only thing that remains to be proved is that the simple pole at $z = 1$ has residue $1/\mu$. To see this, multiply both sides of equation (31) by $1 - z$:

$$\frac{1 - z}{1 - F(z)} = C + (1 - z) \sum_{r=1}^{R} \frac{C_r}{(1 - z/\zeta_r)^{k_r}}.$$

Now take the limit of both sides as $z \to 1$: the limit of the right side is clearly $C$, and the limit of the left side is $1/\mu$, because $\mu$ is the derivative of $F(z)$ at $z = 1$. Hence, $C = 1/\mu$.

Corollary 4. If the step distribution $\{f_k\}$ is nontrivial, nonlattice, and has finite support, then

$$\lim_{m \to \infty} u_m = \frac{1}{\mu},$$

and the remainder decays exponentially as $m \to \infty$.



Remark. The last corollary is a special case of the Feller-Erdös-Pollard Renewal Theorem. This theorem asserts that (8) is true under the weaker hypothesis that the step distribution is non- trivial and nonlattice. Various proofs of the Feller-Erdös-Pollard theorem are known, some of which exploit the relation (25). Partial fraction decomposition does not work in the general case, though.


Renewal theory in the nonlattice case is technically more complicated, but the main results are very similar to those in the lattice case once the obvious adjustments for continuous time (e.g., changing sums to integrals) are made. Since many of the arguments are parallel to those in the lattice case, we shall merely record the main results without proofs.

A nonlattice distribution is one which is not lattice, that is, does not attach all of its probability to an arithmetic progression $h, 2h, 3h, \dots$. Probability distributions with continuous densities $f(x)$ for $x > 0$ are nonlattice, but so are lots of other distributions that do not have densities, including some discrete ones. An example that will turn out to be interesting is the discrete distribution that assigns mass $f_k$ to the point $2^{-k}$ for $k = 0, 1, 2, 3, \dots$, with $f_k > 0$ for infinitely many values of $k$ (note that if $f_k > 0$ for only finitely many $k$ then the distribution would be lattice). Throughout this section we assume that $0, S_1, S_2, \dots$ is a renewal process with interoccurrence distribution $F$ satisfying

Assumption 1. The interoccurrence time distribution F is supported by the positive real num- bers and is nonlattice.

Assumption 2. The interoccurrence time distribution has finite mean $\mu = \int_0^\infty x \, dF(x)$.

Renewal Measure: In the nonlattice case, the renewal measure is defined by its cumulative mass function

$$U(t) = 1 + E N(t) = \sum_{n=0}^{\infty} P\{S_n \leq t\}.$$


Blackwell's Theorem. For each positive real number $h > 0$,

$$\lim_{t \to \infty} \left( U(t + h) - U(t) \right) = \frac{h}{\mu}.$$
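Blackwell's theorem is easy to illustrate by simulation. The sketch below (ours; the parameter choices are arbitrary) counts renewals in the window $(t, t+h]$ for exponential interoccurrence times with mean $\mu = 2$, so the limit $h/\mu$ is $0.5$:

```python
import random

def renewals_in_window(t, h, mu, rng):
    """Count renewals falling in (t, t + h] on one sample path."""
    s, count = 0.0, 0
    while s <= t + h:
        s += rng.expovariate(1 / mu)   # interoccurrence time, mean mu
        if t < s <= t + h:
            count += 1
    return count

rng = random.Random(7)
mu, t, h = 2.0, 50.0, 1.0
trials = 40_000
est = sum(renewals_in_window(t, h, mu, rng) for _ in range(trials)) / trials
print(est)   # close to h / mu = 0.5
```

For this (Poisson) special case the expected count is $h/\mu$ exactly at every $t$; for a general nonlattice distribution the simulation would only approach $h/\mu$ as $t$ grows.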



This is the natural analogue of the Feller-Erdös-Pollard Theorem for nonlattice renewal processes. As in the lattice case, the practical importance of Blackwell's Theorem stems from its connection with a Key Renewal Theorem that describes the asymptotic behavior of solutions to the Renewal Equation. The Renewal Equation in the nonlattice case is a convolution equation relating two (measurable) functions $z(t)$ and $b(t)$ that are defined for all real $t$, and satisfy $z(t) = b(t) = 0$ for all $t < 0$.



Renewal Equation:

$$z(t) = b(t) + E z(t - X_1) = b(t) + \int_0^t z(t - s) \, dF(s). \qquad (35)$$

Technical Note: The notation $dF$ denotes integration with respect to the interoccurrence distribution. If this distribution has a density $f(t)$ then $dF(t) = f(t)\,dt$. Otherwise, $dF$ must be interpreted as a Lebesgue-Stieltjes integral. If you have never heard of these, don't worry — all the important examples in this course will either be purely discrete or will have a density.

Just as in the lattice case, the renewal equation has a unique solution $z(t)$ provided $b(t)$ is bounded. There is an explicit formula for this:


$$z(t) = \sum_{n=0}^{\infty} E b(t - S_n) = \int_0^t b(t - s) \, dU(s).$$

The Key Renewal Theorem for the nonlattice case differs from the corresponding theorem for the lattice case in an important technical respect. In the lattice case, it was necessary only to assume that the sequence $b(m)$ be absolutely summable. Thus, it is natural to expect that in the nonlattice case it should be enough to assume that the function $b(t)$ be $L^1$, that is, absolutely integrable. However, this isn't so! The right condition turns out to be direct Riemann integrability, which is defined as follows.

Definition 2. Let $b(t)$ be a real-valued function of $t$ such that $b(t) = 0$ for all $t < 0$. Then $b(t)$ is said to be directly Riemann integrable if

$$\lim_{a \to 0+} \sum_{n=1}^{\infty} a \min_{(n-1)a \leq t \leq na} b(t) = \lim_{a \to 0+} \sum_{n=1}^{\infty} a \max_{(n-1)a \leq t \leq na} b(t) = \int_0^\infty b(t) \, dt. \qquad (37)$$

Notice that the sums would be Riemann sums if not for the infinite limit of summation. If a function is directly Riemann integrable, then it is Lebesgue integrable (if you don't know what this is, don't worry about it), but the converse is not true. In practice, one rarely needs to deal with the condition (37), because there are relatively simple sufficient conditions that guarantee direct Riemann integrability:

Lemma 5. If $b(t)$ is nonnegative and nonincreasing on $[0, \infty)$, and if the Lebesgue (or Riemann) integral $\int_0^\infty b(t) \, dt < \infty$, then $b(t)$ is directly Riemann integrable.


Lemma 6. If $b(t)$ has compact support and is Riemann integrable, then it is directly Riemann integrable.

The proofs are omitted but are not extremely difficult.

Key Renewal Theorem (Nonlattice Case). Assume that the interoccurrence time distribution is nonlattice and has finite mean $\mu > 0$. If $b(t)$ is directly Riemann integrable, then the solution $z(t)$ of the renewal equation (35) satisfies

$$\lim_{t \to \infty} z(t) = \mu^{-1} \int_0^\infty b(t) \, dt.$$