1. INTRODUCTION
The lognormal random variable $Y = e^X$, where $X \sim \mathrm{N}(\mu, \sigma^2)$, is familiar to casualty actuaries, especially to those in reinsurance. It vies with the Pareto for the description of heavy-tailed and catastrophic losses. However, unlike the Pareto, all its moments are finite. Moreover, the formula for the lognormal moments is rather simple: $E[Y^n] = e^{n\mu + n^2\sigma^2/2}$. So its first two moments are $E[Y] = e^{\mu + \sigma^2/2}$ and $E[Y^2] = e^{2\mu + 2\sigma^2} = E[Y]^2 e^{\sigma^2}$. Hence, its variance is $\mathrm{Var}[Y] = E[Y]^2\left(e^{\sigma^2} - 1\right)$, a formula so well known that actuaries commonly refer to $e^{\sigma^2} - 1$ as the “CV squared” of the lognormal. But in recent years, with the rise of ERM and capital modeling, actuaries have needed to model many interrelated random variables. If these random variables are heavy-tailed, it may be apt to model them with the lognormal.
1 The standard reference for the lognormal distribution is Klugman [1998, Appendix A.4.1.1]. On the subject of heavy-tailed distributions, see Klugman [1998, §2.7.2] and Halliwell [2013].
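These scalar formulas are easy to confirm numerically. The following is a minimal sketch (the parameter values are hypothetical illustrations, not from the paper) comparing the closed-form mean and variance of $Y = e^X$ against Monte Carlo estimates:

```python
import math
import random

# Hypothetical illustrative parameters (not from the paper).
mu, sigma = 0.0, 0.5

# Closed-form lognormal moments: E[Y^n] = exp(n*mu + n^2*sigma^2/2)
mean_y = math.exp(mu + sigma**2 / 2)
var_y = mean_y**2 * (math.exp(sigma**2) - 1)   # E[Y]^2 times the "CV squared"

# Monte Carlo check: Y = e^X with X ~ N(mu, sigma^2)
rng = random.Random(12345)
ys = [math.exp(rng.gauss(mu, sigma)) for _ in range(200_000)]
mc_mean = sum(ys) / len(ys)
mc_var = sum((y - mc_mean) ** 2 for y in ys) / (len(ys) - 1)

print(mean_y, mc_mean)   # closed form vs. simulation: should agree closely
print(var_y, mc_var)
```

With 200,000 draws the simulated mean and variance should land within a percent or two of the closed forms.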
2. THE LOGNORMAL MULTIVARIATE

The lognormal random multivariate is

$$\mathbf{y} = e^{\mathbf{x}} = \begin{bmatrix} e^{X_1} \\ \vdots \\ e^{X_n} \end{bmatrix},$$

where $\mathbf{x}$ is an n×1 normal multivariate with n×1 mean $\boldsymbol{\mu}$ and n×n variance $\Sigma$. As a realistic variance, $\Sigma$ must be positive-definite, hence invertible.2
The probability density function of the normal random vector $\mathbf{x}$ with mean $\boldsymbol{\mu}$ and variance $\Sigma$ is:3

$$f_{\mathbf{x}}(\mathbf{x}) = \frac{1}{\sqrt{(2\pi)^n \lvert\Sigma\rvert}}\, e^{-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})'\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})}$$

Therefore, $\int_{\mathbb{R}^n} f_{\mathbf{x}}(\mathbf{x})\, dV_{\mathbf{x}} = 1$. The single integral over $\mathbb{R}^n$ represents an n-multiple integral over each element of $\mathbf{x}$. The moment generating function of $\mathbf{x}$ is $M_{\mathbf{x}}(\mathbf{t}) = E\!\left[e^{\mathbf{t}'\mathbf{x}}\right] = E\!\left[e^{\sum_{j=1}^{n} t_j X_j}\right]$, where $\mathbf{t}$ is an n×1 vector. Partial derivatives of the moment generating function, evaluated at $\mathbf{t} = \mathbf{0}$, produce the moments of $\mathbf{x}$:

$$\frac{\partial^{k_1 + \cdots + k_n} M_{\mathbf{x}}(\mathbf{t})}{\partial^{k_1} t_1 \cdots \partial^{k_n} t_n}\Bigg|_{\mathbf{t}=\mathbf{0}} = E\!\left[X_1^{k_1} \cdots X_n^{k_n}\, e^{\mathbf{t}'\mathbf{x}}\right]\Big|_{\mathbf{t}=\mathbf{0}} = E\!\left[X_1^{k_1} \cdots X_n^{k_n}\right]$$
The lognormal moments come directly from the normal moment generating function. For example, $M_{\mathbf{x}}(\mathbf{e}_j + \mathbf{e}_k) = E\!\left[e^{X_j + X_k}\right] = E[Y_j Y_k]$, where $\mathbf{e}_j$ denotes the jth unit vector. So the normal moment generating function is the key to the lognormal moments.
$$\begin{aligned}
M_{\mathbf{x}}(\mathbf{t}) = E\!\left[e^{\mathbf{t}'\mathbf{x}}\right]
&= \int_{\mathbb{R}^n} \frac{1}{\sqrt{(2\pi)^n \lvert\Sigma\rvert}}\, e^{-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})'\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})}\, e^{\mathbf{t}'\mathbf{x}}\, dV_{\mathbf{x}} \\
&= \int_{\mathbb{R}^n} \frac{1}{\sqrt{(2\pi)^n \lvert\Sigma\rvert}}\, e^{-\frac{1}{2}\left[(\mathbf{x}-\boldsymbol{\mu})'\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu}) - 2\mathbf{t}'\mathbf{x}\right]}\, dV_{\mathbf{x}}
\end{aligned}$$

Now:

$$(\mathbf{x}-\boldsymbol{\mu})'\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu}) - 2\mathbf{t}'\mathbf{x} = (\mathbf{x}-\boldsymbol{\mu}-\Sigma\mathbf{t})'\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu}-\Sigma\mathbf{t}) - 2\mathbf{t}'\boldsymbol{\mu} - \mathbf{t}'\Sigma\mathbf{t}$$

We leave it for the reader to verify the identity. By substitution, we have:

$$\begin{aligned}
M_{\mathbf{x}}(\mathbf{t})
&= \int_{\mathbb{R}^n} \frac{1}{\sqrt{(2\pi)^n \lvert\Sigma\rvert}}\, e^{-\frac{1}{2}\left[(\mathbf{x}-\boldsymbol{\mu})'\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu}) - 2\mathbf{t}'\mathbf{x}\right]}\, dV_{\mathbf{x}} \\
&= \int_{\mathbb{R}^n} \frac{1}{\sqrt{(2\pi)^n \lvert\Sigma\rvert}}\, e^{-\frac{1}{2}\left[(\mathbf{x}-\boldsymbol{\mu}-\Sigma\mathbf{t})'\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu}-\Sigma\mathbf{t}) - 2\mathbf{t}'\boldsymbol{\mu} - \mathbf{t}'\Sigma\mathbf{t}\right]}\, dV_{\mathbf{x}} \\
&= \left\{\int_{\mathbb{R}^n} \frac{1}{\sqrt{(2\pi)^n \lvert\Sigma\rvert}}\, e^{-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu}-\Sigma\mathbf{t})'\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu}-\Sigma\mathbf{t})}\, dV_{\mathbf{x}}\right\}\, e^{\mathbf{t}'\boldsymbol{\mu} + \mathbf{t}'\Sigma\mathbf{t}/2} \\
&= 1 \cdot e^{\mathbf{t}'\boldsymbol{\mu} + \mathbf{t}'\Sigma\mathbf{t}/2} \\
&= e^{\mathbf{t}'\boldsymbol{\mu} + \mathbf{t}'\Sigma\mathbf{t}/2}
\end{aligned}$$
The reduction of the integral to unity in the second-last line is due to the fact that $\frac{1}{\sqrt{(2\pi)^n \lvert\Sigma\rvert}}\, e^{-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu}-\Sigma\mathbf{t})'\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu}-\Sigma\mathbf{t})}$ is the probability density function of the normal random vector with mean $\boldsymbol{\mu} + \Sigma\mathbf{t}$ and variance $\Sigma$.
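This closed form for the moment generating function can be sanity-checked by simulation: a Monte Carlo estimate of $E\!\left[e^{\mathbf{t}'\mathbf{x}}\right]$ should match $e^{\mathbf{t}'\boldsymbol{\mu} + \mathbf{t}'\Sigma\mathbf{t}/2}$. A minimal sketch, assuming NumPy is available and using hypothetical parameter values:

```python
import numpy as np

# Hypothetical 2-dimensional example (illustrative parameters, not from the paper).
mu = np.array([0.1, -0.2])
Sigma = np.array([[0.30, 0.10],
                  [0.10, 0.20]])   # symmetric positive-definite
t = np.array([0.5, 1.0])

# Closed form: M_x(t) = exp(t'mu + t'Sigma t / 2)
mgf_closed = np.exp(t @ mu + t @ Sigma @ t / 2)

# Monte Carlo estimate of E[exp(t'x)] with x ~ N(mu, Sigma)
rng = np.random.default_rng(0)
x = rng.multivariate_normal(mu, Sigma, size=400_000)
mgf_mc = np.exp(x @ t).mean()

print(mgf_closed, mgf_mc)   # the two values should agree closely
```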
So the moment generating function of the normal multivariate $\mathbf{x} \sim \mathrm{N}(\boldsymbol{\mu}, \Sigma)$ is $M_{\mathbf{x}}(\mathbf{t}) = e^{\mathbf{t}'\boldsymbol{\mu} + \mathbf{t}'\Sigma\mathbf{t}/2}$. As a check:4

$$\frac{\partial M_{\mathbf{x}}(\mathbf{t})}{\partial \mathbf{t}} = (\boldsymbol{\mu} + \Sigma\mathbf{t})\, e^{\mathbf{t}'\boldsymbol{\mu} + \mathbf{t}'\Sigma\mathbf{t}/2} \quad\Rightarrow\quad \frac{\partial M_{\mathbf{x}}(\mathbf{t})}{\partial \mathbf{t}}\Bigg|_{\mathbf{t}=\mathbf{0}} = \boldsymbol{\mu} = E[\mathbf{x}]$$

$$\frac{\partial^2 M_{\mathbf{x}}(\mathbf{t})}{\partial \mathbf{t}\,\partial \mathbf{t}'} = \frac{\partial}{\partial \mathbf{t}'}\left\{(\boldsymbol{\mu} + \Sigma\mathbf{t})\, e^{\mathbf{t}'\boldsymbol{\mu} + \mathbf{t}'\Sigma\mathbf{t}/2}\right\} = \left[\Sigma + (\boldsymbol{\mu} + \Sigma\mathbf{t})(\boldsymbol{\mu} + \Sigma\mathbf{t})'\right] e^{\mathbf{t}'\boldsymbol{\mu} + \mathbf{t}'\Sigma\mathbf{t}/2}$$

$$\frac{\partial^2 M_{\mathbf{x}}(\mathbf{t})}{\partial \mathbf{t}\,\partial \mathbf{t}'}\Bigg|_{\mathbf{t}=\mathbf{0}} = \Sigma + \boldsymbol{\mu}\boldsymbol{\mu}' = E[\mathbf{x}\mathbf{x}']$$

Hence $\mathrm{Var}[\mathbf{x}] = E[\mathbf{x}\mathbf{x}'] - \boldsymbol{\mu}\boldsymbol{\mu}' = \Sigma$, as it should be.
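The same check can be run numerically: finite-difference derivatives of the closed-form moment generating function at $\mathbf{t} = \mathbf{0}$ should reproduce $E[\mathbf{x}] = \boldsymbol{\mu}$ and $E[\mathbf{x}\mathbf{x}'] = \Sigma + \boldsymbol{\mu}\boldsymbol{\mu}'$. A sketch with hypothetical parameters, assuming NumPy:

```python
import numpy as np

# Illustrative parameters (hypothetical, not from the paper).
mu = np.array([0.1, -0.2])
Sigma = np.array([[0.30, 0.10],
                  [0.10, 0.20]])

def mgf(t):
    """Normal MGF: M_x(t) = exp(t'mu + t'Sigma t / 2)."""
    return np.exp(t @ mu + t @ Sigma @ t / 2)

h = 1e-4
n = len(mu)
I = np.eye(n)

# Central-difference gradient at t = 0; should equal E[x] = mu
grad = np.array([(mgf(h * I[i]) - mgf(-h * I[i])) / (2 * h) for i in range(n)])

# Central-difference Hessian at t = 0; should equal E[xx'] = Sigma + mu mu'
hess = np.array([[(mgf(h * (I[i] + I[j])) - mgf(h * (I[i] - I[j]))
                   - mgf(h * (I[j] - I[i])) + mgf(-h * (I[i] + I[j])))
                  / (4 * h**2) for j in range(n)] for i in range(n)])

print(grad)   # approximately mu
print(hess)   # approximately Sigma + outer(mu, mu)
```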
Applying the moment generating function to the lognormal moments:

$$E[Y_j] = E\!\left[e^{X_j}\right] = E\!\left[e^{\mathbf{e}_j'\mathbf{x}}\right] = M_{\mathbf{x}}(\mathbf{e}_j) = e^{\mathbf{e}_j'\boldsymbol{\mu} + \mathbf{e}_j'\Sigma\mathbf{e}_j/2} = e^{\mu_j + \sigma_{jj}/2}$$

$$\begin{aligned}
E[Y_j Y_k] = E\!\left[e^{X_j + X_k}\right] = E\!\left[e^{(\mathbf{e}_j+\mathbf{e}_k)'\mathbf{x}}\right]
&= M_{\mathbf{x}}(\mathbf{e}_j + \mathbf{e}_k) \\
&= e^{(\mathbf{e}_j+\mathbf{e}_k)'\boldsymbol{\mu} + (\mathbf{e}_j+\mathbf{e}_k)'\Sigma(\mathbf{e}_j+\mathbf{e}_k)/2} \\
&= e^{\mu_j + \mu_k + (\sigma_{jj} + \sigma_{jk} + \sigma_{kj} + \sigma_{kk})/2} \\
&= e^{\mu_j + \sigma_{jj}/2}\, e^{\mu_k + \sigma_{kk}/2}\, e^{(\sigma_{jk} + \sigma_{kj})/2} \\
&= e^{\mu_j + \sigma_{jj}/2}\, e^{\mu_k + \sigma_{kk}/2}\, e^{\sigma_{jk}} \\
&= E[Y_j]\, E[Y_k]\, e^{\sigma_{jk}}
\end{aligned}$$

The second-last line follows from the symmetry of $\Sigma$, i.e., $\sigma_{jk} = \sigma_{kj}$.
So, $\mathrm{Cov}[Y_j, Y_k] = E[Y_j Y_k] - E[Y_j]\, E[Y_k] = E[Y_j]\, E[Y_k]\left(e^{\sigma_{jk}} - 1\right)$, which is the multivariate equivalent of the well-known scalar formula $\mathrm{CV}^2\!\left[e^X\right] = e^{\sigma^2} - 1$. The whole variance matrix can be expressed as $\mathrm{Var}[\mathbf{y}] = E[\mathbf{y}]\, E[\mathbf{y}]' \circ \left(e^{\Sigma} - 1_{n\times n}\right)$, where ‘$\circ$’ represents elementwise multiplication (the Hadamard product) and $e^{\Sigma}$ is the elementwise exponential of $\Sigma$. Defining the diagonalization of a vector as

$$\mathrm{diag}\left(\mathbf{v}_{n\times 1}\right) = \begin{bmatrix} v_1 & 0 & \cdots & 0 \\ 0 & v_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & v_n \end{bmatrix},$$

we may express the variance in terms of the usual matrix multiplication as $\mathrm{Var}[\mathbf{y}] = \mathrm{diag}(E[\mathbf{y}])\left(e^{\Sigma} - 1_{n\times n}\right)\mathrm{diag}(E[\mathbf{y}])$. $\mathrm{Var}[\mathbf{y}]$ is positive-definite if and only if $e^{\Sigma} - 1_{n\times n}$ is positive-definite. Although beyond the scope of this paper, it can be proven5 that if $\Sigma$ is positive-definite, as stipulated above, then so too is $e^{\Sigma} - 1_{n\times n}$.6

4 The vector formulation of partial differentiation is explained in Judge [1988, Appendix A.17].
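A short numerical sketch (hypothetical parameters, assuming NumPy) confirms that the Hadamard and matrix-multiplication forms of the variance agree, and that the result is positive-definite for a positive-definite $\Sigma$:

```python
import numpy as np

# Illustrative parameters (hypothetical, not from the paper).
mu = np.array([0.1, -0.2, 0.3])
Sigma = np.array([[0.30, 0.10, 0.05],
                  [0.10, 0.20, 0.08],
                  [0.05, 0.08, 0.25]])   # symmetric positive-definite

# E[y_j] = exp(mu_j + sigma_jj / 2)
mean_y = np.exp(mu + np.diag(Sigma) / 2)

# Hadamard form: Var[y] = E[y]E[y]' o (e^Sigma - 1), elementwise exponential
var_hadamard = np.outer(mean_y, mean_y) * (np.exp(Sigma) - 1)

# Matrix form: Var[y] = diag(E[y]) (e^Sigma - 1) diag(E[y])
D = np.diag(mean_y)
var_matrix = D @ (np.exp(Sigma) - 1) @ D

print(np.allclose(var_hadamard, var_matrix))   # the two forms agree

# Positive-definiteness: all eigenvalues of Var[y] should be positive
eigs = np.linalg.eigvalsh(var_hadamard)
print(eigs.min() > 0)
```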
3. CONCLUSION
The mean and the variance of the lognormal multivariate are straightforward extensions of their
scalar equivalents. Simulating lognormal random outcomes is nothing more than exponentiating
simulated normal random multivariates. Therefore, one faced with the problem of modeling several
heavy-tailed random variables in a mean-variance framework may find an acceptable solution in the lognormal multivariate.
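As a closing illustration (hypothetical parameters, not from the paper, assuming NumPy), exponentiating simulated normal multivariates and comparing the sample moments against the formulas above:

```python
import numpy as np

# Illustrative parameters (hypothetical, not from the paper).
mu = np.array([0.0, 0.5])
Sigma = np.array([[0.40, 0.15],
                  [0.15, 0.25]])

# Simulate lognormal multivariates by exponentiating normal multivariates
rng = np.random.default_rng(7)
y = np.exp(rng.multivariate_normal(mu, Sigma, size=500_000))

# Theoretical moments: E[y] and Var[y] = E[y]E[y]' o (e^Sigma - 1)
mean_y = np.exp(mu + np.diag(Sigma) / 2)
var_y = np.outer(mean_y, mean_y) * (np.exp(Sigma) - 1)

print(y.mean(axis=0), mean_y)   # sample vs. theoretical means
print(np.cov(y.T), var_y)       # sample vs. theoretical covariance matrix
```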
REFERENCES
[1.] Halliwell, Leigh J., “Classifying the Tails of Loss Distributions,” CAS E-Forum, Spring 2013, Volume 2,
www.casact.org/pubs/forum/13spforumv2/Haliwell.pdf.
[2.] Johnson, Richard A., and Dean Wichern, Applied Multivariate Statistical Analysis (Third Edition), Englewood
Cliffs, NJ, Prentice Hall, 1992.
[3.] Judge, George G., Hill, R. C., et al., Introduction to the Theory and Practice of Econometrics (Second Edition), New
York, Wiley, 1988.
[4.] Klugman, Stuart A., et al., Loss Models: From Data to Decisions, New York, Wiley, 1998.
5 A proof involving Schur’s Product Theorem forms part of an unpublished paper by the author, “Complex Random