A Quantitative Framework to Assess the Risk-Reward Profile of Non-Equity Products, by Marcello Minenna (acknowledged as "the quant regulator" by Risk magazine and professor of advanced courses in the field of financial mathematics), provides a comprehensive guide to a new infrastructure for risk management, introducing original methods and measures to assess the risk profile of non-equity financial products.

The book sets out a three-pillar risk-based approach, founded on the careful analysis of the financial structure of any non-equity product. Three risk indicators (price unbundling and probabilistic scenarios; the degree of risk; and the recommended investment time horizon) reveal the material risks of various non-equity products. The book provides a detailed illustration of the analytical tools underlying these three indicators.

A Quantitative Framework to Assess the Risk-Reward Profile of Non-Equity Products offers a way for financial institutions, investors, structurers, regulators, issuers and academics to better assess, understand and describe the risks associated with non-equity products and make a meaningful comparison between them.

"This book fills the gap that exists between the risk management tools available to industry insiders, and those available to investors. It is a welcome contribution that will be helpful to anyone who needs to assess the risk of non-equity products."
JAKSA CVITANIC, Professor of Mathematical Finance, Caltech

"Rigor and clarity characterize this methodology to assess the risk of every non-equity product. Well established stochastic techniques are applied in an original way to convey the key information on the time horizon, the degree of risk, the costs and potential returns of the investment and therefore to match the investors' preferences in terms of liquidity attitude, risk taking, desired returns and acceptable losses."
SVETLOZAR RACHEV, Department of Statistics and Applied Probability, University of California at Santa Barbara

By Marcello Minenna
Contributors: Giovanna Maria Boi, Paolo Verzella, Antonio Russo, Mario Romeo and Diego Monorchio
A Quantitative Framework
to Assess the Risk-Reward Profile
of Non-Equity Products
A Quantitative Framework
to Assess the Risk-Reward Profile
of Non-Equity Products
Marcello Minenna
Haymarket House
28-29 Haymarket
London SW1Y 4RX
Tel: + 44 (0)20 7484 9700
Fax: + 44 (0)20 7484 9797
E-mail: books@incisivemedia.com
Sites: www.riskbooks.com
www.incisivemedia.com
Conditions of sale
All rights reserved. No part of this publication may be reproduced in any material form whether
by photocopying or storing in any medium by electronic means whether or not transiently
or incidentally to some other use for this publication without the prior written consent of
the copyright owner except in accordance with the provisions of the Copyright, Designs and
Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency
Limited of 90, Tottenham Court Road, London W1P 0LP.
Warning: the doing of any unauthorised act in relation to this work may result in both civil
and criminal liability.
Every effort has been made to ensure the accuracy of the text at the time of publication; this
includes efforts to contact each author to ensure that the details given for them at publication
are correct. However, no responsibility for loss occasioned to any person acting or refraining
from acting as a result of the material contained in this publication will be accepted by the
copyright owner, the editor, the authors or Incisive Media.
Many of the product names contained in this publication are registered trade marks, and Risk
Books has made every effort to print them with the capitalisation and punctuation used by the
trademark owner. For reasons of textual clarity, it is not our house style to use symbols such
as TM, ®, etc. However, the absence of such symbols should not be taken to indicate absence
of trademark protection; anyone wishing to use product names in the public domain should
first clear such use with the product owner.
While best efforts have been intended for the preparation of this book, neither the publisher,
the editor nor any of the potentially implicitly affiliated organisations accept responsibility
for any errors, mistakes and/or omissions it may contain or for any losses howsoever arising
from or in reliance upon its information, meanings and interpretations by any parties.
To my daughter, Michela
Contents
About the Author xi
Foreword xv
Preface xix
Acknowledgements xxv
1 Introduction 1
6 Conclusions 299
Index 307
List of Contributors
Foreword
Preface
bonds for example, a bond plus a call option but does not include
vanilla bonds. Nonetheless, these instruments involve the packaging
of two distinct sources of risk: interest rate risk and risk of default.
The latter is pure risk: the risk of loss. This highlights the difference
in the quality of information disclosed to investors, and encourages
issuers to prefer these instruments without adequately compensat-
ing the buyer for the risk taken. The recent growth in the offering
of subordinated bonds (that is, a plain vanilla bond with a relevant
source of credit risk) illustrates the flaws in the status quo.
As this approach is developed further, partly as a result of some
regulatory choices, the sector will converge towards a new phase.
There are two alternatives; each is different, with distinct impli-
cations for the protection of investors and competition between
financial institutions.
One possibility is that regulators decide to put constraints on the
structure of products or even prohibit the sale of certain instruments
simply because they have the wrong label. For instance, a subor-
dinated or inflation-linked product might be automatically classi-
fied as very risky and complex and deemed unsuitable for retail
investors. This type of approach appears to be gaining ground and
typically involves banning the sale of certain products to specific
groups of customers.
This would be detrimental to the whole market. Issuers would see
important channels of funding closed down, while investors who
want higher coupons and exposure to inflation, for instance, would
not be able to find a product to suit their needs.
Many have argued that retail customers do not read or under-
stand the information provided within a prospectus, and say further
revising the transparency rules would not help enhance investor
protection. But few have questioned why they have not engaged
with the prospectus and no-one has considered whether the
information provided corresponds with what the investor actually
wants. In fact, an alternative to the prohibitionist approach is to
enhance transparency and create greater synergy with the rules
on conduct.
This alternative approach would require transparency that is
focused on the financial structure of the product and the provision
of information that is genuinely critical to the investor. This can be
achieved using quantitative methods.
1 For more details see "Performance anxiety", Risk Magazine, February 2011, pp 46-49 (available
at http://www.risk.net/risk-magazine/feature/1939516/academics-changes-performance-
scenario-rules) and "The what-if scenario", Structured Products, January 2011, pp 38-39
(available at http://www.risk.net/structured-products/feature/1935182/disclosure-regime
-leads-discord-italy).
2 This preface is an excerpt, with minor adaptations, from the opinion piece of the author, titled
"Enter the quant regulator" and published by Risk Magazine, May 2011, pp 46-49 (available
at http://www.risk.net/risk-magazine/opinion/2074539/enter-quant-regulator).
Acknowledgements
List of Figures
List of Tables
Introduction
and, finally, considers the limits of costs and the target performances
sought in the non-equity product.
The liquidity attitude identifies the maximum holding period for
which the investor is willing to give up their cash to buy a financial
product. This attitude also signals the implicit expectation that the
investor will liquidate the product after their holding period under
economically efficient conditions (ie, with a profit) or, in any case,
having at least recovered all costs incurred.
The risk appetite represents the threshold of tolerance of the
investor to the variability of the results that can be obtained from
the non-equity product, and thus the losses they are willing to bear.
From this perspective a key role is played by an integrated assess-
ment and representation of all the risk factors of the product and the
particular ways in which these are assembled within its financial
structure.
The limits of costs indicate the maximum amount that the investor
is willing to pay in expectation of a certain performance over this
period. In this sense, a clear recognition of all the expenses that are
incurred for a non-equity investment and the knowledge of the final
payoffs that a product can offer in relation to the different possible
evolutions of the underlying risks are essential.
The decision variables described above clearly suggest the nature
of the essential information for the investors. From this perspec-
tive, the risk-based approach sets out an objective methodology
to determine and represent three synthetic risk indicators (the so-
called "three pillars"), all calculated using probabilistic tools, which,
in a clear, meaningful and internally consistent way, meet the infor-
mation needs that emerge when we are interested in comparing and
choosing among the various non-equity products:
the non-equity product the core information about its value consid-
ered at two specific points in time: the issue date and the end of the
recommended investment time horizon, respectively.
The financial investment table decomposes the issue price (so-
called unbundling) to highlight the relative contribution of the fair
value and of the total costs applied, hence providing a first useful
indication about the fairness of the risk-return profile of any non-
equity product. Where the product is quite elementary, this table,
possibly assisted by some deterministic return indicator, offers
a good synthesis of the costs paid and of the payoffs attainable at
the end of the recommended time horizon. On the other hand, non-
elementary products are typically characterised by non-linear payoff
structures or by hidden risk exposures which require further infor-
mation in order to allow a better understanding of how the under-
lying risks can concretely affect the final values of the investment. In
this perspective, inside the financial investment table, richer infor-
mation is provided by applying the portfolio replication principle
to split the fair value into its risk-free and risky components so that
the investors see how much of the product is similar to a risk-free
security and how much is instead the value of the bet inherent in the
risks of the investment.
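Schematically, the two decompositions just described can be written as follows (an illustrative restatement of the text, not the book's own notation):

$$\text{issue price}\;(=100) = \text{fair value} + \text{total costs}$$
$$\text{fair value} = \text{risk-free component} + \text{risky component}$$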
The table of probabilistic scenarios supplements the financial
investment table by recovering some key risk information connected
to the peculiar shape taken by the risk-neutral density of the final val-
ues of the investment. In line with the principle of reduction in gran-
ularity, this density is partitioned into a few macro-events that are of
concrete interest for the average investor as they capture the perfor-
mance risk of a product at the end of the recommended time horizon.
This risk represents the product's ability to create added value for the
investor both in absolute terms (ie, compared with the issue price),
and in relative terms (ie, compared with the results achievable by
investing the same amount of money in a risk-free asset). The parti-
tion technique used for this comparison is the superimposition of the
density of the risk-free asset to the one of the non-equity product. The
final output is a table containing the probabilities of four alternative
scenarios: negative performances, or positive performances lower
than, in line with or higher than those of the risk-free numraire.
Attached to the probabilities there is a synthetic indicator of the
final value of the investment conditional on the occurrence of each
process, the investor makes their choice among the products admis-
sible in terms of the first two indicators by reading the tables report-
ing the costs, the fair value and the probabilistic scenarios of each
investment alternative.
In order to reach retail investors, these fundamental indicators
could be included in the prospectus by providing a brief prod-
uct information sheet subject to compulsory delivery. After a quick
overview of the product, this brief document (of two pages, maxi-
mum) would highlight the material risks of the non-equity invest-
ment by displaying the three pillars determined by the issuer who,
as the person who built that particular financial structure, has the
best knowledge of it.
In this way, issuers would produce the synthetic indicators needed
to comply with regulatory provisions by using the same internal
models developed for their pricing and risk management activities;
in fact, the risk-based approach just defines a few basic methodolog-
ical requirements (already used in the praxis of financial markets)
without imposing any particular model to be adopted.
Such a framework would reconcile two major goals:
the goal of always making the three pillars available to any possible
stakeholder, the properties of conciseness and immediate compre-
hensibility make them the best candidates for becoming the stan-
dard information set that identifies any product from the main data
providers.
With regard to the second goal, the filing of a product information
sheet, containing the three pillars, to the regulator would support the
efficacy of the initiatives undertaken by supervisors in order to guar-
antee the consistency and comprehensibility of the information con-
cerning the material risks of the product and also in order to verify
whether, at the points of sale, this information is taken into account
by distributors that have to assess the adherence of an investment
solution to the needs of their clients. In fact, each product would
be qualified by a set of quantitative metrics that react to changing
market conditions and move consistently in relation to each other
according to a precise scheme arising from the peculiar structure
of the product itself. It follows that, with only the knowledge of
the product's structure and of the market data publicly available at
the issue date, regulators could objectively verify the internal coher-
ence of the information produced by the issuer through the three
pillars, ultimately leading to the early detection of incorrect or mis-
leading representations, hence giving an important preventive role
to transparency supervision.
The risk-based approach can also be used for an effective assess-
ment of the risks and costs associated with financial liabilities, whose
structures are, in fact, completely uniform and symmetrical to those
of investment products, and, like them, typically carry a specific risk
exposure or lead to revision of a pre-existing exposure, through the
additional inclusion of derivative components.
The book consists of six chapters. Chapters 2-4 are devoted
to a rigorous theoretical derivation of the risk-based quantita-
tive approach, while Chapter 5 illustrates its application to some
non-equity products and Chapter 6 presents conclusions.
Chapter 2 describes the first pillar. The focus is on the value em-
bedded in the product: at the issue date, through the unbundling of
the issue price and at maturity, through an innovative and robust
methodology to build probabilistic scenarios that summarise the
salient features of the variability characterising the potential per-
formance of the investment. Different techniques are presented
1 If the product to be analysed is intended to replace a pre-existing investment (as in the case
of exchange public offerings or when an investor is willing to substitute a specific non-equity
product that they are holding), the table of probabilistic scenarios is determined according to
a different methodology based on the pointwise comparison (so-called "trajectory by
trajectory") among the final values of the two products involved in a possible exchange. A similar
methodology can easily be used to assess the opportunity to enter into a structured financial
liability or to modify the cashflows of an existing liability by combining it with a non-equity
solution like a derivative contract.
if the price paid by the investor is set at a value of 100, this table
illustrates the relative contributions of the fair value and of all costs
applied.
Should the price and the fair value coincide, then the reward given
to the investor to compensate for the product risks is in line with
that expressed (even in an implicit way) by the market. Otherwise, a
price higher than the fair value would signal that a part of the reward
that the market expects in order to take those risks is withheld by
the issuer as compensation for their expenses or, more generally, as
their profit.
The unbundling of the price into the two above-mentioned com-
ponents therefore offers important information on the costs of the
product or, equivalently, on the fairness of its risk-return profile.
However, it does not allow for a full understanding of the important
elements of the financial structure of many products.
Unlike shares, many non-equity products (especially those pursu-
ing a target return) exhibit non-elementary financial structures, typ-
ically due to the presence of one or more derivative components that
trigger significant exposures to market and credit risks. Moreover,
the specific way in which these components are combined to obtain a
unique product is often counter-intuitive, if not completely hidden,
so that it becomes quite difficult for retail investors to understand
how their risks can concretely affect the payoff of the investment.
These aspects suggest supplementing the financial investment
table with a richer informative set. This can be achieved both by
increasing the level of detail of this table and by completing it
with suitable indicators of the degree of uncertainty that affects the
potential performances of the product.
With regard to the first task, attention must be paid to identify-
ing information which could really meet investors' needs. From this
perspective it is evident that it would be unnecessary (or even con-
fusing) for the average investor to receive a technical representation
of all of the different components which combine to determine the
fair value and the possible payoffs of a given product. Instead, as
explained in more detail in Section 2.3.1, a more reasonable alter-
native is to provide a breakdown of the fair value into its risk-free
component and its risky component. Such a representation can be
easily obtained from an application of the portfolio replication prin-
ciple, and it has the advantage of indicating to the investor how
any cost item (whose amount is often random) applied during the
life of the product.
Simulations must also suitably deal with products including path-
dependent features which can trigger an early redemption (like
callable or puttable bonds, American or Bermudan options (Hull
2009)), the coupon size or existence, or the switch to another pay-
off structure (eg, flipping the coupons from fixed to floating or vice
versa).
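As an illustration of how a simulation might treat an early-redemption feature of this kind, the sketch below handles a callable bond; the call dates, the call trigger and all names are hypothetical, introduced here only for illustration and not taken from the book:

    import numpy as np

    def callable_final_values(paths, rates, call_dates, call_price, dt):
        # paths: simulated product values, shape (n_paths, n_steps)
        # rates: simulated instantaneous risk-free rates, same shape
        # For each trajectory, redeem at the first call date on which the
        # (hypothetical) call condition is met; the proceeds are compounded
        # to maturity at the simulated risk-free rates, so that every final
        # value refers to the same date T.
        final_values = paths[:, -1].copy()        # default: value at maturity
        for i in range(paths.shape[0]):
            for k in call_dates:                  # time-step indexes of call dates
                if paths[i, k] >= call_price:     # hypothetical call trigger
                    compounding = np.exp(np.sum(rates[i, k:]) * dt)
                    final_values[i] = call_price * compounding
                    break
        return final_values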
In addition, when the product combines, even in a synthetic way,
two or more components, it may be necessary to first determine
separately the trajectories of the different components and then to
put them together by paying attention to the sign and the intensity
of their correlations and to the manner by which they conform to
the payoff structure of the investment.
Very simple examples of products resulting from the (synthetic)
packaging of different components are plain-vanilla bonds exposed
to the credit risk of a given reference entity. Indeed, as explained
in more detail in Section 2.3.1, they can be replicated by combin-
ing a risk-free bond with a short position in a credit derivative on
the same reference entity, usually coinciding with the issuer of the
risky bond.
The risk-neutral density of the final value ST of a defaultable bond
can be obtained from a Monte Carlo simulation of the interest rate
term structure and of the random variable representing the default
time of the issuer. The models underlying the simulation of the inter-
est rates and the credit event dynamics need to be properly calibrated
in order to reflect the market conditions at the time of issue.
In the simple case of defaultable plain-vanilla bonds, the density
function of ST exhibits two modes: the first represents the trajectories
in which the default events occur and, therefore, this mode falls in the
region corresponding to negative returns and is close to the recovery
value; the second mode represents the trajectories not affected by a
credit event, and consequently it lies in the area of positive returns
with a placement that depends on the specific coupon structure of the
bond and, hence, on the level of the spread, if any, over the risk-free
return, paid to investors to compensate for the credit risk exposure.
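A stylised version of this simulation can be sketched as follows; the flat hazard rate, the Vasicek short rate and every parameter value below are simplifying assumptions introduced for illustration, not the calibration prescribed by the text:

    import numpy as np

    rng = np.random.default_rng(0)
    n_paths, n_steps, T = 50_000, 60, 5.0
    dt = T / n_steps

    # short rate: Vasicek dynamics dr = a(b - r) dt + s dW (illustrative values)
    a, b, s = 0.3, 0.03, 0.01
    r = np.full(n_paths, 0.02)
    money_market = np.ones(n_paths)     # path-by-path compounding factor B_T
    account = np.zeros(n_paths)         # coupons reinvested at the short rate
    coupon, notional = 3.5, 100.0
    steps_per_year = n_steps // 5
    for k in range(n_steps):
        money_market *= np.exp(r * dt)
        account *= np.exp(r * dt)
        if (k + 1) % steps_per_year == 0:
            account += coupon           # annual coupon payment
        r += a * (b - r) * dt + s * np.sqrt(dt) * rng.standard_normal(n_paths)

    # credit event: exponential default time with a flat hazard rate
    # (default-timing effects on accrued coupons are ignored for brevity)
    hazard, recovery = 0.008, 40.0
    default_time = rng.exponential(1.0 / hazard, n_paths)
    survived = default_time > T
    S_T = np.where(survived, notional + account, recovery)

    # a histogram of S_T shows the two modes described above: one near the
    # recovery value (defaults) and one in the region of positive returns
    print("simulated default probability:", 1 - survived.mean())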
The procedure described above for defaultable bonds can easily
be extended to simulate the dynamics of whatever non-equity prod-
uct exposed to the credit risk of one or more reference entities (eg,
where
is taken under the risk-neutral measure, that is, without making any
assumption on investors' preferences.
Equation 2.1 also confirms that the two ingredients for obtaining
S0 are the risk-neutral density of the product's final values (ie, ST)
and that of the final values of an investment in the risk-free asset
(ie, BT ).
The first density can be obtained through Monte Carlo simulations
of the key variables of the non-equity product, after having suitably
modelled them. The risk-neutral density of BT is determined in a
similar way, provided that the parameters and the variables shared
by the two simulations are assigned the same values.
Once the fair value is known, the unbundling of the full initial
price of the non-equity product is straightforward. In particular, the
difference between price (set equal to 100) and fair value (expressed
in percentage terms) represents the whole percentage costs of the
investment. Apart from explicit up-front charges (expressed in per-
centage terms), this quantity is the discounted expected value of the
total costs that will be borne by investors over the time horizon of
the product under the risk-neutral measure.
This representation includes all costs regardless of the time and
the way (either explicit or implicit) in which they are charged.
The aggregation of all costs into a single item of the financial investment
table enhances comparability of products by offering a streamlined indi-
cation of how expensive they are if retained until the end of the
recommended time horizon.
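Continuing the simulation sketch above, the unbundling step then reduces to averaging the discounted final values and subtracting the result from the issue price (S_T and money_market are the arrays from the previous sketch; the numbers produced are of course illustrative):

    # fair value: discounted risk-neutral expectation of S_T, each trajectory
    # discounted along its own simulated path of the risk-free rate
    fair_value = np.mean(S_T / money_market)

    issue_price = 100.0
    total_costs = issue_price - fair_value        # whole percentage costs
    print(f"fair value = {fair_value:.2f}, implied total costs = {total_costs:.2f}")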
It is worth observing that it is only for return-target products that
the fair value is uniquely determined by calculating Equation 2.1 at
a certain time T: their payoffs depend, often non-linearly, on some
underlying assets or reference values (and, hence, on the underly-
ing risks) according to specific algorithms ceasing at T. This means
that at time T only (which, as explained in Chapter 4, identifies the
implicit time horizon of the investment) the definition of the payoff
structure is completed by a proper boundary condition, which gives
to the risk-neutral density of ST a precise shape which is peculiar to
that maturity.
Illiquid return-target products are sometimes assisted by some
services of liquidability enhancement aimed at making the invest-
ment more appealing and allowing earlier redemption under par-
tially secure conditions. These services can be provided either by a
behind the results attainable over the period spanned by their time
horizon.
In the absence of a suitable informative set, non-professional
investors tend not to care about the volatility of financial variables
underlying the products and their assessments typically rely on
the implicit assumption that spot values of these variables will not
change over the life of the investment.
Moreover, the non-linear way in which many derivatives are
nested in the engineering of the product can be very sophisticated
and counter-intuitive, hence they merit stronger disclosure tools to
prevent investors from taking their decision on the basis of biased
beliefs.
Products embedding implicit derivatives can be even more insid-
ious since their influence on the riskiness and the opportunities of
the investment is hard for investors to appreciate correctly. A
meaningful example is once again plain-vanilla bonds with signifi-
cant credit risk exposures. The simplicity of their payoff structure
attracts investors who commonly sort the different alternatives just
by looking at the size of the coupons. This typical investor behaviour,
however, has the drawback of implicitly assuming that the
coupon size is proportional to the material credit risk of the security
(which is not necessarily true) or, even worse, of completely disre-
garding this risk. As explained in Section 2.1, despite their appar-
ent simplicity, the risk-neutral density function of these bonds takes
a bimodal shape reflecting how the likelihood of the credit event
strongly affects either the probability or the severity of losses or both.
Information conveyed by the second pillar of the risk-based
approach (namely, the degree of risk; Chapter 3) is a useful warn-
ing of the overall riskiness of non-elementary products, and the
financial investment table aids understanding of how expensive
they are. However, in order to support enlightened investment deci-
sions, these products require supplementary information highlight-
ing the contribution given to the fair value by the risk-free compo-
nent and the risky component, respectively, and how the various
risks involved can affect the final values achievable by investors.
Inside the first pillar, a further analysis of the risk-neutral density
of ST satisfies the first requirement by increasing the informative
detail of the financial investment table, and the second by using a
synthetic table of probabilistic scenarios.
$$\mathrm{swap}_T = \mathrm{swap}_T^{+} - \mathrm{swap}_T^{-} \qquad (2.8)$$

where

$$\mathrm{swap}_T^{+} = \begin{cases} \mathrm{swap}_T & \text{if } \mathrm{swap}_T > 0 \\ 0 & \text{if } \mathrm{swap}_T \leq 0 \end{cases} \qquad (2.9)$$

and

$$\mathrm{swap}_T^{-} = \begin{cases} 0 & \text{if } \mathrm{swap}_T > 0 \\ -\mathrm{swap}_T & \text{if } \mathrm{swap}_T \leq 0 \end{cases} \qquad (2.10)$$
Figure 2.2 Densities of the same non-equity product obtained with two
different models (legend: Hull-White; Cox-Ingersoll-Ross)
The above arguments indicate that the raw disclosure of the risk-
neutral density is not a viable way to fill the above-mentioned
information gap, since the richness and flexibility of information
connected with such density would come at the cost of an increased
complexity of comprehension for the average investor.
What is really needed is a method able to efficiently exploit the
information embedded in the risk-neutral density and to smooth the
differences between densities generated by different models.
The solution envisaged by the first pillar is the so-called table of
probabilistic scenarios (hereafter also referred to as the probability
table), which summarises the salient features of the variability char-
acterising the potential performance of the investment by partition-
ing the density into a few mutually exclusive macro-events which
are of concrete interest for the investor.
In general terms, this solution relies on the well-known princi-
ple of reduction in granularity, which turns out to be very helpful
in diminishing the relevance of the model risk. Returning to the
example of Figure 2.2, an elementary partition of the two densities
displayed there in two complementary events gives very similar
probabilities. For instance, by defining the two events with respect
to the issue price of 100, ie, "the final value of the investment is lower
than the issue price" and "the final value of the investment is higher
than the issue price", the probabilities of these events are practically
the same under both models, namely 26.8% for the first event and
73.2% for the second one.
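The stabilising effect of a coarse partition is easy to reproduce on simulated data. In the sketch below, the two "models" are simple lognormal stand-ins with slightly different parameters, chosen only to mimic densities that differ in shape while agreeing on a two-event partition; none of these numbers comes from the book's examples:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000
    # two hypothetical models for the final value of the same product
    st_model_a = 100 * np.exp(rng.normal(0.08, 0.25, n))
    st_model_b = 100 * np.exp(rng.normal(0.075, 0.27, n))

    for name, st in (("model A", st_model_a), ("model B", st_model_b)):
        p = (st < 100).mean()                 # P(final value < issue price)
        print(f"{name}: P(lower) = {p:.3f}, P(higher) = {1 - p:.3f}")
    # the fine-grained densities differ, but the coarse probabilities are close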
Clearly, by choosing a different partition, the probabilities of a
given event under the two models could differ. And, in general
terms, the more irregular the financial structure of the product the
more likely it is that the same partition applied to densities obtained
from different models could exhibit some differences. But, typi-
cally they will not be significant, especially when the interest is in
capturing the key message conveyed by a specific partition.
From a technical point of view, there are infinitely many ways
to aggregate the elementary events of a probability distribution in
macro-events which are mutually exclusive; and, also, in principle,
any individual might want to know in depth specific subsets of the
risk-neutral density. However, the need to endow all investors
with the same core information and to preserve the comparability
across products requires consideration of the same macro-events for
all non-elementary return-target products.
which reveals that the unique source of uncertainty for the risk-free
asset is the movement in the interest rates curve.6 Hence, the risk-
neutral density of the final values of an initial investment of 100
in the risk-free asset reproduces exactly the impact of interest rate
volatility on the returns of a financial investment, ensuring that the
comparison with the non-equity product highlights the influence of
the specific features that characterise such a product.
The partition technique used to make this comparison is the super-
imposition of the two densities. According to this technique, the
risk-neutral density of the product is partitioned with respect to
fixed thresholds which are exogenously identified depending on the
point of zero return (ie, the final value of the investment is equal to
100) and on the risk-neutral density of the risk-free asset. This allows
the information connected with the second moment of the product's
probability distribution to be highlighted, and, hence, the implica-
tions of the volatility of its returns on the payoffs that investors can
face at the end of the investment time horizon to be understood.
The full methodology adopted to determine the probability table
is explained in Section 2.3.3. The final output is a table which displays
the probabilities of four alternative macro-events and a synthetic
indicator of the final value of the investment associated with each of
them. For some products, depending on the specific shape of their
risk-neutral density function, this information set is supplemented
by further indicators in order to guarantee investors' full
comprehension and an honest comparison with the risk-free asset.
Regardless of these technical aspects, it is clear that by prop-
erly exploiting the principle of reduction in granularity, the table
of probabilistic scenarios attains the goal of reducing the amount
of available data to be handled by the investor while preserving, at
the same time, the core of the additional information contained in
the risk-neutral probability density with respect to the fair value,
so that investors achieve a much better awareness of the overall
performance risk behind a non-elementary product.
In fact, although in general a higher fair value corresponds to a
greater profitability of the investment, this indication is only true
on average as stated above. On the other hand, the probability table
has the advantage of coming from the same risk-neutral density
used to determine the fair value and of preserving, at the same time,
the fundamental information related to the specific shape of this
distribution without being excessively exposed to the model risk.
In this way, fair values and time horizons being equal, it becomes
possible to understand the effect that the investment risks have on
investment performances, hence identifying, case by case, a safer
product (similar in substance to the risk-free asset) or, conversely, a
particularly daring product which, for example, combines a high
probability of obtaining results more desirable than the risk-free
asset with a non-negligible probability of receiving returns that are
positive but not competitive or even negative returns.
It is worth pointing out that, for products representing a direct
financial investment, the probabilistic scenarios do not allow for a
[Figure: density of the risk-free asset.]
[Figure 2.6: density of the product superimposed on that of the risk-free asset. Legend: Risk-free asset; Final value lower than the issue price; Final value lower than that of the risk-free asset; Final value in line with that of the risk-free asset; Final value higher than that of the risk-free asset.]
• the final value of the investment is lower than the issue price
(for short: "lower than the issue price");
• the final value of the investment is higher than the issue price
but lower than that of the risk-free asset (for short: "lower than
the final value of the risk-free asset");
• the final value of the investment is higher than the issue price
and in line with that of the risk-free asset (for short: "in line with
the final value of the risk-free asset");
• the final value of the investment is higher than the issue price
and higher than that of the risk-free asset (for short: "higher
than the final value of the risk-free asset").
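Given trajectory samples of the product's final values (st) and of the risk-free asset's final values (bt), the four probabilities and the conditional means attached to them could be computed along the following lines. The band used for the "in line" event is delimited here by two quantiles of the risk-free density; this is a simplifying stand-in for the exact threshold construction of Section 2.3.3:

    import numpy as np

    def probability_table(st, bt, issue_price=100.0):
        # thresholds fixed exogenously from the risk-free density
        lo, hi = np.quantile(bt, [0.25, 0.75])    # hypothetical band choice
        events = [
            ("lower than the issue price",       st < issue_price),
            ("lower than the risk-free asset",   (st >= issue_price) & (st < lo)),
            ("in line with the risk-free asset", (st >= issue_price) & (st >= lo) & (st <= hi)),
            ("higher than the risk-free asset",  (st >= issue_price) & (st > hi)),
        ]
        for name, mask in events:
            mean_value = st[mask].mean() if mask.any() else float("nan")
            print(f"{name}: {100 * mask.mean():5.1f}%   mean = {mean_value:7.2f}")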
[Table: probabilistic scenarios; columns: Scenarios, Probabilities (%).]

Table 2.2 Table of probabilistic scenarios for the product of Figure 2.6
[columns: Scenarios, Probabilities (%)]
than the final value of the risk-free asset provide a first indication
of the dispersion of the non-equity product at time T.
However, this indication cannot be deemed exhaustive, for two
reasons.
Firstly, the product's density is partially affected by the same
source of variability as the risk-free asset. This comes from the fact
that, by working under the risk-neutral measure, the volatility of
the interest rates also affects the behaviour of the stochastic process
of the non-equity product. Such influence will be more or less rel-
evant depending on the specific features of the product considered
(eg, a plain-vanilla bond will typically undergo this influence more
than a certificate whose underlying asset is a share or a basket of
shares). But in any case, because of the risk-neutrality, there remains an
implicit correlation between the two densities involved in the super-
imposition, and this correlation entails that the probabilities of the
macro-events "lower than the final value of the risk-free asset" and
"higher than the final value of the risk-free asset" are not sufficient to
obtain a proper disclosure of the variability of the final performances
of the non-equity investment.
Secondly, a table that contains only the probabilities of the four
scenarios is necessarily blind to the way in which elementary events
are aggregated within each single macro-event of the partition. In
other words, the four probabilities alone would not provide an effec-
tive synthesis of the specific probability density of a non-equity prod-
uct because they ignore the behaviour of its final values over each
of the four intervals identified on the x-axis by the fixed reference
thresholds.
To better understand this point, Figure 2.7 shows the density of
the same product considered so far ("old"), which has a fair value
of 88.87, together with the density of another non-equity product
("new"), whose fair value is 85.03.
From the figure it is easy to see that the risk-return profiles of
the two products are quite different. Nevertheless, applying the ref-
erence thresholds (ie, the point of zero return and the two thresholds
derived from the risk-free density), these two investment alternatives
display the same probabilities for each scenario.
This example highlights the fact that, in order to provide a suit-
able synthesis of the performance risk of a non-equity product and
to ensure a fair comparison across the different products available
[Figure 2.7: densities of the "new" product and the "old" product.]
Table 2.4 Table of probabilistic scenarios with the conditional means for
the "old" product [columns: Scenarios, Probabilities (%), Mean values]

Table 2.5 Table of probabilistic scenarios with the conditional means for
the "new" product [columns: Scenarios, Probabilities (%), Mean values]
For instance, coming back to the two products whose densities are
shown in Figure 2.7, the corresponding probability tables are given
respectively in Tables 2.4 and 2.5.
This example clearly explains that the further information on the
mean values gives an interesting insight into the peculiarities of each
of the two products that cannot be appreciated otherwise. In fact,
the conditional mean values of the first product outperform those
of the second one in three out of four scenarios; this indication and
the limited weight (a probability of only 4.2%) of the fourth scenario
(where the second product prevails) allow for an immediate and
meaningful comparison of the two investment alternatives.
It is worth stressing that the data contained in the table of proba-
bilistic scenarios have a strong relationship with the fair value of the
non-equity product reported inside the financial investment table.
In fact, the fair value being merely the discounted expected value
of the risk-neutral density of ST , it follows that by discounting back
(along the deterministic interest rates curve observed on the market)
to the issue date the mean of each scenario and then averaging these
discounted values by using the probabilities in the table, the result
is a rough approximation of the fair value of the non-equity prod-
uct. Hence, if such a proxy is too different from the fair value given
inside the financial investment table, this gap signals that there is an
inconsistency in the overall information provided by the first pillar
which deserves to be carefully investigated.10
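In symbols, writing $p_i$ and $m_i$ for the probability and the conditional mean of scenario $i$, and $P(0,T)$ for the discount factor read off the deterministic market curve (notation introduced here only for illustration), the consistency check just described is

$$\text{fair value} \approx \sum_{i=1}^{4} p_i \, P(0,T) \, m_i$$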
This point emerges by considering again the two products in Fig-
ure 2.7. In fact, the fair value of the first product is 88.87 which is
very close to its proxy (ie, 88.83) obtained on the basis of the data
displayed in Table 2.4, while the fair value of the second product is
85.03, again very close to the proxy (85.12) obtained
from its probabilistic scenarios illustrated in Table 2.5.
In order to complete the explanation of the methodology under-
lying the probability table it is necessary to consider the choice of
partitioning the density of a product through the superimposition
technique, ie, with reference to fixed thresholds identified on the x-
axis. As mentioned in the previous section, this technique does not
make a pointwise comparison between the final values of the prod-
uct and those of the risk-free asset. Rather, it tells how frequently
the non-equity product performs in a range that is compatible with
the results of the risk-neutral numéraire (the scenario "in line with the
final value of the risk-free asset"), and how frequently the outcomes
of the non-equity investment fall in a range of values that are either
more or less appealing than those of the said numéraire (scenarios
"higher than the final value of the risk-free asset" and "lower than
the final value of the risk-free asset" respectively).
In order to better highlight this point, it is useful to consider the
macro-event "in line with the final value of the risk-free asset". By
applying the methodology described so far it could well happen
that the non-equity product would have most of its final values very
close to the lower threshold, implying that in most of the cases
the risk-free asset would perform better than the product; but, by
definition, this interesting information would not be captured by the
probability table, as it would report an unchanged probability for the
mentioned macro-event.
The example shows that the methodology adopted to build the
table can have a weakness in providing a proper comparison with
the risk-free asset, especially if the dispersions of the two target prob-
ability densities are comparable in magnitude. This condition can be
satisfied when the non-equity product is characterised by low intrin-
sic volatility and high correlation with the interest-rate volatility, as
in the cases of money market funds and fixed- and floating-coupon bonds. In
fact, in these cases, by construction the non-equity product would
be designed to perform in a range of values that is comparable to
the possible results of an investment in the risk-free asset, and so the
procedure of superimposition would tend to produce results with
zero or little probability in the extreme scenarios and a very high
probability mass in the scenario "in line with the final value of the
risk-free asset", and the mean value alone would not be enough to
allow an honest comparison with the risk-free alternative.
The same effect of loss of significance may be observed for very
long time horizons (such as in the case of long-dated bonds and
products with similar levels of risk and duration) as the volatility of
the instantaneous risk-free rate cumulates over time and the macro-
event "in line with the final value of the risk-free asset" tends to
incorporate most of the final outcomes of the non-equity product.
Figure 2.8 illustrates an example where a five-year non-equity
product has a density characterised by a degree of dispersion similar
to the one of the risk-neutral numéraire.
The corresponding probability table is given in Table 2.6.
[Figure 2.8: density of the five-year non-equity product compared with that of the risk-free asset.]
Table 2.6 Table of probabilistic scenarios for the product of Figure 2.8
with the conditional means [columns: Scenarios, Probabilities (%), Mean values]
Table 2.8 Table of probabilistic scenarios for the product of Figure 2.8
with the conditional means and the (unconditional) mean of the risk-free
asset [columns: Scenarios, Probabilities (%), Mean values (risk-free mean)]
new product the investor takes a long position on it and, at the same
time, by abandoning the old product, in a certain sense, they are
taking a short position on the original investment.
A typical case of these non-equity exchange structures is found in
connection with exchange public offerings, where an issuer proposes
to investors the substitution of a new product for a product bought
at a given past date, but sometimes such situations can also arise to
satisfy an autonomous decision of an investor willing to replace a
specific product held in their portfolio of financial assets.
In general terms, when the offer of an illiquid non-equity product
takes place in a context that already includes a well-identified
financial alternative, this alternative is clearly a much better candidate
than the risk-free asset to provide
investors with a proper disclosure of the performance risk associated
with the new proposed product.11 In fact, in the aforementioned sit-
uations it is natural that a complete assessment of the ability of the
new product to create added value for investors necessarily requires
a comparison with the final payoffs achievable in the case where the
investor would decide not to switch to the new product and continue
to hold the old one.
The existence of a concrete investment alternative (ie, maintain-
ing the old product) not only rules out the need for a probabilis-
tic comparison with the risk-neutral numraire, but it also explains
the abandonment of the superimposition technique in favour of a
methodology suited to making a direct comparison between the final
values of the new product and of the old one.
As mentioned in Sections 2.3.2 and 2.3.3, such a methodology is
the one which makes use of the trajectory-by-trajectory technique
to perform a pointwise comparison between the densities of two
alternative financial investments.
To understand the rationale behind this technique, it is first of
all worth recalling that, as mentioned in Section 2.1, both densi-
ties can be obtained by properly modelling and simulating via stan-
dard Monte Carlo approaches each product until the end of the rec-
ommended time horizon.12 Moreover, the use of the risk-neutral
measure Q implies that each trajectory of one of the two products
is unequivocally associated with a given trajectory of the instant-
aneous risk-free rate. The same obviously also holds for the other
product. Hence, by transitivity, all the
[Figure 2.9: densities of the product new1 and the old product.]

Figure 2.10 Density of the differences between the product new1 and
the old product
[Figure 2.11: densities of the product new2 and the old product.]

Figure 2.12 Density of the differences between the product new2 and
the old product
defined respectively as

$$E^{Q}(D_T \mid D_T < 0) = \frac{1}{Q(D_T < 0)} \int_{-\infty}^{0} x \, f_{D_T}(x) \, \mathrm{d}x \qquad (2.22)$$
$$E^{Q}(D_T \mid D_T \geq 0) = \frac{1}{Q(D_T \geq 0)} \int_{0}^{+\infty} x \, f_{D_T}(x) \, \mathrm{d}x$$
From an operational point of view, the conditional means expressed
in Equation 2.22 can also be obtained easily by processing the proba-
bility density of DT . At the end of these quantitative calculations, the
table of probabilistic scenarios can be filled by using the standard
template shown in Table 2.11.
For each scenario, the indication of the probability and conditional
mean pair provides essential information for investors to under-
stand whether the new product presents a more appealing risk-
return profile than the old one. Furthermore, through this informa-
tion set, the investor is effectively supported in the selection between
alternative exchange hypotheses.
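Operationally, once the trajectory-aligned final values of the two products are available, the probabilities and the conditional means of Equation 2.22 follow from elementary array operations (a sketch; the array names are hypothetical):

    import numpy as np

    def exchange_table(st_new, st_old):
        # trajectory-by-trajectory differences between the two products
        d = st_new - st_old
        neg, pos = d < 0, d >= 0
        print(f"P(D_T < 0)  = {100 * neg.mean():.1f}%,  "
              f"E[D_T | D_T < 0]  = {d[neg].mean():.2f}")
        print(f"P(D_T >= 0) = {100 * pos.mean():.1f}%,  "
              f"E[D_T | D_T >= 0] = {d[pos].mean():.2f}")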
Taking the two cases considered in Figures 2.9 and 2.11, respec-
tively, the insertion of the data about the conditional means helps the
identification of the more efficient solution. In fact, by comparing the
probability tables of the two alternatives (see Tables 2.12 and 2.13),
it is evident that the second exchange hypothesis outperforms the
first one.
Another key property of the table of probabilistic scenarios set up
according to the layout shown in Table 2.11 is its strong connec-
tion with the initial theoretical value of the synthetic swap associ-
ated with any given non-equity product proposed for an exchange.
Indeed, a good proxy of such theoretical value can be obtained by
discounting back to the exchange date the conditional mean of each
[Figure 2.13: densities of the product new3 and the old product.]
Figure 2.14 Density of the differences between the product new3 and
the old product
Table 2.14 Layout of the conditional values on the tails for a non-equity
exchange product
the region of the x-axis which ends at that quantile (left tail) or starts
at it (right tail). The latter indicator seems preferable since, unlike
the sole quantile, it is sensitive to the specific shape of the tail, and
hence conveys more valuable information to investors.
The quantile associated with the probability mass $P_1$ cumulated
on the left tail of the density of $D_T$ is denoted by $\delta_1$, and the quantile
associated with the probability mass $(1 - P_2)$ cumulated on the right
tail of the same density is denoted by $\delta_2$, ie

$$Q[D_T \leq \delta_1] = P_1, \qquad Q[D_T \geq \delta_2] = 1 - P_2 \qquad (2.23)$$

In this work, $P_1$ is set at 2.5% and $P_2$ is equal to $(1 - P_1)$.14 Recalling
that $f_{D_T}(\cdot)$ denotes the risk-neutral density of $D_T$, the sought condi-
tional expected values (so-called conditional values on the tails) are
defined as15

$$E^{Q}[D_T \mid D_T \leq \delta_1] = \frac{1}{2.5\%} \int_{-\infty}^{\delta_1} x \, f_{D_T}(x) \, \mathrm{d}x \qquad (2.24)$$
$$E^{Q}[D_T \mid D_T \geq \delta_2] = \frac{1}{2.5\%} \int_{\delta_2}^{+\infty} x \, f_{D_T}(x) \, \mathrm{d}x$$
Like the conditional means of the two scenarios of the probability
table, the conditional values on the tails expressed in Equation 2.24
can also easily be obtained by simple calculations based on the sim-
ulated probability density of DT . After these calculations, the two
additional indicators can be represented using the layout shown in
Table 2.14.
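In the same vein, the conditional values on the tails can be estimated directly from the simulated differences (a sketch, using the 2.5% mass stated in the text):

    def tail_values(d, p1=0.025):
        # left and right tail quantiles of the simulated density of D_T
        q1, q2 = np.quantile(d, [p1, 1 - p1])
        left = d[d <= q1].mean()      # conditional value on the left tail
        right = d[d >= q2].mean()     # conditional value on the right tail
        return left, right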
Evaluating the conditional values on the tails for the densities
shown in Figures 2.10 and 2.14, respectively, yields the numbers
reported in Tables 2.15 and 2.16.
By comparing these two tables any investor can easily understand
that, probabilities and conditional means being equal, by exchanging
Table 2.15 Conditional values on the tails for the exchange hypothesis
old versus new1
Table 2.16 Conditional values on the tails for the exchange hypothesis
old versus new3
the old product with the product new1 they are moving to a position
which reduces the overall riskiness of the performances achievable
at the end of the time horizon more than if they were to opt for
swapping their original investment with the product new3.
The methodology exposed in this section can also be extended
with minor adaptations to situations where the non-equity financial
structure is embedded into a financial liability (such as a mortgage)
held by a retail entity, such as a small or medium-sized firm, a munici-
pality or another similar body, and is aimed at mitigating the risks
and reducing the costs of that liability.16 The persistent validity and
usefulness of this approach are ensured by simply considering that,
except for the sign of the cashflows (which is in fact negative), struc-
tured liabilities feature a financial engineering that mirrors that of
non-equity investment products, and, exactly like investment prod-
ucts, they typically lead to a specific risk exposure or to the modifi-
cation of an outstanding exposure, also through the embedding of
derivative-like components.
One possible example of interest could be that of a retail individ-
ual who had previously signed a fixed-rate mortgage, and in the face
of a downward movement in the interest rate curve is now looking
for a financial contract (most likely an interest rate swap) that allows
[Figure 2.15: risk-neutral density of the five-year fixed-coupon bond; IRR = 3.06%; average annual return = 2.68%.]
for taking the credit risk of the issuer, the bond is paying a spread: if
this spread is at least in line with the one required by the market, then
the fair value will be no lower than the price. By contrast, an internal
rate of return that is too low in absolute terms (or, in any case, equal to or
below the risk-free rate) means that there is a gap between price and
fair value which identifies the implicit costs incurred by investors.
Example 2.6 clarifies these concepts with the support of graphical
illustrations also confirming the validity of simplified return indi-
cators to highlight the performance risk of elementary products as an
alternative to the table of probabilistic scenarios.
Example 2.6. Figures 2.15 and 2.16 show the risk-neutral densities of
two five-year bonds, the former is a flat fixed-coupon bond and the
latter is a floating-coupon bond plus a constant spread. Both bonds
have an average annual credit spread around 40 basis points and
pay an equivalent extra-return over the risk-free rate; consequently,
their fair value is 100: no implicit costs are charged to investors and
the average annualised return is around 2.68%, ie, the same as the
risk-free rate referring to the five-year maturity. The internal rate of
return is higher and equal to around 3.06%, signalling that investors
are rewarded for entering products exposed to the credit risk of the
issuer. However, the two numbers (ie, 2.68% and 3.06%) are similar,
indicating that, the left-side mode being very small, the internal rate
of return can be used as a suitable return indicator. The fixed-coupon
bond exhibits a lower dispersion around the mean value, while the
[Figure 2.16: risk-neutral density of the five-year floating-coupon bond; average annual return = 2.68%; IRR = 3.06%.]
floating one has a wider range of possible returns due to the impact
of the volatility of the interest rates on the level of the coupons and
on their compounding until maturity.
1 CPPI and OBPI stand for Constant Proportion Portfolio Insurance and Option-Based Portfolio
Insurance, respectively; they are management techniques specifically aimed at protecting a
certain percentage of the value of the financial investment by combining low-risk assets (which
play the main insurance role) with risky assets (which are used to pursue extra-returns).
2 It is worth noting that at time t = 0 Equation 2.2 assumes an investment of unit value in the
risk-free asset.
3 Similar services affect the risk-return profile of the investment and its costs regime and, as
explained in Chapter 4, allow for the identification of a minimum time horizon besides the
one which is implicit in the product.
4 Clearly, if the non-equity products provide amortising solutions of capital redemption,
Equation 2.4 must be properly rearranged.
5 For details about this technique see Section 2.3.4, which explains the core of the methodology
to carry out a proper probabilistic comparison between two non-equity products involved in
an exchange.
6 The concept of the risk-free asset is close to that of the risk-free security of Definition 2.1,
Section 2.3.1. However, the former does not inherit the schedule of the cashflows of the non-
equity product. This is due to its role of numraire which, in the perspective of an objective
probabilistic comparison, requires it to be the same for any non-equity return-target product.
7 This event is of great importance in determining the minimum recommended investment
time horizon. See Chapter 4.
8 Other characterisations that are in line with the principles described may be explored.
9 It is worth noting that if one knows that ST cannot take values lower (higher) than a given
finite value, then the lower (upper) extremum of the integral appearing in the right-hand side
of the first (last) equation of Expression 2.19 has to be determined by replacing $-\infty$ ($+\infty$) with
such a finite value.
10 Clearly, the goodness of this approximation tends to diminish the longer the recommended
time horizon of the product. In fact, when the four mean values reported in the probability
table are referred to quite long maturities, their discounting through a deterministic interest
rates curve tends to lose too much information about the variability associated with the interest
rates curve itself. In similar cases, a more reliable estimate of the fair value of the product is
obtained by discounting back at the issue date any possible final value of the investment over
the corresponding path of the risk-free rate.
11 The same kind of disclosure is also useful in the case where the new non-equity product is
liquid, provided that the investor is interested in an exchange-and-hold strategy.
12 If the two products have different implicit time horizons T1 and T2 with T1 < T2 , the longer
time horizon prevails and the final values at time T1 of the shorter product are compounded
until time T2 at the risk-free rates.
13 Clearly, the goodness of this approximation tends to diminish the longer the final maturity of
the two products involved. In fact, when the two mean values reported in the probability table
are referred to quite long time horizons, their discounting through a deterministic interest rates
curve tends to lose too much information about the variability associated with the interest
rates curve itself. In similar cases, a more reliable estimate of the initial theoretical value of
the synthetic swap associated with a specific exchange hypothesis is obtained by discounting
back at the exchange date the final value of any possible difference (ie, DiT , i = 1, 2, . . . , m)
over the corresponding path of the risk-free rate.
14 Other characterisations that are in line with the principles described may be explored.
15 It is worth noting that if it is known that DT cannot take values lower (higher) than a given finite value, then the lower (upper) extremum of the integral appearing in the right-hand side of the first (last) equation of Expression 2.24 has to be determined by replacing −∞ (+∞) with such a finite value.
16 Clearly, in these cases the descriptions of the events considered in the table of probabilistic
scenarios (see Table 2.11) and in the table of the conditional values on the tails (see Table 2.14)
need to be properly revised. For an example see Section 5.6.
17 The other element which can affect the shape of the risk-neutral density is a predetermined amortising plan, which typically increases the dispersion of the final payoffs distribution.
18 Typically, the internal rate of return is slightly above this mean value exactly because this
implicit assumption does not correspond perfectly to reality.
19 It is worth noting that ratings alone can be weak information to rely on. The reason is that
ratings are often based on very long historical time series, a feature that causes an inherent
inertia of the ratings. As a consequence, the deriving credit risk measurements can be poorly
related with the actual financial conditions of the issuer at a time close to the issue date.
which governs the stochastic process of the variance, {σ_t²}_{t≥0}, of the product managed by the automatic asset manager and where
[Figure: histogram of the simulated annualised volatility (%).]
where

Equation 3.5 describes the stochastic process of the product value, {S_t}_{t≥0},

r is the risk-free rate,4

σ_t² is the variance of the process {S_t}_{t≥0} and its stochastic process is governed by the stochastic differential equation 3.1, whose parameters κ, θ and ν are calibrated as described in Section 3.2,

{W_{1,t}}_{t≥0} and {W_{2,t}}_{t≥0} are two standard Brownian motions under Q and are linked by a correlation coefficient ρ (ie, dW_{1,t} dW_{2,t} = ρ dt).5
Remark 3.5. The pair composed of the stochastic differential equations 3.5 and 3.1 is known as the Heston model (Heston 1993).
Through the model of Proposition 3.4 it is possible to simulate the time evolution of the annualised volatility of daily returns of the product managed by an automatic asset manager with a risk budget [σ_min,AAM, σ_max,AAM].
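Before turning to the calibration details, a minimal simulation sketch may help. The following Python fragment simulates one path of the Heston pair (Equations 3.5 and 3.1) by an Euler full-truncation scheme and returns the annualised volatility of the simulated daily returns; all parameter values (kappa, theta, nu, rho and the rest) are illustrative assumptions, not the book's calibrated values:

```python
import numpy as np

def simulate_heston(s0=100.0, v0=0.04, r=0.02, kappa=2.0, theta=0.04,
                    nu=0.3, rho=0.0, years=1.0, steps_per_year=252, seed=0):
    """Euler (full-truncation) simulation of one path of the Heston pair:
    dS = r*S*dt + sqrt(v)*S*dW1,  dv = kappa*(theta - v)*dt + nu*sqrt(v)*dW2,
    with corr(dW1, dW2) = rho.  Returns the path of S and the annualised
    volatility of its daily log-returns."""
    rng = np.random.default_rng(seed)
    n = int(years * steps_per_year)
    dt = 1.0 / steps_per_year
    s = np.empty(n + 1); v = np.empty(n + 1)
    s[0], v[0] = s0, v0
    for k in range(n):
        z1 = rng.standard_normal()
        z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal()
        vp = max(v[k], 0.0)                          # full truncation at zero
        v[k + 1] = v[k] + kappa * (theta - vp) * dt + nu * np.sqrt(vp * dt) * z2
        s[k + 1] = s[k] * np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
    rets = np.diff(np.log(s))
    return s, rets.std(ddof=1) * np.sqrt(steps_per_year)
```

Repeating the simulation over many paths, one can check empirically that the realised annualised volatility remains within the assigned risk budget [σ_min,AAM, σ_max,AAM].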
Indeed, once this interval is known, the model identified by the stochastic differential equations 3.5 and 3.1, where the parameters are chosen in accordance with Equations 3.2–3.4, allows simulation,
t_k = k Δt   (3.6)

where

R̄_{t_k}^{(i)} = (1/τ) Σ_{j=k−τ+1}^{k} R_{t_j}^{(i)}   (3.9)

The width τ of the time window for observing the returns significantly affects the results of the calibration procedure and also, consequently, the setting of appropriate rules for the identification of migrations between classes of risk.
In fact, for the same N, and hence T, a higher (lower) value of τ determines, through the equality H = N − τ + 1 = 252T − τ + 1, a lower (higher) number H of volatility observations available for the calibration.
[Figure: simulated annualised volatility (%) against time.]
tails of the normal distribution and therefore are rare but of relevant
size.
As mentioned above, in formulating its forecasts on the future
volatility the market has a sigma algebra that incorporates the infor-
mation recorded up to day k. By construction, these forecasts can-
not incorporate what happens in the period between day k and the
next day. This lack of information becomes more relevant if the events occurring in this period are capable of significantly influencing the volatility that will be realised on day k + 1, thus creating the conditions for the occurrence of a management failure.
However, given the low probability associated with shocks of large magnitude, a number of management failures that is too high is (at least in part) abnormal, as it indicates that compliance with the risk budget requires the automatic asset manager to put in place a management strategy which is anomalous compared with the reasonable expectations of the market.
The predictive model used in this work is the diffusion limit of the
M-Garch(1,1) of Geweke, Pantula and Mihoj.7 There are two reasons
for the choice of this model.
First, the definition of prediction intervals starting from stochas-
tic differential equations can produce robust estimates of the future
volatility by using a few observations of the past values of this
financial variable.
Since solutions based on stochastic difference equations, such as
those that characterise traditional Garch models, operate in discrete
time, they provide reliable forecasts of the volatility conditioned on
having a sufficiently high number of observations. In the presence of few observations, however, they would pose problems of statistical significance or computational difficulty.
In contrast, the forecasts made by continuous-time models have
the advantage of performing well, even when working on poorer
sigma algebras, because they extrapolate the information on the
recent behaviour of the variable of interest so as to allow a prompt
revision of the width and the levels of the prediction intervals (so-
called adaptivity), and hence to exclude effects known as echoes of
the markets, defined as the persistence of wrong predictions due to
the lack of an embedded updating of the bounds of the prediction
intervals.8
or equivalently

ln σ²_{k+1} = β₀^{(k)} + β₁^{(k)} ln σ²_k + 2β₁^{(k)} ln |Z_k|   (3.11)

where β₀^{(k)} and β₁^{(k)} (with β₁^{(k)} > 0) are two parameters whose superscript indicates that they refer to the discrete process {ln σ²_k}_{k∈ℕ}, and the initial condition of this process, ie, ln σ₀², is known and is equal to l₀.
For ease of reference, Equation 3.11 is rewritten in differential terms

ln σ²_{k+1} − ln σ²_k = β₀^{(k)} + (β₁^{(k)} − 1) ln σ²_k + 2β₁^{(k)} ln |Z_k|   (3.12)

Given the initial condition (equal to the constant l₀) and the fact that {Z_k}_{k∈ℕ} is a sequence of iid normal random variables with zero mean and unit variance, the process {ln σ²_k}_{k∈ℕ} is a discrete Markov chain (Karatzas and Shreve 2005) with respect to the filtration {F_k}_{k∈ℕ}. The pair (ℝ, B(ℝ)) defines the measurable space of {ln σ²_k}_{k∈ℕ}, where B(ℝ) is the Borel sigma algebra on ℝ.
[Figure: time grids of the two discrete processes: {ln σ²_k}_{k∈ℕ} moves from k to k + 1 with step 1, while {ln σ²_{kh}}_{kh≥0} moves along the finer grid k, k + h, k + 2h, …, k + h(1/h) = k + 1.]

The pair (ℝ, B(ℝ)) defines the measurable space of the process {ln σ²_{kh}}_{kh≥0}, where B(ℝ) is the Borel sigma algebra on ℝ. Each discrete Markov process defined in this way is identified by the initial
E([(ln |Z|)_{kh} − E((ln |Z|)_{kh})]³) = h^{3/2} E([ln |Z_k| − E(ln |Z_k|)]³)   (3.19)
The following are preliminary results and concepts which are useful for stating the theorem of weak convergence of the M-Garch(1,1) to the corresponding diffusion process.
Definition 3.9. The Skorokhod space is the space of the functions from [0, +∞[ to ℝ with right-continuous paths and finite left limits, ie,

D = D([0, +∞[, ℝ) := {f : [0, +∞[ → ℝ : for all t ≥ 0, f(t⁺) = f(t) and f(t⁻) exists}   (3.20)
Then, P_h uniquely defines, with respect to the filtration {F^h_t}_{t≥0}, the jump-continuous Markov process {ln σ²_{h,t}}_{t≥0} associated with the discrete Markov process {ln σ²_{kh}}_{kh≥0}, where

ln σ²_{h,·} : [0, +∞[ × (Ω, F) → (D, B(D))
Figure 3.4 Relation between the processes {ln σ²_{kh}}_{kh≥0} and {ln σ²_{h,t}}_{t≥0} [figure: step-function interpolation over the time grid 0, h, 2h, 3h, …]

It is clear that {ln σ²_{h,t}}_{t≥0} is characterised by an initial distribution equal to p₀(Γ) and by a transition probability p^h_{s,h}(·, Γ),13 both defined on (ℝ, B(ℝ)). In particular, for all Γ ∈ B(ℝ),

1. P_h(ln σ²_{h,0} ∈ Γ) = p₀(Γ),
2. P_h(ln σ²_{h,t} ∈ Γ | F^h_{s−h}) = P_h(ln σ²_{h,t} ∈ Γ | ln σ²_{h,s}) = p^h_{s,h}(·, Γ) for all t = s + h.14

Lemma 3.11. Let {(ln |Z|)_{h,t}}_{t≥0} be a sequence of iid random variables which constitute the innovation term of the discrete stochastic process of Equation 3.27. Then, if {(ln |Z|)_{h,t}}_{t≥0} is defined as

(ln |Z|)_{h,t} = √h ln |Z_k| + (h − √h) E(ln |Z_k|)   (3.28)
d ln σ_t² = (β₀ + 2β₁ E(ln |Z_t|) + (β₁ − 1) ln σ_t²) dt + 2|β₁| √(var(ln |Z_t|)) dW_t   (3.32)

where {W_t}_{t≥0} is a standard Brownian motion under P^{ln σ_t²}.15
Theorem 3.12 is a specific application of the more general theorem of weak convergence of discrete Markov processes to diffusion processes (Stroock and Varadhan 1979), and, like its general form, it relies on the validity of four conditions.
Before examining the details of these conditions in the case of interest, it is useful to recall that, in intuitive terms, weak convergence means that, for h → 0, the sequence of probability measures {P_h}_{h>0} converges to the probability measure P^{ln σ_t²} of the diffusion process {ln σ_t²}_{t≥0} on the measurable space (ℝ, B(ℝ)).16
In other words, for every T, 0 ≤ T < ∞, the probability law that generates the entire sample trajectories of {ln σ²_{h,t}}_{t≥0} for 0 ≤ t ≤ T converges to the probability law that generates the sample trajectories of ln σ_t² for 0 ≤ t ≤ T.17
The four conditions are now examined in the specific case of the weak convergence of the process {ln σ²_{h,t}}_{t≥0} to the process {ln σ_t²}_{t≥0}, the latter described by Equation 3.32.18
Condition 1 requires that the first non-central conditional moment of the process {ln σ²_{h,t}}_{t≥0} converges for h → 0 to a function which
and

E[(ln σ²_{h,t+h} − ln σ²_{h,t})³ | F^h_{t−h}] = (A^h_t)³ + 3(A^h_t)² E[B^h_t] + 3A^h_t E[(B^h_t)²] + E[(B^h_t)³]   (3.41)

and

E[(ln σ²_{h,t+h} − ln σ²_{h,t})³ | F^h_{t−h}] = (E[(ln σ²_{h,t+h} − ln σ²_{h,t}) | F^h_{t−h}])³ + 3A^h_t var(B^h_t) + E[(B^h_t)³] − (E[B^h_t])³   (3.43)

At this point, given Equation 3.39, proving Equation 3.33 is equivalent to proving that

lim_{h→0} (1/h)(A^h_t + E[B^h_t]) =? β₀ + 2β₁ E(ln |Z_t|) + (β₁ − 1) ln σ_t²   (3.44)

lim_{h→0} (1/h)[β₀h + (β₁h − h) ln σ²_{h,t} + 2β₁h E(ln |Z_t|)] =? β₀ + 2β₁ E(ln |Z_t|) + (β₁ − 1) ln σ_t²   (3.45)

and

lim_{h→0} (1/h)[β₀h + 2β₁h E(ln |Z_t|)] =? β₀ + 2β₁ E(ln |Z_t|)   (3.47)
lim_{h→0} (1/h)[(E[(ln σ²_{h,t+h} − ln σ²_{h,t}) | F^h_{t−h}])³ + 3A^h_t var(B^h_t) + E[(B^h_t)³] − (E[B^h_t])³] =? 0   (3.50)

lim_{h→0} 3(β₀h + (β₁h − h) ln σ²_{h,t}) (4β₁h²/h²) var(ln |Z_t|) + lim_{h→0} √h (8β₁h³/h³) [E((ln |Z_t|)³) − (E(ln |Z_t|))³]   (3.51)
lim_{h→0} 3(β₀h + (β₁h − h) ln σ²_{h,t}) (4β₁h²/h²) var(ln |Z_t|) + lim_{h→0} √h (8β₁h³/h³) [E((ln |Z_t|)³) − (E(ln |Z_t|))³] =? 0   (3.52)

In summary, Condition 1 is met if there is a sequence of parameters {β₀h, β₁h} such that for h → 0 the following limits are verified

lim_{h→0} (1/h)[(β₁h − h) ln σ²_{h,t}] =? (β₁ − 1) ln σ_t²   (3.46)

lim_{h→0} (1/h)[β₀h + 2β₁h E(ln |Z_t|)] =? β₀ + 2β₁ E(ln |Z_t|)   (3.47)

lim_{h→0} (4β₁h²/h²) var(ln |Z_t|) =? 4β₁² var(ln |Z_t|)   (3.49)

and

lim_{h→0} 3(β₀h + (β₁h − h) ln σ²_{h,t}) (4β₁h²/h²) var(ln |Z_t|) + lim_{h→0} √h (8β₁h³/h³) [E((ln |Z_t|)³) − (E(ln |Z_t|))³] =? 0   (3.52)

Setting β₁h := hβ₁ clearly satisfies Equations 3.46 and 3.49. Moreover, by setting β₀h := hβ₀, Equation 3.47 is also satisfied. Finally, substituting the values of β₀h and β₁h, as defined above, into the left-hand side of Equation 3.52 gives

lim_{h→0} 3(hβ₀ + (hβ₁ − h) ln σ²_{h,t}) (4h²β₁²/h²) var(ln |Z_t|) + lim_{h→0} √h (8h³β₁³/h³) [E((ln |Z_t|)³) − (E(ln |Z_t|))³]

and simplifying

lim_{h→0} 3(hβ₀ + (hβ₁ − h) ln σ²_{h,t}) 4β₁² var(ln |Z_t|) + lim_{h→0} √h 8β₁³ [E((ln |Z_t|)³) − (E(ln |Z_t|))³] = 0
ln σ_t² | F_s ∼ N( e^{(β₁−1)(t−s)} ln σ_s² + (1 − e^{(β₁−1)(t−s)}) (β₀ + 2β₁ E(ln |Z_t|))/(1 − β₁) ; √( (2|β₁| √(var(ln |Z_t|)))² (e^{2(β₁−1)(t−s)} − 1) / (2(β₁ − 1)) ) )   (3.53)

d ln σ_t² = (β₀ + 2β₁ E(ln |Z_t|) − (1 − β₁) ln σ_t²) dt + 2|β₁| √(var(ln |Z_t|)) dW_t   (3.54)
and variance

var(ln σ_t² | F_s) = 4β₁² var(ln |Z_t|) (e^{2(β₁−1)(t−s)} − 1) / (2(β₁ − 1))   (3.59)

to which the following standard deviation corresponds

SD(ln σ_t² | F_s) = √( (2|β₁| √(var(ln |Z_t|)))² (e^{2(β₁−1)(t−s)} − 1) / (2(β₁ − 1)) )   (3.60)

and, thus, Equation 3.53 is verified.

σ_{G,t,min} = exp( ½ [ −z_{(1+λ)/2} √( (2|β₁| √(var(ln |Z_t|)))² (e^{2(β₁−1)} − 1) / (2(β₁ − 1)) ) + e^{β₁−1} ln σ²_{t−1} + (1 − e^{β₁−1}) (β₀ + 2β₁ E(ln |Z_t|))/(1 − β₁) ] )   (3.61)

and

σ_{G,t,max} = exp( ½ [ z_{(1+λ)/2} √( (2|β₁| √(var(ln |Z_t|)))² (e^{2(β₁−1)} − 1) / (2(β₁ − 1)) ) + e^{β₁−1} ln σ²_{t−1} + (1 − e^{β₁−1}) (β₀ + 2β₁ E(ln |Z_t|))/(1 − β₁) ] )   (3.62)
where the subscript G denotes that these are the bounds of the prediction interval calculated by using the diffusion limit of the M-Garch(1,1).
The values of E(ln |Z_t|) and var(ln |Z_t|) are deterministic functions of the Euler–Mascheroni constant γ, also called the Euler gamma, whose value is the result of the following limit (Abramowitz and Stegun 1964)

γ = lim_{n→∞} ( Σ_{k=1}^{n} 1/k − ln n )

and it is approximately 0.57721.
In particular,

E(ln |Z_t|) = −(γ + ln 2)/2 ≈ −0.6352

and

var(ln |Z_t|) = π²/8 ≈ 1.2337

Hence, the following two constants can be defined

c = −2E(ln |Z_t|) = −E(ln Z_t²) ≈ 1.2704   (3.63)

d = 2√(var(ln |Z_t|)) = √(var(ln Z_t²)) ≈ 2.2214   (3.64)

which allow the next corollary to Proposition 3.14 to be stated.
Corollary 3.15. For a given confidence level λ, the one-day prediction interval for the volatility (ie, setting s = t − 1) associated with the diffusion process of Equation 3.32 has the following lower and upper bounds

σ_{G,t,min} = exp( ½ [ −z_{(1+λ)/2} d|β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) ) + e^{β₁−1} ln σ²_{t−1} + (1 − e^{β₁−1}) (β₀ − cβ₁)/(1 − β₁) ] )   (3.65)

and

σ_{G,t,max} = exp( ½ [ z_{(1+λ)/2} d|β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) ) + e^{β₁−1} ln σ²_{t−1} + (1 − e^{β₁−1}) (β₀ − cβ₁)/(1 − β₁) ] )   (3.66)
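A compact numerical sketch of Corollary 3.15 follows (Python; the function and parameter names are assumptions, and the formulas implement the reconstruction of Equations 3.65–3.66 given above):

```python
import math

# Constants from Equations 3.63-3.64 (GAMMA is the Euler-Mascheroni constant).
GAMMA = 0.5772156649
C = GAMMA + math.log(2.0)          # c = -2E(ln|Z|) ~ 1.2704
D = math.pi / math.sqrt(2.0)       # d = 2*sqrt(var(ln|Z|)) ~ 2.2214

def one_day_prediction_interval(sigma_prev, beta0, beta1, z):
    """One-day volatility prediction interval (Equations 3.65-3.66).

    sigma_prev : volatility observed at day t-1 (eg, 0.10 for 10%)
    beta0, beta1 : estimated parameters of the diffusion limit (beta1 < 1)
    z : standard normal quantile z_{(1+lambda)/2}, eg 1.96 for lambda = 95%
    """
    ln_var = math.log(sigma_prev ** 2)
    m = (beta0 - C * beta1) / (1.0 - beta1)          # long-run mean of ln sigma^2
    mean = m + (ln_var - m) * math.exp(beta1 - 1.0)  # one-day conditional mean
    sd = D * abs(beta1) * math.sqrt(
        (math.exp(2.0 * (beta1 - 1.0)) - 1.0) / (2.0 * (beta1 - 1.0)))
    return math.exp(0.5 * (mean - z * sd)), math.exp(0.5 * (mean + z * sd))
```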
Once the functional form of the one-day prediction interval for the volatility is known (ie, for the volatility that will take place on day t), in order to determine the actual numerical value of the bounds of this interval it is necessary to know the value of the volatility at day s = t − 1 and the estimates of the parameters β₀ and β₁.

ln σ²_{k+1} = β₀^{(k)} + β₁^{(k)} ln σ²_k + β₁^{(k)} ln Z²_k   (3.10)

or, equivalently, by

ln σ²_{k+1} = β₀^{(k)} + β₁^{(k)} ln σ²_k + 2β₁^{(k)} ln |Z_k|   (3.11)

d ln σ_t² = (β₀ + 2β₁ E(ln |Z_t|) + (β₁ − 1) ln σ_t²) dt + 2|β₁| √(var(ln |Z_t|)) dW_t   (3.32)

Then, the parameters of these two processes are linked by the following relations

β₁^{(k)} = |β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) )   (3.67)
β₀^{(k)} = −2|β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) ) E(ln |Z_k|) − |β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) ) ln σ_k² + e^{β₁−1} ln σ_k² + (β₀ + 2β₁ E(ln |Z_k|)) (e^{β₁−1} − 1)/(β₁ − 1)   (3.68)
(β₁^{(k)})² = β₁² (e^{2(β₁−1)} − 1)/(2(β₁ − 1))

β₀^{(k)} + β₁^{(k)} ln σ_k² + 2β₁^{(k)} E(ln |Z_k|) = e^{β₁−1} ln σ_k² + (β₀ + 2β₁ E(ln |Z_k|)) (e^{β₁−1} − 1)/(β₁ − 1)

and hence

β₁^{(k)} = |β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) )   (3.73)

It follows that

β₀^{(k)} + |β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) ) ln σ_k² + 2|β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) ) E(ln |Z_k|) = e^{β₁−1} ln σ_k² + (β₀ + 2β₁ E(ln |Z_k|)) (e^{β₁−1} − 1)/(β₁ − 1)

which, making explicit for β₀^{(k)}, yields

β₀^{(k)} = −2|β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) ) E(ln |Z_k|) − |β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) ) ln σ_k² + e^{β₁−1} ln σ_k² + (β₀ + 2β₁ E(ln |Z_k|)) (e^{β₁−1} − 1)/(β₁ − 1)
(1/√(2π)) (1/|β₁|) √( 2(β₁ − 1)/(e^{2(β₁−1)} − 1) ) exp( (1/(2|β₁|)) √( 2(β₁ − 1)/(e^{2(β₁−1)} − 1) ) [ Y_{k+1} − (β₀ − cβ₁)(e^{β₁−1} − 1)/(β₁ − 1) − c|β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) ) − (e^{β₁−1} − 1) ln σ_k² ] − ½ exp( (1/|β₁|) √( 2(β₁ − 1)/(e^{2(β₁−1)} − 1) ) [ Y_{k+1} − (β₀ − cβ₁)(e^{β₁−1} − 1)/(β₁ − 1) − c|β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) ) − (e^{β₁−1} − 1) ln σ_k² ] ) )   (3.74)

where

Y_{k+1} = ln σ²_{k+1} − ln σ_k²   (3.75)
Proof Using Equations 3.67 and 3.68, Equation 3.11 can be expressed in terms of the parameters β₀ and β₁ of the corresponding diffusion limit, ie

ln σ²_{k+1} = (β₀ + 2β₁ E(ln |Z_k|)) (e^{β₁−1} − 1)/(β₁ − 1) − 2|β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) ) E(ln |Z_k|)
− |β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) ) ln σ_k² + e^{β₁−1} ln σ_k² + |β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) ) ln σ_k² + 2|β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) ) ln |Z_k|

which simplifies to

ln σ²_{k+1} = (β₀ + 2β₁ E(ln |Z_k|)) (e^{β₁−1} − 1)/(β₁ − 1) − 2|β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) ) E(ln |Z_k|) + e^{β₁−1} ln σ_k² + 2|β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) ) ln |Z_k|

Expressing this in differential form yields

ln σ²_{k+1} − ln σ_k² = (β₀ + 2β₁ E(ln |Z_k|)) (e^{β₁−1} − 1)/(β₁ − 1) − 2|β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) ) E(ln |Z_k|) + (e^{β₁−1} − 1) ln σ_k² + 2|β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) ) ln |Z_k|   (3.76)

By

Y_{k+1} = ln σ²_{k+1} − ln σ_k²   (3.75)
Since the random variables of the sequence {ln |Z_k|}_{k∈ℕ} are iid, and setting for notational simplicity

ξ := g(|Z_k|) = ln |Z_k|   (3.78)

where g(x) = ln x, Equation 3.77 becomes

Y_{k+1} = (β₀ − cβ₁)(e^{β₁−1} − 1)/(β₁ − 1) + c|β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) ) + (e^{β₁−1} − 1) ln σ_k² + 2|β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) ) ξ   (3.79)

By applying a well-known theorem on the functions of a random variable, the density function of ξ is determined to be

f_ξ(ξ) = (2/√(2π)) exp( ξ − e^{2ξ}/2 )   (3.80)

By applying the same theorem again, the density function of Y_{k+1} is also determined

f_{Y_{k+1}}(Y_{k+1}) = (1/√(2π)) (1/|β₁|) √( 2(β₁ − 1)/(e^{2(β₁−1)} − 1) ) exp( (1/(2|β₁|)) √( 2(β₁ − 1)/(e^{2(β₁−1)} − 1) ) [ Y_{k+1} − (β₀ − cβ₁)(e^{β₁−1} − 1)/(β₁ − 1) − c|β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) ) − (e^{β₁−1} − 1) ln σ_k² ] − ½ exp( (1/|β₁|) √( 2(β₁ − 1)/(e^{2(β₁−1)} − 1) ) [ Y_{k+1} − (β₀ − cβ₁)(e^{β₁−1} − 1)/(β₁ − 1) − c|β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) ) − (e^{β₁−1} − 1) ln σ_k² ] ) )   (3.81)
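The density in Equation 3.80 is easy to verify by simulation. The following sketch (Python; purely illustrative) compares a histogram of ln |Z| for standard normal draws with the closed form, and checks the two moments used in Equations 3.63–3.64:

```python
import numpy as np

rng = np.random.default_rng(0)
xi = np.log(np.abs(rng.standard_normal(1_000_000)))  # samples of ln|Z|

hist, edges = np.histogram(xi, bins=200, range=(-4.0, 1.5), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
# Closed-form density from Equation 3.80: f(x) = 2/sqrt(2*pi) * exp(x - exp(2x)/2)
f = 2.0 / np.sqrt(2.0 * np.pi) * np.exp(centers - np.exp(2.0 * centers) / 2.0)

print(np.max(np.abs(hist - f)))   # small sampling/discretisation error
print(xi.mean(), xi.var())        # ~ -0.6352 and ~ 1.2337
```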
(1/√(2π)) (1/|β₁|) √( 2(β₁ − 1)/(e^{2(β₁−1)} − 1) ) exp( (1/(2|β₁|)) √( 2(β₁ − 1)/(e^{2(β₁−1)} − 1) ) [ Y_{k+1} − (β₀ − cβ₁)(e^{β₁−1} − 1)/(β₁ − 1) − c|β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) ) − (e^{β₁−1} − 1) ln σ_k² ] − ½ exp( (1/|β₁|) √( 2(β₁ − 1)/(e^{2(β₁−1)} − 1) ) [ Y_{k+1} − (β₀ − cβ₁)(e^{β₁−1} − 1)/(β₁ − 1) − c|β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) ) − (e^{β₁−1} − 1) ln σ_k² ] ) )   (3.74)
ε_{k+1}(β₀, β₁) = Y_{k+1} − (β₀ − cβ₁)(e^{β₁−1} − 1)/(β₁ − 1) − c|β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) ) − (e^{β₁−1} − 1) ln σ_k²   (3.84)

f(β₁) = |β₁| √( (e^{2(β₁−1)} − 1)/(2(β₁ − 1)) )   (3.85)

Proof Substituting the left-hand side of Equation 3.84 into the right-hand side of Equation 3.74 gives

L(Y; β₀, β₁) = Π_{k=1}^{K−1} (1/√(2π)) (1/|β₁|) √( 2(β₁ − 1)/(e^{2(β₁−1)} − 1) ) exp( (1/(2|β₁|)) √( 2(β₁ − 1)/(e^{2(β₁−1)} − 1) ) ε_{k+1}(β₀, β₁) − ½ exp( (1/|β₁|) √( 2(β₁ − 1)/(e^{2(β₁−1)} − 1) ) ε_{k+1}(β₀, β₁) ) )   (3.86)

By substituting the left-hand side of Equation 3.85 into the right-hand side of Equation 3.86, after a little algebra, it follows that

L(Y; β₀, β₁) = ( 1/(√(2π) f(β₁)) )^{K−1} exp( −½ Σ_{k=1}^{K−1} [ exp( ε_{k+1}(β₀, β₁)/f(β₁) ) − ε_{k+1}(β₀, β₁)/f(β₁) ] )   (3.87)
Finally, substituting the left-hand side of Equation 3.83 into the right-hand side of Equation 3.87 yields

L(Y; β₀, β₁) = ( 1/(√(2π) f(β₁)) )^{K−1} exp( −½ ℓ(β₀, β₁) )   (3.82)
O_k = ln(σ_k²)   (3.93)

A_{k+1}(β₁) = ( O_{k+1} − e^{β₁−1} O_k ) / f(β₁)   (3.94)

B(β₁) = cβ₁ (e^{β₁−1} − 1)/((β₁ − 1) f(β₁)) − c   (3.95)

C(β₁) = (e^{β₁−1} − 1)/((β₁ − 1) f(β₁))   (3.96)

Ā(β₁) = (1/(K − 1)) Σ_{k=1}^{K−1} A_{k+1}(β₁)   (3.97)

min_{β₀} ℓ(β₀, β₁)   (3.99)

where ℓ(β₀, β₁) is defined by

ℓ(β₀, β₁) = Σ_{k=1}^{K−1} [ exp( ε_{k+1}(β₀, β₁)/f(β₁) ) − ε_{k+1}(β₀, β₁)/f(β₁) ]   (3.83)

First, it has to be proved that the problem 3.99 admits at least one solution.
∂ℓ(β₀, β₁)/∂β₀ = Σ_{k=1}^{K−1} exp( ε_{k+1}(β₀, β₁)/f(β₁) ) ∂(ε_{k+1}(β₀, β₁)/f(β₁))/∂β₀ − Σ_{k=1}^{K−1} ∂(ε_{k+1}(β₀, β₁)/f(β₁))/∂β₀   (3.105)
M(Y; β̂₀(β₁), β₁) = ( f(β₁) )^{−2} e^{−1} [ (1/(K − 1)) Σ_{j=1}^{K−1} exp( A_{j+1}(β₁) − Ā(β₁) ) ]^{−1}   (3.110)

where

Ā(β₁) = (1/(K − 1)) Σ_{k=1}^{K−1} A_{k+1}(β₁)   (3.97)

Proof By assumption

β̂₀(β₁) = (1/C(β₁)) [ B(β₁) + ln( (1/(K − 1)) Σ_{j=1}^{K−1} exp(A_{j+1}(β₁)) ) ]   (3.111)

ε_{k+1}(β̂₀(β₁), β₁)/f(β₁) = A_{k+1}(β₁) − ln( (1/(K − 1)) Σ_{j=1}^{K−1} exp(A_{j+1}(β₁)) )   (3.112)

From Equation 3.112, after a little algebra, it follows that

Σ_{k=1}^{K−1} exp( ε_{k+1}(β̂₀(β₁), β₁)/f(β₁) ) = K − 1   (3.113)
ℓ(β̂₀(β₁), β₁) = (K − 1) − Σ_{k=1}^{K−1} [ A_{k+1}(β₁) − ln( (1/(K − 1)) Σ_{j=1}^{K−1} exp(A_{j+1}(β₁)) ) ]
= (K − 1) − Σ_{k=1}^{K−1} A_{k+1}(β₁) + (K − 1) ln( (1/(K − 1)) Σ_{j=1}^{K−1} exp(A_{j+1}(β₁)) )

ℓ(β̂₀(β₁), β₁) = (K − 1) [ 1 − (1/(K − 1)) Σ_{k=1}^{K−1} A_{k+1}(β₁) + ln( (1/(K − 1)) Σ_{j=1}^{K−1} exp(A_{j+1}(β₁)) ) ]   (3.114)

ℓ(β̂₀(β₁), β₁) = (K − 1) [ 1 + ln( (exp(−Ā(β₁))/(K − 1)) Σ_{j=1}^{K−1} exp(A_{j+1}(β₁)) ) ]   (3.115)

which can be re-expressed as

ℓ(β̂₀(β₁), β₁) = (K − 1) [ 1 + ln( (1/(K − 1)) Σ_{j=1}^{K−1} exp( A_{j+1}(β₁) − Ā(β₁) ) ) ]   (3.116)

Finally, from Equation 3.89, Proposition 3.20 and Equation 3.116 it follows that

M(Y; β̂₀(β₁), β₁) = ( f(β₁) )^{−2} exp( −((K − 1)/(K − 1)) [ 1 + ln( (1/(K − 1)) Σ_{j=1}^{K−1} exp( A_{j+1}(β₁) − Ā(β₁) ) ) ] )   (3.117)
and hence

M(Y; β̂₀(β₁), β₁) = ( f(β₁) )^{−2} e^{−1} [ (1/(K − 1)) Σ_{j=1}^{K−1} exp( A_{j+1}(β₁) − Ā(β₁) ) ]^{−1}   (3.110)
and finally

β̂₀ = cβ̂₁ + (1/C(β̂₁)) [ Ā(β̂₁) + ln( (1/(K − 1)) Σ_{k=1}^{K−1} exp( A_{k+1}(β̂₁) − Ā(β̂₁) ) ) − c ]   (3.119)
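The two-step estimator implied by Equations 3.94, 3.97 and 3.119 can be sketched as follows (Python). The criterion minimised over β₁ is taken here to be the sample mean of exp(A_{k+1} − Ā(β₁)), an assumption consistent with the role of Q(Y; β₁) in the propositions above (Equation 3.118 itself is not reproduced in this excerpt), and a plain grid search replaces a proper optimiser:

```python
import numpy as np

GAMMA = 0.5772156649
c = GAMMA + np.log(2.0)                                  # Equation 3.63

def f(beta1):
    """Equation 3.85."""
    return abs(beta1) * np.sqrt((np.exp(2.0 * (beta1 - 1.0)) - 1.0)
                                / (2.0 * (beta1 - 1.0)))

def fit_diffusion_limit(log_var, beta1_grid=np.linspace(0.5, 0.999, 500)):
    """Estimate (beta0, beta1) from a series O_k = ln sigma_k^2."""
    O = np.asarray(log_var)
    best = None
    for b1 in beta1_grid:
        A = (O[1:] - np.exp(b1 - 1.0) * O[:-1]) / f(b1)  # Equation 3.94
        Abar = A.mean()                                  # Equation 3.97
        q = np.mean(np.exp(A - Abar))                    # assumed criterion
        if best is None or q < best[0]:
            best = (q, b1, A, Abar)
    q, b1, A, Abar = best
    C = (np.exp(b1 - 1.0) - 1.0) / ((b1 - 1.0) * f(b1))  # Equation 3.96
    b0 = c * b1 + (Abar + np.log(q) - c) / C             # Equation 3.119
    return b0, b1
```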
Figure 3.5 Volatility and its predictive band based on the diffusion limit of the M-Garch(1,1) [figure: realised volatility (%) against time, plotted together with the upper and lower bounds of the prediction interval]
4% in the second one. This situation indicates that the widths of the
two intervals are not properly calibrated.
The numbers in this example signal that (for reasons which will
become clear later) the width of the first interval is excessive, while
the second interval is too narrow. Recalling from Section 3.2 that the
model for the automatic asset manager assumes the use of the whole
predetermined risk budget, it is clear that the player who operates on
the first interval has at their disposal a boundless scope in deciding
inter-temporal asset allocation, while for the player who operates
on the second interval the space of possible management choices
may be too limited. In the real world, the effect of this imbalance
between the two intervals is likely to be a systematic selection of the
first interval and the consequent disappearance of the second one.
The example above shows the fundamental aspect of the concept
of market feasibility. Even if, for ease of illustration, so far the mar-
ket feasibility has been often presented as a property of the single
volatility interval, in reality it is a requirement that becomes mean-
ingful (and, therefore, must be verified) for each interval only in
comparison with the whole set of volatility intervals in a grid.
An incidence of management failures equal to 4% may seem low
enough to be compatible with the market feasibility of the second
interval, but it cannot be assessed independently. And, in fact, a
comparison with that of the first interval shows that, overall, the grid
does not realise a market-feasible partition of the space of possible
volatilities.
This aids a better understanding of why in the optimal grid the
management failures must be more or less the same for all intervals.
In fact, if the intervals of a grid had a different number of management failures, the grid would not be optimal, even if each single interval were market feasible.
The modelling assumptions on the evolution of the volatility of a
hypothetical product managed by an automatic asset manager imply
that, in the simulation of the volatility realised by products with
different risk budgets, the market shocks have identical arrival times
and are filtered according to these different risk budgets.
Consequently, different incidences of management failures across
these products (even though all failures are apparently contained)
show a different sensitivity by the corresponding risk budgets to
changes in market conditions: some will register an overreaction
where each element (σ₁, …, σ_{n−1}) ∈ [0, +∞[^{(n)} partitions [0, +∞[ into n risk budgets, that is, into n consecutive volatility intervals, ie

[0, σ₁], [σ₁, σ₂], …, [σ_{n−1}, +∞[

where

σ^{(i,j)}_{k+1} is the volatility realised at day k + 1,

σ^{(i,j)}_{G,k+1,min} and σ^{(i,j)}_{G,k+1,max} are the bounds of the prediction interval for the volatility at k + 1.
Proposition 3.30. Given an element (σ₁, …, σ_{n−1}) ∈ [0, +∞[^{(n)}, the total number of management failures occurring in correspondence of each volatility interval is denoted by mf^{(j)}, j = 1, …, n, and is equal to

mf^{(j)} = Σ_{i=1}^{m} Σ_{k+1=τ+K}^{N} 1{ σ^{(i,j)}_{k+1} < σ^{(i,j)}_{G,k+1,min} } ∪ { σ^{(i,j)}_{k+1} > σ^{(i,j)}_{G,k+1,max} }   (3.126)

Proposition 3.31. Given an element (σ₁, …, σ_{n−1}) ∈ [0, +∞[^{(n)}, the percentage of management failures occurring in correspondence of each volatility interval is denoted by %mf^{(j)}, j = 1, …, n, and is equal to

%mf^{(j)} = mf^{(j)} / (m · M)   (3.127)
The calibration of the intervals into which to divide a given space
of possible volatilities requires the solution of a stochastic non-linear
programming problem that depends both on the number of intervals
and on the extremes that identify them. In fact, the market feasibility
must be sought simultaneously for an entire n-tuple of risk budgets
in the perspective of finding that value of n and that vector of volatil-
ities which meet the requirements set forth in points (1) and (2) of
Section 3.5, namely the exclusion of an abnormal number of man-
agement failures for each interval and the substantial equality in the
number of failures for all intervals.
At first glance, it seems that the two conditions require the solution of two different problems: the first is an optimisation problem that requires minimising a suitable monotonically increasing function of the management failures of all intervals, while the second is a constrained problem which requires finding the grid that, except for some marginal differences, equalises the number of management failures across all the volatility intervals within it.
However, the close interconnection and equivalence of the two
problems will be shown. This can be anticipated in the light of the
arguments set out in Section 3.5.
For any n, since the intervals dividing the space of the possible
volatilities are consecutive, they all have one extreme in common.
As the management failures of any interval are directly proportional
to its width (as explained in Section 3.5.2), attention must be paid to
the fact that the revision of an interval is reflected, all things being
equal, on its two neighbours.22
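In code, the building blocks of this calibration are straightforward. The sketch below (Python; function names and both thresholds are illustrative assumptions) counts the management failures of one interval, as in the indicator of Equation 3.126, and tests a candidate grid against the two requirements of Section 3.5:

```python
import numpy as np

def management_failures(realised, lo, hi):
    """Days on which the realised volatility exits the prediction interval."""
    realised, lo, hi = map(np.asarray, (realised, lo, hi))
    return int(((realised < lo) | (realised > hi)).sum())

def grid_is_acceptable(pct_failures, abnormal=0.10, tol=0.02):
    """Point (1): no interval with an abnormal incidence of failures;
    point (2): near-equality of the incidences across intervals."""
    return (max(pct_failures) <= abnormal
            and max(pct_failures) - min(pct_failures) <= tol)
```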
The various steps for the calibration of the optimal grid are given
in the following sections.
To this end, some basic results are presented in Section 3.5.2. These
results are then used to identify a non-abnormal homogeneous grid
on a reduced space of volatilities which is strictly contained in the
space [0, +∞[ (Section 3.5.3). Finally, Section 3.5.4 returns to the
entire space of possible volatilities, and shows how, in order for it to
admit a grid with the same properties as that obtained on the reduced
space, the first and the last interval should be determined accord-
ing to criteria independent of any consideration on the incidence
of management failures. Once these two intervals have been suit-
ably determined, the identification of the grid will be closely related
where

λ = σ_C/σ_A = σ_D/σ_B   (3.134)
or, equivalently

σ_C = λσ_A
σ_D = λσ_B   (3.135)

σ̄_{[σ_C,σ_D]} = λ σ̄_{[σ_A,σ_B]}   (3.138)

σ̄²_{[σ_C,σ_D]} = λ² σ̄²_{[σ_A,σ_B]}   (3.139)
[σ_C², σ_D²] = [λ²σ_A², λ²σ_B²]
and

S^{(i,[σ_C,σ_D])}_{t_k} − S^{(i,[σ_C,σ_D])}_{t_{k−1}} ≈ r S^{(i,[σ_C,σ_D])}_{t_{k−1}} Δt_k + λ σ^{(i,[σ_A,σ_B])}_{t_k} S^{(i,[σ_C,σ_D])}_{t_{k−1}} ΔW_{t_k}
R^{(i,[σ_C,σ_D])}_{t_k} ≈ λ R^{(i,[σ_A,σ_B])}_{t_k}

λ[σ_A, σ_B] = [σ_C, σ_D]   (3.132)
Moreover

O_k, A_{k+1}(β₁), Ā(β₁) and Q(Y; β₁) are used to denote the quantities expressed respectively by Equations 3.93, 3.94, 3.97 and by Equation 3.118 referring to the risk budget [σ_A, σ_B],

Equation 3.134 implies that

[σ_C, σ_D] = λ[σ_A, σ_B]   (3.160)

so that:

L^{(λ)}(Y; β₀, β₁)
A^{(λ)}_{k+1}(β₁) ≈ A_{k+1}(β₁) + (1 − e^{β₁−1}) ln(λ²)/f(β₁)   (3.167)

Substituting the right-hand side of Equation 3.167 into the right-hand side of Equation 3.163 and applying Equation 3.97 leads to

Ā^{(λ)}(β₁) ≈ Ā(β₁) + (1 − e^{β₁−1}) ln(λ²)/f(β₁)   (3.168)

In addition, from Equations 3.167 and 3.168 it follows that

A^{(λ)}_{k+1}(β₁) − Ā^{(λ)}(β₁) ≈ A_{k+1}(β₁) − Ā(β₁)   (3.169)

which clearly implies that, except for marginal differences, the point of minimum β̂₁^{(λ)} of the function Q^{(λ)}(Y; β₁) coincides with the point of minimum β̂₁ of the function Q(Y; β₁), ie

β̂₁^{(λ)} ≈ β̂₁   (3.170)

Similarly to Equation 3.119, the point of minimum β̂₀^{(λ)} of the function M^{(λ)}(Y; β₀, β₁) is defined by

β̂₀^{(λ)} = cβ̂₁^{(λ)} + (1/C(β̂₁^{(λ)})) [ Ā^{(λ)}(β̂₁^{(λ)}) + ln( (1/(K − 1)) Σ_{k=1}^{K−1} exp( A^{(λ)}_{k+1}(β̂₁^{(λ)}) − Ā^{(λ)}(β̂₁^{(λ)}) ) ) − c ]   (3.171)

which, by exploiting Equations 3.168, 3.169 and 3.170, becomes

β̂₀^{(λ)} ≈ cβ̂₁ + (1/C(β̂₁)) [ Ā(β̂₁) + ln( (1/(K − 1)) Σ_{k=1}^{K−1} exp( A_{k+1}(β̂₁) − Ā(β̂₁) ) ) − c ] + (1/C(β̂₁)) (1 − e^{β̂₁−1}) ln(λ²)/f(β̂₁)
where, by Equation 3.119, the first two terms of the right-hand side correspond to β̂₀ and, thus,

β̂₀^{(λ)} ≈ β̂₀ + (1/C(β̂₁)) (1 − e^{β̂₁−1}) ln(λ²)/f(β̂₁)   (3.172)

which, recalling Equations 3.92 and 3.96, becomes

β̂₀^{(λ)} ≈ β̂₀ + (1 − β̂₁) ln(λ²)   (3.173)

By making the appropriate substitutions in Equation 3.53, it follows that

with regard to the risk budget [σ_A, σ_B]

ln σ²_{k+1} | F_k ∼ N(μ; SD)   (3.174)

where

μ = e^{β̂₁−1} ln σ_k² + (1 − e^{β̂₁−1}) (β̂₀ − cβ̂₁)/(1 − β̂₁)   (3.175)

SD = f(β̂₁) d   (3.176)
Proof The critical arguments for the proof are contained in the proof
of Theorem 3.36.
(3.184)

where each element (σ₁, …, σ_{n−1}) ∈ [σ₀, σ_n]^{(n)} partitions [σ₀, σ_n] into n risk budgets, that is, into n consecutive volatility intervals, ie

[σ₀, σ₁], [σ₁, σ₂], …, [σ_{n−1}, σ_n]   (3.185)
which implies

%mf^{(λ)} = max_{l=0,1,…,n−1} %mf^{(λ)}_{[σ_l,σ_{l+1}]}   (3.194)

which leads to

(1/n) Σ_{b=0}^{n−1} %mf_{[σ_b,σ_{b+1}]} ≤ max_{l=0,1,…,n−1} %mf_{[σ_l,σ_{l+1}]}   (3.196)
which implies

%mf ≤ max_{l=0,1,…,n−1} %mf_{[σ_l,σ_{l+1}]}   (3.199)

and the comparison between Equations 3.194 and 3.199 returns the first part of the thesis, ie

max_{l=0,1,…,n−1} %mf^{(λ)}_{[σ_l,σ_{l+1}]} ≤ max_{l=0,1,…,n−1} %mf_{[σ_l,σ_{l+1}]}   (3.191)

and therefore

%mf < max_{l=0,1,…,n−1} %mf_{[σ_l,σ_{l+1}]}   (3.204)
that is

%mf^{([σ_l,σ_{l+1}])} = max_{l=0,1,…,n−1} %mf^{([σ_l,σ_{l+1}])}   (3.213)

where

σ₀^{(λ)} = σ₀ and σ_n^{(λ)} = σ_n

are related by

σ_l^{(λ)} = λσ_l for all l = 0, 1, …, n   (3.190)
annual percentage loss26 between 30% and 50% of the invested cap-
ital depending on the levels and the volatility of the risk-free rates
and on the confidence level used.
With regard to σ₁, whatever the optimal n, the requirement of an increasing absolute width of the intervals suggests that this value should be chosen consistently with an absolute width that remains limited for the first interval. At the same time, the value chosen for σ₁ must ensure the representativeness of the first interval, even considering that it is associated with the lowest degree of risk and that this interval must contain the volatilities empirically observed in typical low-risk products such as money market funds. A value of σ₁ equal to 0.25% is in line with these requirements, because it ensures that the first interval is fairly narrow but representative of a risk budget realistic for the safest products, as results from market data on money market funds and similar investments.
Once σ₁ and σ_{n−1} are chosen, and given the requirement of constant relative width for all intermediate intervals, for determining the optimal grid on the space of volatilities [0, +∞[ it is sufficient to identify the optimal number of intervals, n*.
The ultimate goal of the entire methodology for the calibration
of the grid is to provide an indication of the overall riskiness of a
non-equity product through the mapping of the volatility intervals
into a qualitative scale composed of highly informative adjectives
for the average investor.
In this context, the total number of intervals must first reach a
good compromise between the complexity of the phenomenon to
be represented (and, therefore, the level of detail of the information
offered) and the need for a clear and immediate understanding by
investors who are interested in grasping the differences in the degree
of risk across products.
These considerations suggest it would be wise to put a constraint
on the minimum and maximum number of intervals of the optimal
grid. In particular, it seems reasonable that the number of risk classes
is between five and seven.
A grid with fewer than five classes would convey too little infor-
mation and, by combining in the same class products with hetero-
geneous risk profiles, would prevent the investor from identifying
those investment proposals that are actually in line with their risk
appetite.
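The structure just described (a narrow first interval starting at σ₁ = 0.25%, an unbounded last interval and intermediate intervals of constant relative width) can be sketched as follows (Python; the value chosen for σ_{n−1} is a purely illustrative placeholder):

```python
import numpy as np

def volatility_grid(sigma_1=0.0025, sigma_n_1=0.70, n=7):
    """n consecutive volatility intervals: [0, sigma_1], then intermediate
    intervals with constant relative width (geometric spacing), then
    [sigma_{n-1}, +inf[.  sigma_1 = 0.25% follows the text; sigma_{n-1}
    is an assumption for illustration only."""
    interior = np.geomspace(sigma_1, sigma_n_1, n - 1)
    edges = np.concatenate(([0.0], interior, [np.inf]))
    return list(zip(edges[:-1], edges[1:]))
```

With n = 7 this produces seven consecutive intervals whose interior extremes grow geometrically, which is exactly what constant relative width means.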
[Table 3.1: mapping of the risk classes into volatility intervals (min–max, %).]
and a reliable model that forecasts the future behaviour of this financial variable by using the distributive properties of the diffusion limit of the M-Garch(1,1).
To ensure the accuracy and comparability of information to
investors, the methodological assumptions about the depth of the
observation window of the returns and their sampling frequency
should guide the procedures for determining the risk class of newly
issued products and for its possible revisions over time.
In fact, the methodological assumptions contribute, together with
the characteristics of financial engineering, to the definition of crite-
ria of general validity for a proper classification of the initial degree
of risk of any non-equity product.
The classification is immediate in the case of products pursuing a target risk that will inspire their investment policy and management techniques. To identify the degree of risk it is sufficient to express this target in terms of a point value (or a range) of annualised volatility associated with the possible daily returns of the product, and to see where this target is placed among the seven intervals that comprise the optimal grid.
It should be stressed that, for this type of product, in the absence
of data on historical performance (and thus also on the volatility
actually realised in the past) solutions based on simulation of the
volatility of their potential returns would produce values consistent
with the preset target risk, given the invariance properties of the
main continuous-time stochastic models that are used in practice to
implement this kind of simulation.
The simulative solution becomes useful in the particular case in which the target risk of the product identifies a range of volatility values that partially overlaps two or more intervals of the optimal grid.
Above all, however, simulative solutions are of paramount impor-
tance in determining the initial risk class of benchmark and return-
target products.
In benchmark products these solutions require the development
in advance of specific stochastic volatility models calibrated on the
term structure of the volatilities of the benchmark adopted. In addi-
tion, where an active management style is provided (and not the pure
replication of the benchmark), models must be properly integrated
to reflect more or less important deviations from the benchmark that
[Figure: simulated bond value (from 20 to 120) against time (0–5 years).]
and

(1/K) Σ_{l=0}^{K−1} 1{ σ_{k−l} ∈ [σ^{(j)}_{min}, σ^{(j)}_{max}] } = max_{h=1,2,…,7} (1/K) Σ_{l=0}^{K−1} 1{ σ_{k−l} ∈ [σ^{(h)}_{min}, σ^{(h)}_{max}] }   (3.218)

where

σ_{k−l} = √( (252/τ) Σ_{s=k−l−τ+1}^{k−l} (R_s − R̄_{k−l})² )

R_s is the logarithmic return realised by a non-equity product at the day s;

R̄_{k−l} = (1/τ) Σ_{s=k−l−τ+1}^{k−l} R_s

τ = 252.
in relation to the pattern of the product's volatility (ie, they are not
path dependent).
Consequently, the violation of any of these intervals can be con-
sidered persistent if it lasts for a sufficiently long period of time. In
particular, three months is an appropriate time reference, since, com-
pared with the prediction intervals obtained from Garch diffusions,
the intervals of the optimal grid have constant and non-adaptive
width (which increases the chances of exceeding their extremes),
but they are also wider in order to allow a reasonable leeway in
handling the ordinary activity of an asset manager or ordinary and
temporary movements of the reference markets of the product.
Because it is consistent with the assumptions used in the calibration of the optimal grid, the concept of migration given by Definition 3.43 ensures the timely updating of the information on the degree of risk.
A time rule of less than three months could entail, all things being equal, an excessive number of migrations, many of them spurious because they are not attributable to stable changes in the risk profile of the product. The three-month rule prevents such fictitious instability of the degree of risk and, therefore, excludes an information set that would be barely reliable, of little use to investors and a possible source of difficulty for normal asset-management activity. On the other hand, an observation period for the volatility longer than three months would introduce inertia in the risk measurement and representation, again invalidating the significance of the message conveyed to investors.
For similar reasons, in Definition 3.43 the volatility data is calculated using the daily returns realised over the last year (ie, τ = 252).
The adoption of a larger basis (eg, weekly or even monthly) for
calculating returns would produce a volatility smoothing, that is,
a reduction of its variability and the containment of this variable
within narrower margins. Similarly, a time window for observing
returns longer than one year would also favour a smoothing of the
data on volatility, because information on the recent dynamics of the
products risk would be mediated (and thus partially compensated)
by that further back in time. The net effect would be a delay in
detecting cases of migration and, therefore, in updating the degree
of risk with a systematic underestimation or overestimation of the
actual risk of products.
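The migration rule of Definition 3.43 can be sketched numerically as follows (Python; the function names are assumptions, and three months is proxied by 63 trading days):

```python
import numpy as np

TRADING_DAYS = 252
PERSISTENCE = 63            # roughly three months of trading days

def annualised_vol(returns, tau=TRADING_DAYS):
    """Rolling annualised volatility of daily log-returns over the last
    tau days, as in the definition of sigma_{k-l} above."""
    r = np.asarray(returns)
    out = np.full(r.size, np.nan)
    for k in range(tau - 1, r.size):
        w = r[k - tau + 1:k + 1]
        out[k] = np.sqrt(TRADING_DAYS * np.mean((w - w.mean()) ** 2))
    return out

def detect_migration(vols, lo, hi, persistence=PERSISTENCE):
    """Flag a migration when the volatility stays outside [lo, hi] for
    `persistence` consecutive observations (the three-month rule)."""
    outside = 0
    for v in vols:
        if np.isnan(v):
            continue
        outside = outside + 1 if (v < lo or v > hi) else 0
        if outside >= persistence:
            return True
    return False
```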
Figure 3.7 shows the migrations of the equity index Standard & Poor's 500 that occurred over the period January 2001–January 2011, as determined in line with Definition 3.43.
Figure 3.8 shows the migrations of the same equity index determined according to a grid different from the one reported in Figure 3.1 and thus compliant with neither the requirements of non-abnormality and homogeneity of the management failures nor that of an increasing absolute width of the volatility intervals (eg, the fourth and the fifth intervals are equally wide). Each value of annualised volatility is obtained from the weekly returns of the index over the last five years. Migrations occur when the volatility lies outside the original interval for four consecutive months.
The areas of different colours in the two figures indicate the inclu-
sion in a different risk class, and, thus, every alternation of colours
corresponds to a migration event.
As expected, the two figures offer quite different representations of the evolution of the index's risk profile over the analysed time horizon.
Figure 3.7 emphasises that over the entire period the degree of risk of the index has been remarkable: the volatility has always been in the last two risk classes (sixth and seventh), which, as in Table 3.1, are labelled respectively with the adjectives 'high' and 'very high'. In particular, there is a clear and direct correspondence between the clusters of highly variable returns and the times in which the associated annualised volatility reached its upward peaks. On the contrary, when daily returns showed a fairly regular and essentially flat pattern, the volatility values were lower (typically around 10%), even if always consistent with a high degree of risk.
It is evident that the combination of the optimal grid with a suit-
able rule for detecting migrations definitely allows the prompt iden-
tification of the alternation of different volatility regimes and, hence,
also the effective movements across qualitative classes, ensuring the
persisting meaningfulness of the information about the degree of
risk.
Specifically, Figure 3.7 reveals that four migrations occurred over
a period of nearly 10 years, meaning, on average, around two migra-
tions every five years, which is also in line with the regulatory prac-
tices adopted in many countries which require an annual update of
information provided by precontractual disclosure documentation.
Figure 3.7 Migrations of the Standard & Poor's 500 with respect to the optimal grid (January 2001–January 2011) [panels: (a) index level; (b) daily returns (%); (c) annualised volatility (%), with the grid extremes 4%, 10%, 25% and 40% marked]
Figure 3.8 Migrations of the Standard & Poor's 500 with respect to a non-optimal grid (January 2001–January 2011) [panels: (a) index level; (b) daily returns (%); (c) annualised volatility (%), with the grid extremes 10%, 15%, 25% and 40% marked]
Despite the fact that the two grids have the same number of risk classes, the intervals mapped into these classes are quite heterogeneous. Focusing just on the fifth and sixth classes, the optimal grid gives

  5th class: 4–10%
  6th class: 10–25%

while the grid behind Figure 3.8 gives

  5th class: 10–15%
  6th class: 15–25%
Recalling that the intervals of the above grid are identified by val-
ues of annualised volatilities of weekly returns, while the intervals of
the optimal grid are identified by values of annualised volatilities of
daily returns, it is interesting to observe that the upper extreme of the
fifth class of the optimal grid coincides with the lower extreme of the
same class in the alternative grid. This implies material differences
in the risk attribution and also in detecting migrations. Specifically,
the latter grid requires much higher volatilities than the former to
fit a product in the upper risk classes and it is reasonable to guess
that the first four classes are also identified according to a similar
criterion, which is likely to favour the concentration of the prod-
ucts in the first classes, namely those which represent relatively safe
investment proposals.
1 The recommended investment time horizon is the third pillar of the risk-based approach and
the methodology for its determination will be presented in Chapter 4.
2 This issue is addressed explicitly for the second pillar since, for the other two pillars of the risk-
based approach, the criteria according to which one can determine whether an information
update is necessary are clear.
3 The value of ν which satisfies Equation 3.4 is determined numerically through a root-solving routine available in most statistical software.
4 The risk-free rate is assumed to be constant. It is worth pointing out that, by using stochastic
models of the term structure of interest rates, the optimal grid does not undergo significant
changes.
5 For the sake of simplicity no correlation is assumed, ie, ρ = 0. However, specific sensitivity analysis highlighted that the outcome of the calibration procedure is almost invariant with respect to the value assigned to this parameter.
6 In particular, the use of overly long datasets means that the latest information on develop-
ments in the riskiness of the products are averaged with those further back in time, resulting
in a smoothing of the volatility value. For more details about this see Section 3.7.
7 It is an additive model in the logarithm of the variance. See Geweke (1986), Pantula (1986)
and Mihoj (1987).
8 A similar approach inspired the model for market abuse detection developed by Minenna
(2003).
9 The subscripts denote that the Markov chain moves from k with a time interval of 1.
10 By Equation 3.12 ln σ²_{k+1} is F_k-measurable, which requires conditioning with respect to the sigma algebra F_{k−1}.
11 The subscripts denote that the Markov chain moves from kh with a time interval of h.
12 By Equation 3.14 ln σ²_{(k+1)h} is F_{kh}-measurable, which requires conditioning with respect to the sigma algebra F_{(k−1)h}.
13 The subscripts denote that the Markov chain moves from s with a time interval of h.
14 As ln σ²_{h,t} is F^h_s-measurable, it is necessary to condition it with respect to the sigma algebra F^h_{s−h}.
15 Unlike the model for the simulation of the hypothetical product managed by an automatic
asset manager which is defined under the risk-neutral measure Q, the prediction made using
the diffusion limit of the M-Garch(1,1) is, by construction, a forecast issued under the real-
world measure, ie, the measure under which each simulated trajectory of volatility is consid-
ered when it becomes the trend realised by the past volatility, with respect to which predictions
on the future values of this variable have to be made.
16 In equivalent terms, the probability measure P^{ln σ_t²} is the only solution to the martingale problem defined by the coefficients (β₀ + 2β₁ E(ln |Z_t|) + (β₁ − 1) ln σ_t²) and 4β₁² var(ln |Z_t|) and by the initial condition ln σ₀² = l₀. For more details see Stroock and Varadhan (1979).
17 As pointed out by Nelson (1990), the weak convergence implies, for example, that, given any sequence of times t₁, t₂, …, t_N > 0, the joint probability distribution of

{ln σ²_{h,t₁}, ln σ²_{h,t₂}, …, ln σ²_{h,t_N}}
volatility model referred to in Section 3.3 results in a more or less significant violation of
the chosen confidence level. The numerical tests showed that, across all eligible Ks (namely
21, 42, 63, . . . , 126) such violations are minimal for K = 63.
22 More specifically, the revision of the first or the last interval influences only one other interval
of the grid, while interventions on any of the intermediate intervals necessarily affect two
other intervals.
23 The proof that if Equation 3.192 holds then Equation 3.191 must be an equality is immediate.
25 For any n, the relative width of the first interval is σ₁/σ₀ = +∞ and the relative width of the last interval is +∞/σ_{n−1} = +∞.
26 The one-year time horizon for the calculation of the loss is due to the fact that the intervals of
the optimal grid are expressed in terms of annualised volatility.
27 The numbers in this figure have been rounded.
28 As seen in Section 3.5.4, these requirements only belong to the intermediate intervals and not
the first and seventh volatility intervals, which, compared with the others, are less exposed to
migration risk because the volatility can vary only upwards (in the case of the first interval)
and downwards (in the case of the last interval).
S₀ = NC − ic   (4.2)

and where

ic > 0 are the initial costs charged,

r is the risk-free rate,

rc denotes the constant running costs taken on a continuous-time basis,

σ > 0 is the volatility of the product,

{W_t}_{t≥0} is a standard Brownian motion under Q.
obtained that allow the time horizon and the associated level of significance (expressed in terms of cumulated probability) to be ascertained as minimum values that depend on the cost regime and on the volatility of any product and that satisfy the principle 'more volatility, more time'.
Hence

lim Q(S_t ≥ NC) = 0   (4.10)

By direct calculation

lim_{t→0} Q(S_t ≥ NC) = lim_{t→0} N( ((r − rc − ½σ²)t − ln[NC/S₀]) / (σ√t) ) = N[−∞]   (4.11)

Hence

lim_{t→0} Q(S_t ≥ NC) = 0   (4.12)

By direct calculation

lim_{t→∞} Q(S_t ≥ NC) = lim_{t→∞} N( ((r − rc − ½σ²)t − ln[NC/S₀]) / (σ√t) ) = lim_{t→∞} N( ((r − rc − ½σ²)/σ) √t )

What happens to the limit depends on the sign of the quantity (r − rc − ½σ²).
Figure 4.1 Plot of the function Q(S_t ≥ NC) with respect to time t (ic = 2%, r − rc = 3.5%) [curves for σ = 5%, 15%, 25%, 35%, 45%; t from 0 to 100 years]

If r − rc − ½σ² > 0, then

lim_{t→∞} Q(S_t ≥ NC) = N[+∞]

So

lim_{t→∞} Q(S_t ≥ NC) = 1 for r − rc − ½σ² > 0   (4.13)

If r − rc − ½σ² = 0, then

lim_{t→∞} Q(S_t ≥ NC) = N[0]

So

lim_{t→∞} Q(S_t ≥ NC) = ½ for r − rc − ½σ² = 0   (4.14)

If r − rc − ½σ² < 0, then

lim_{t→∞} Q(S_t ≥ NC) = N[−∞]

So

lim_{t→∞} Q(S_t ≥ NC) = 0 for r − rc − ½σ² < 0   (4.15)
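The behaviour summarised by Equations 4.13–4.15 is easy to reproduce. The following sketch (Python; defaults mirror the ic = 2% and r − rc = 3.5% of Figure 4.1, with NC normalised to 1, all as illustrative assumptions) evaluates the cost-recovery probability at a given horizon:

```python
import math

def cost_recovery_prob(t, sigma, r=0.035, rc=0.0, ic=0.02, nc=1.0):
    """Q(S_t >= NC): probability that the product value exceeds the
    nominal capital NC at time t (cf the right-hand side of Equation 4.5)."""
    if t <= 0.0:
        return 0.0
    s0 = nc - ic
    z = ((r - rc - 0.5 * sigma ** 2) * t - math.log(nc / s0)) / (sigma * math.sqrt(t))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# As t grows this tends to 1, 1/2 or 0 according to the sign of
# r - rc - sigma**2/2 (Equations 4.13-4.15).
```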
∂/∂t N( ((r − rc − ½σ²)t − ln[NC/S₀]) / (σ√t) )
= N′( ((r − rc − ½σ²)t − ln[NC/S₀]) / (σ√t) ) ( (r − rc − ½σ²)/(2σ√t) + ln[NC/S₀]/(2σ√t³) )   (4.16)

∂/∂σ N( ((r − rc − ½σ²)t − ln[NC/S₀]) / (σ√t) )
= N′( ((r − rc − ½σ²)t − ln[NC/S₀]) / (σ√t) ) ( −(r − rc)√t/σ² − √t/2 + ln[NC/S₀]/(σ²√t) )   (4.17)
Corollary 4.6. The sign of the first partial derivative of the cumulative probability, as expressed in the right-hand side of Equation 4.5, with respect to t is characterised as follows

∂/∂t N( ((r − rc − ½σ²)t − ln[NC/S₀]) / (σ√t) ) > 0 for (r − rc − ½σ²) ≥ 0   (4.18)

∂/∂t N( ((r − rc − ½σ²)t − ln[NC/S₀]) / (σ√t) ) > 0 for r − rc − ½σ² < 0, t < −ln[NC/S₀]/(r − rc − ½σ²)   (4.19)

∂/∂t N( ((r − rc − ½σ²)t − ln[NC/S₀]) / (σ√t) ) ≤ 0 for r − rc − ½σ² < 0, t ≥ −ln[NC/S₀]/(r − rc − ½σ²)   (4.20)
Q(τ_{S,NC} ≤ t)   (4.26)
Figure 4.2 Plot of the function Q(τ_{S,NC} ≤ t) with respect to the time t (ic = 2%, r − rc = 3.5%) [single curve for σ = 5.15%; t from 0 to 10 years]
of this section will provide some useful practical advice for building a procedure aimed at determining the minimum recommended time horizon in a discrete-time, discrete-volatility environment.
Remark 4.8. From the previous definition and from the continuity of the trajectories of the Brownian motion, the following equality holds

W_{τ_{W,y}} = y   (4.28)

Definition 4.9. The process of the maximum of a standard Brownian motion is defined as

M_t = max_{s∈[0,t]} W_s   (4.29)
and therefore it is

∫_{{Z₁ ≤ y, M_t ≥ y}} g(Z₁) dP = ∫_{{Z₂ ≤ y, M_t ≥ y}} g(Z₂) dP   (4.34)

The event {M_t ≥ y} is the disjoint union of the two events {W_t ≤ y, M_t ≥ y} and {W_t > y, M_t ≥ y}. Hence,

P(W_t ≤ y, M_t ≥ y) = P(2y − W_t ≤ y, M_t ≥ y)
Hence

P(τ_{W,y} ≤ t) = P(W_t ≥ y) + P(W_t > y)

As P(W_t = y) = 0, the previous equation becomes

P(τ_{W,y} ≤ t) = 2P(W_t ≥ y)

Y_t = μt + σW^P_t   (4.40)

where σ > 0 and {W^P_t}_{t≥0} is the standard Brownian motion on the same probability space.
Definition 4.14. Let y > 0. The first-passage time of the arithmetic Brownian motion for the barrier y is defined as

Ỹ_t = νt + W^P_t   (4.43)

where

ν = μ/σ   (4.44)
can be re-expressed as

P(Y_t ≤ y, M_t ≥ y) = ∫_{{Y_t ≤ y, M_t ≥ y}} exp( νY_t − ½ν²t ) dP̃
The event {M_t ≥ y} is the disjoint union of the two events {Y_t ≤ y, M_t ≥ y} and {Y_t > y, M_t ≥ y}. Hence,
and hence

P(τ_{Y,y} ≤ t) = P(τ_{Ỹ,y/σ} ≤ t)   (4.59)
Proposition 4.24. Let {S_t}_{t≥0} be the stochastic process of the product's value which, according to Definition 4.1, is described by the following stochastic differential equation under the risk-neutral probability measure Q

dS_t = (r − rc) S_t dt + σ S_t dW_t

S₀ = NC − ic   (4.2)

Then, the closed formula for the cumulative probability of the first hitting times of this process for the barrier NC is given by

Q(τ_{S,NC} ≤ t) = N( ((r − rc − ½σ²)t − ln[NC/S₀]) / (σ√t) ) + exp( 2(r − rc − ½σ²) ln[NC/S₀] / σ² ) N( (−(r − rc − ½σ²)t − ln[NC/S₀]) / (σ√t) )   (4.71)
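Equation 4.71 translates directly into code. A minimal sketch follows (Python; parameter defaults are again the illustrative ones used for the earlier figures):

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def hitting_prob(t, sigma, r=0.035, rc=0.0, ic=0.02, nc=1.0):
    """Q(tau_{S,NC} <= t) from Equation 4.71: the probability that the
    product value touches the cost-recovery barrier NC by time t."""
    if t <= 0.0:
        return 0.0
    s0 = nc - ic
    m = r - rc - 0.5 * sigma ** 2          # drift of ln S under Q
    b = math.log(nc / s0)                  # log-distance to the barrier
    st = sigma * math.sqrt(t)
    return (phi((m * t - b) / st)
            + math.exp(2.0 * m * b / sigma ** 2) * phi((-m * t - b) / st))
```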
lim_{t→∞} P(σ, t)
= lim_{t→∞} N( ((r − rc − ½σ²)t − ln[NC/S₀]) / (σ√t) ) + exp( 2(r − rc − ½σ²) ln[NC/S₀] / σ² ) N( (−(r − rc − ½σ²)t − ln[NC/S₀]) / (σ√t) )
= N(+∞) + exp( 2(r − rc − ½σ²) ln[NC/S₀] / σ² ) N(−∞)   (4.74)

By using the asymptotic properties of the standard normal variable

lim_{t→∞} P(σ, t) = 1 for r − rc − ½σ² > 0

Similarly, for r − rc − ½σ² = 0

lim_{t→∞} P(σ, t) = lim_{t→∞} N( ((r − rc − ½σ²)t − ln[NC/S₀]) / (σ√t) ) + exp( 2(r − rc − ½σ²) ln[NC/S₀] / σ² ) N( (−(r − rc − ½σ²)t − ln[NC/S₀]) / (σ√t) ) = N(0) + e⁰ N(0)   (4.75)

Hence, recalling that N(0) = ½,

lim_{t→∞} P(σ, t) = 1 for r − rc − ½σ² = 0
lim_{t→∞} P(σ, t)
= lim_{t→∞} N( ((r − rc − ½σ²)t − ln[NC/S₀]) / (σ√t) ) + exp( 2(r − rc − ½σ²) ln[NC/S₀] / σ² ) N( (−(r − rc − ½σ²)t − ln[NC/S₀]) / (σ√t) )
= N(−∞) + exp( 2(r − rc − ½σ²) ln[NC/S₀] / σ² ) N(+∞)   (4.78)
[Figure: P(σ, t) against t (years, 0–60) for σ = 0.15%, 2.65%, 5.15%, 10.15%, 22.65%, 35.15% and 92.65%; the vertical axis runs from 0.90 to 1.00.]
Table 4.1 Maximum attainable probability for the cost recovery event (r − rc = 1.5%)

σ (%)      ic = 1%    ic = 2%    ic = 3%
1.6        1          1          1
4          1          1          1
10         1          1          1
25         0.9916     0.9832     0.9747
40         0.9906     0.9812     0.9718
90         0.9901     0.9802     0.9704
Table 4.2 Maximum attainable probability for the cost recovery event (r − rc = 0.5%)

σ (%)      ic = 1%    ic = 2%    ic = 3%
1.6        1          1          1
4          1          1          1
10         1          1          1
25         0.9948     0.9895     0.9843
40         0.9919     0.9837     0.9756
90         0.9904     0.9807     0.9711
exp( 2(r − rc − ½σ²) ln[NC/S₀] / σ² ) = exp( ln(NC/S₀) (2(r − rc)/σ² − 1) ) = (NC/S₀)^{(2(r−rc)/σ²)−1}

so that, differentiating with respect to σ,

∂/∂σ exp( 2(r − rc − ½σ²) ln[NC/S₀] / σ² ) = exp( 2(r − rc − ½σ²) ln[NC/S₀] / σ² ) ln(NC/S₀) ( −4(r − rc)/σ³ )

which leads to

lim_{t→∞} ∂P(σ, t)/∂σ = −(4(r − rc)/σ³) ln(NC/S₀) (NC/S₀)^{(2(r−rc)/σ²)−1}   (4.79)
This finding suggests that if (r − rc) > 0, there should exist a finite time by which the first derivative with respect to the volatility is strictly negative, meaning that an upward movement in the volatility necessarily diminishes the probability of cost recovery in the weak sense. In other words, a positive drift leads to ∂P(σ, t)/∂σ < 0; therefore, for any fixed probability level, a product characterised by a high volatility should need more time to reach that confidence level with respect to a low-volatility product.
Intuitively, this recalls precisely the principle of direct proportionality between the volatility of the product and the corresponding minimum investment time horizon ('more volatility, more time') stated in Section 4.1. Hence, the open set of times that satisfy this principle (ie, the times where ∂P(σ, t)/∂σ < 0) may be properly defined as an admissible region of times for the search for the minimum recommended time horizon.
Continuing with this line of reasoning, with a negative drift it is likely that the condition ∂P(σ, t)/∂σ < 0 will never be satisfied, although at this stage of the analysis this is not a rigorous proof but still an intuition, since much of the behaviour of the cumulative probability function when t and σ assume finite values is not known. It can be argued that if the condition ∂P(σ, t)/∂σ < 0 is not satisfied for very large times, it will also not be satisfied for shorter times. In fact, by observing that a negative drift corresponds to an expected negative return, intuition suggests that in this condition an increase in the volatility can only increase the probability of hitting the barrier of cost recovery at any time (ie, ∂P(σ, t)/∂σ > 0 for all t > 0). It follows that, for (r − rc) < 0, no time will be eligible for the admissible region of times, since the principle 'more volatility, more time' would be systematically violated.
Moreover, it may be argued that, if every time contained in the admissible region satisfies by definition the principle just recalled, then the minimum admissible time can be an efficient choice to solve the problem of correctly identifying a particular level of confidence and, obviously, also the investment time horizon to be recommended to the investors.
In this context, and since the admissible region of times is characterised by the condition ∂P(σ, t)/∂σ < 0, it must also be verified whether this condition can occur even in the presence of a positive drift lower than ½σ². This verification requires the removal of the
Figure 4.4 Average path and first hitting time of two products with same drift but different volatilities [panels (a) and (b): product value (75–100) against t (years), with t_mean marked]
Figure 4.5 Trajectories and first hitting times of two products with same drift but different volatilities [panels (a) and (b): simulated trajectories of the product value (75–100) against t (years), shown over 20 years and, below, zoomed over 5 years, with t_mean marked]
As is clear from Figure 4.5(a), for the product with low volatility
the first-passage times are substantially concentrated around tmean
(mainly before it). In fact, almost all trajectories take more or less the
same time to reach the barrier and this time is close to that taken by
the average path. On the other hand a high volatility (Figure 4.5(b))
entails a much bigger variability of the trajectories of the product
and, thus, also a different behaviour of the first times at which they
fully recover the costs of the investment. In particular, a significant
number of trajectories touch the barrier in a very short time (this
happens when the high volatility works fast in favour of the cost-
recovery event), while for the remaining trajectories the first-passage
times become less and less frequent (this happens when the high
[Figure: histograms (frequencies ×10⁴) of the first-passage times against t (years) for the low-volatility product (a) and the high-volatility product (b), with t_mean marked.]
After this small period the drift effect is mostly exhausted and the curve becomes almost flat again, but now at significant confidence levels.
When the volatility is high the drift effect loses its relevance and
the volatility effect prevails in driving the behaviour of P( , t); as
shown in part (b) of Figure 4.7, the cumulative distribution function
reaches its maximum (positive) slope at the beginning and then it
continues to increase but more and more slowly until it arrives close
to its maximum attainable probability which is lower than the one
corresponding to the low-volatility product. A similar behaviour is
also retrieved when the drift is positive but different from (r rc).
Summarising the above, it is possible to conclude that in low-volatility environments the drift effect has a key role in identifying, albeit roughly, the minimum time required to achieve the break-even of the costs. In the limit, if there is no volatility at all, all the trajectories of the product are identical to each other and obviously coincide with the average trajectory. Thus, they will recover the costs at the same time and with probability 1. However, as soon as this limit case is left behind and a positive volatility is introduced, the drift effect alone is no longer sufficient to solve the problem of determining the minimum time horizon, for two reasons.

First, even for relatively low volatilities, when there is therefore a strong concentration of the first-passage times around the one corresponding to the average path, the solution of placing the minimum time horizon equal to tmean proves simplistic, because it does not guarantee a correct ordering of the products in accordance with the principle 'more volatility, more time'.

Second, when the volatility begins to be quite high, the importance of the drift effect disappears, and it is therefore necessary to find another criterion to determine the minimum time horizon.
For these reasons, the methodology that will be detailed in the following sections relies on the direct comparison between the cumulative distribution functions of products with different volatility levels. To understand the intuition behind this methodology, it is useful to consider again the two cumulative probability distributions in Figure 4.7, but now superimposed, as shown in Figure 4.8.

By observing Figure 4.8 it emerges that only for fixed probability levels greater than or equal to that identified by the intersection between the two curves, which is denoted by $(t_1, \alpha_1)$, does the principle 'more volatility, more time' hold.
[Figure 4.8: the cumulative distribution functions of the low- and high-volatility products superimposed; probability from 0 to 1 against t (years) from 0 to 20, with the intersection point $(t_1, \alpha_1)$ marked.]
[Figure: cumulative distribution functions for 'low', 'high' and 'very high' volatility; probability from 0 to 1 against t (years) from 0 to 20, with the pairwise intersection points $(t_1, \alpha_1)$ and $(t_2, \alpha_2)$ marked.]
In the search for a more efficient approach, the next step is to maintain the continuous volatility setting and to recover the key concept of the intersection between cumulative distribution functions, but according to a different, local perspective. Intuitively, this requires the exploration of the intersection point as the shift between two consecutive volatility values shrinks to zero. In fact, by intersecting the cumulative distribution functions of products having these two consecutive volatilities, the time at which the probability of cost recovery is the same for both products can easily be identified. From an analytical point of view this is equivalent to determining the minimum time horizon as that time at which the first partial derivative of the function $P(\sigma, t)$ with respect to the volatility is zero. In addition, the function evaluated at this minimum time horizon returns the minimum confidence level consistent with the need for a correct ordering of different products. Therefore, all things being equal, this minimum confidence level is specific to each product (or, equivalently, to each volatility value).

Following this conjecture, Sections 4.1.5.1–4.1.5.3 present a rigorous sensitivity analysis of the function $P(\sigma, t)$ that will lead to the statement (in Section 4.1.6) of a local theorem of existence and uniqueness of the minimum time horizon. Then, by studying (in Section 4.1.7) the behaviour of this minimum time horizon as a function of the volatility, strict conditions are presented in Section 4.1.8 in order to arrive at a result able to globally ensure compliance with the principle of a proper ordering of products.
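Before turning to the formal analysis, a small numerical sketch may help fix ideas. It assumes the standard first-passage probability of a geometric Brownian motion with drift $(r - r_c)$ through the barrier NC (the closed form whose asymptote is recalled in footnote 3), with the illustrative parameters used in the figures ($ic = 2\%$, $r - r_c = 3.5\%$); all function names are hypothetical.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

S0, NC, R = 98.0, 100.0, 0.035   # ic = 2%, r - rc = 3.5%, as in the figures
b = np.log(NC / S0)              # log-distance to the cost-recovery barrier

def P(sigma, t):
    """Probability of recovering the costs within t years: standard
    first-passage probability of a GBM through the barrier NC."""
    mu = R - 0.5 * sigma**2                   # drift of log(S)
    d = sigma * np.sqrt(t)
    return (norm.cdf((mu * t - b) / d)
            + (NC / S0) ** (2 * mu / sigma**2) * norm.cdf((-mu * t - b) / d))

def dP_dsigma(sigma, t, h=1e-5):
    """Central finite-difference approximation of dP/dsigma."""
    return (P(sigma + h, t) - P(sigma - h, t)) / (2 * h)

def minimum_time(sigma):
    """T_bar(sigma): the time at which dP/dsigma changes sign."""
    return brentq(lambda t: dP_dsigma(sigma, t), 0.05, 30.0)

for sigma in (0.05, 0.10, 0.20):
    T = minimum_time(sigma)
    print(f"sigma = {sigma:.0%}: T_bar = {T:.2f}y, alpha_min = {P(sigma, T):.3f}")
```

The root of the finite-difference derivative is exactly the time at which the cumulative distribution functions of two infinitesimally close volatilities cross, which is the local criterion described above.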
Proof Let $(r - r_c - \frac{1}{2}\sigma^2) \neq 0$.
By direct calculation of the limit
\[
\lim_{t\to\infty} N'\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)
\]
Thus
\[
\lim_{t\to\infty} N'\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)
= \lim_{t\to\infty}\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}\left[\frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right]^2\right)
= \lim_{t\to\infty}\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}\left[\frac{(r - r_c - \frac{1}{2}\sigma^2)\sqrt{t}}{\sigma}\right]^2\right)
= \frac{1}{\sqrt{2\pi}}\exp(-\tfrac{1}{2}[\infty]^2)
= \frac{1}{\sqrt{2\pi}}\cdot 0^+
\]
and hence
\[
\lim_{t\to\infty} N'\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right) = 0^+ \tag{4.86}
\]
and hence
\[
N'\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)
= \left(\frac{NC}{S_0}\right)^{2(r-r_c)/\sigma^2 - 1}
N'\left(\frac{-(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right) \tag{4.87}
\]
Figure 4.10 Plot of the function $\partial P(\sigma,t)/\partial t$ with respect to the time t ($r - r_c$ = 3.5%, $\sigma$ = 2.4%). [The curve rises to a peak of about 1.8 and then decays; t from 0 to 5 years.]
Figure 4.11 Plot of the function $\partial P(\sigma,t)/\partial t$ with respect to the time t ($r - r_c$ = 3.5%, $\sigma$ = 30%). [The curve peaks at about 100 almost immediately and then decays; t from 0 to 1 year.]
shown by Figures 4.10 and 4.11 for the cases $r - r_c - \frac{1}{2}\sigma^2 > 0$ and $r - r_c - \frac{1}{2}\sigma^2 < 0$, respectively.
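The time-derivative being plotted is just the density of the first-passage time, which for a drifted Brownian motion has the well-known inverse Gaussian form; a minimal sketch with the figures' parameters reproduces the peaks of both plots (names and grids are illustrative).

```python
import numpy as np

S0, NC, R = 98.0, 100.0, 0.035   # parameters of Figures 4.10 and 4.11
b = np.log(NC / S0)

def dP_dt(sigma, t):
    """Time-derivative of P(sigma, t): the (inverse Gaussian) density of
    the first-passage time of the log-value process through b."""
    mu = R - 0.5 * sigma**2
    return (b / (sigma * np.sqrt(2 * np.pi * t**3))
            * np.exp(-(b - mu * t) ** 2 / (2 * sigma**2 * t)))

t_low = np.linspace(0.01, 5.0, 500)        # Figure 4.10: sigma = 2.4%
t_high = np.linspace(0.0005, 0.05, 2000)   # Figure 4.11: sigma = 30%
print(dP_dt(0.024, t_low).max())           # close to 1.8, as in Figure 4.10
print(dP_dt(0.30, t_high).max())           # close to 100, as in Figure 4.11
```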
Theorem 4.33 gives the closed formula of the first partial derivative of $P(\sigma, t)$ with respect to the other fundamental independent variable that affects its behaviour: the volatility.
\[
= \exp\left(\frac{2(r - r_c - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right)
N'\left(\frac{-(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)\frac{2\ln[NC/S_0]}{\sigma^2\sqrt{t}}
+ \exp\left(\frac{2(r - r_c - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right)
N\left(\frac{-(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)\left(-\frac{4(r - r_c)\ln[NC/S_0]}{\sigma^3}\right)
\]
Factorising
\[
\ln\frac{NC}{S_0}\exp\left(\frac{2(r - r_c - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right)
\]
gives Equation 4.89, ie
\[
\frac{\partial P(\sigma,t)}{\partial\sigma}
= \ln\frac{NC}{S_0}\exp\left(\frac{2(r - r_c - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right)
\left[N'\left(\frac{-(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)\frac{2}{\sigma^2\sqrt{t}}
- N\left(\frac{-(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)\frac{4(r - r_c)}{\sigma^3}\right] \tag{4.89}
\]
[Figure: plots of the function $\partial P(\sigma,t)/\partial\sigma$ with respect to the time t: in the first panel the derivative ranges from 0 to 20 over t from 0 to 12 years; in the second it ranges from $-0.2$ to $0.8$ over t from 0 to 40 years.]
\[
\cdots = \frac{r - r_c - \frac{1}{2}\sigma^2}{\sigma} \tag{4.90}
\]
\[
= \lim_{t\to\infty}\frac{1}{2}\,\frac{(r - r_c - \frac{1}{2}\sigma^2)t + \ln[NC/S_0]}{\sigma\sqrt{t^3}}\,(\cdots)
+ \lim_{t\to\infty} N\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)
N'\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)^{-1}(\cdots)
\]
By exploiting the equality
\[
\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} \mathrm e^{-y^2/2}\,\mathrm dy
\;<\;
\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} (-y)\,\mathrm e^{-y^2/2}\,\mathrm dy
\qquad\text{for } x < -1 \tag{4.98}
\]
2 2
218
i i
i i
i i
\[
x = \frac{-(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}
\]
it follows that $x \to -\infty$ as $t \to 0^+$. Accordingly, for $t$ positive and close enough to 0, the condition $x < -1$ is satisfied and, therefore, Expression 4.99 becomes
\[
N\left(\frac{-(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)
< N'\left(\frac{-(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right) \tag{4.100}
\]
From the above inequality it follows that
\[
N'\left(\frac{-(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)\frac{2}{\sigma^2\sqrt{t}}
- N\left(\frac{-(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)\frac{4(r - r_c)}{\sigma^3}
> N\left(\frac{-(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)\left[\frac{2}{\sigma^2\sqrt{t}} - \frac{4(r - r_c)}{\sigma^3}\right]
\]
which, recalling Equation 4.89, leads to
\[
\frac{\partial P(\sigma,t)}{\partial\sigma}
> \ln\frac{NC}{S_0}\exp\left(\frac{2(r - r_c - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right)
N\left(\frac{-(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)\left[\frac{2}{\sigma^2\sqrt{t}} - \frac{4(r - r_c)}{\sigma^3}\right]
\]
Since
\[
\ln\frac{NC}{S_0}\exp\left(\frac{2(r - r_c - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right) > 0
\]
and
\[
\lim_{t\to0^+}\left[\frac{2}{\sigma^2\sqrt{t}} - \frac{4(r - r_c)}{\sigma^3}\right] = +\infty
\]
it is proved that
\[
\lim_{t\to0^+}\frac{\partial P(\sigma,t)}{\partial\sigma} = 0^+ \tag{4.93}
\]
\[
\lim_{t\to\infty}\frac{\partial P(\sigma,t)}{\partial\sigma} = 0^- \tag{4.102}
\]
\[
\frac{\partial P(\sigma,t)}{\partial\sigma}
= \ln\frac{NC}{S_0}\exp\left(\frac{2(r - r_c - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right)
\left[N'\left(\frac{-(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)\frac{2}{\sigma^2\sqrt{t}}
- N\left(\frac{-(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)\frac{4(r - r_c)}{\sigma^3}\right]
\]
It will first be proved that the limit of interest is 0, and then further analysis will discover its sign.
By direct calculation
\[
\lim_{t\to\infty}\frac{\partial P(\sigma,t)}{\partial\sigma}
= \ln\frac{NC}{S_0}\exp\left(\frac{2(r - r_c - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right)
\left[\lim_{t\to\infty} N'\left(\frac{-(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)\frac{2}{\sigma^2\sqrt{t}}
- \frac{4(r - r_c)}{\sigma^3}\lim_{t\to\infty} N\left(\frac{-(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)\right] \tag{4.103}
\]
\[
\lim_{t\to\infty} N'\left(\frac{-(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)\frac{2}{\sigma^2\sqrt{t}} = 0 \tag{4.104}
\]
and thus
\[
\lim_{t\to\infty}\left[N'\left(\frac{-(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)
N\left(\frac{-(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)^{-1}\frac{2}{\sigma^2\sqrt{t}}
- \frac{4(r - r_c)}{\sigma^3}\right] < 0 \tag{4.111}
\]
\[
\ln\frac{NC}{S_0}\exp\left(\frac{2(r - r_c - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right)
N\left(\frac{-(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right) > 0 \tag{4.112}
\]
and hence
\[
\lim_{t\to\infty}\frac{\partial P(\sigma,t)}{\partial\sigma} = 0^- \tag{4.102}
\]
[Figure: surface plots of $\partial P(\sigma,t)/\partial\sigma$ and of $\partial^2 P(\sigma,t)/\partial\sigma^2$ on the space $(\sigma, t)$, $\sigma$ from 10% to 90% and t up to 100 years, colour scale from $-0.8$ to $0.8$.]
This theorem states that for extremely high values of volatility the only phenomenon that is really measurable by using the tool of the …
[Figure: surface plot of $\partial^2 P(\sigma,t)/\partial\sigma^2$ on the space $(\sigma, t)$, $\sigma$ from 5% to 50% and t up to 50 years, colour scale from $-0.8$ to $0.8$.]
\[
\frac{\partial^2 P(\sigma,t)}{\partial\sigma\,\partial t} > 0 \quad\text{for } t < t_1,
\qquad
\frac{\partial^2 P(\sigma,t)}{\partial\sigma\,\partial t} < 0 \quad\text{for } t > t_1,
\qquad\text{if } (r - r_c - \tfrac{1}{2}\sigma^2) < 0 \tag{4.125}
\]
Proof Recall Expression 4.121, ie
\[
\frac{\partial^2 P(\sigma,t)}{\partial\sigma\,\partial t}
= \frac{\ln[NC/S_0]}{\sqrt{t^3}}\,
N'\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)
\left[\frac{(\ln[NC/S_0] - (r - r_c)t)^2}{\sigma^3 t} - \frac{1}{\sigma} - \frac{\sigma t}{4}\right]
\]
Let $\sigma > 0$ be fixed. Since $NC/S_0 > 1$, $t > 0$ and $N'(\cdot) > 0$, Equation 4.122 is equivalent to
\[
\frac{(\ln[NC/S_0] - (r - r_c)t)^2}{\sigma^3 t} - \frac{1}{\sigma} - \frac{\sigma t}{4} = 0 \tag{4.126}
\]
From Equation 4.126, some elementary algebra leads to
\[
\left[(r - r_c)^2 - \frac{\sigma^4}{4}\right]t^2 - \left[2(r - r_c)\ln\frac{NC}{S_0} + \sigma^2\right]t + \ln^2\frac{NC}{S_0} = 0 \tag{4.127}
\]
The next two figures show the plot of the function $\partial^2 P(\sigma,t)/\partial\sigma\,\partial t$ with respect to t for different volatility levels. Figure 4.17 relates to the regime where $r - r_c - \frac{1}{2}\sigma^2 > 0$. It is straightforward to identify the two solutions $t_1$ and $t_2$ and to verify that the sign of $\partial^2 P(\sigma,t)/\partial\sigma\,\partial t$ satisfies Expression 4.124. Figure 4.18 relates to the regime where $r - r_c - \frac{1}{2}\sigma^2 < 0$. It is easy to identify the unique solution $t_1$ and to verify that the sign of the function $\partial^2 P(\sigma,t)/\partial\sigma\,\partial t$ is consistent with Expression 4.125.
Figure 4.17 Plot of the function $\partial^2 P(\sigma,t)/\partial\sigma\,\partial t$ with respect to the time t ($r - r_c$ = 3.5%, $\sigma$ = 2.4%). [Values from $-40$ to 160 over t from 0 to 6 years.]

Figure 4.18 Plot of the function $\partial^2 P(\sigma,t)/\partial\sigma\,\partial t$ with respect to the time t ($r - r_c$ = 3.5%, $\sigma$ = 30%). [Values from $-50$ to 400 over t from 0 to 1 year.]
\[
t_2 = \left[4(r - r_c)\ln\frac{NC}{S_0} + 2\sigma^2
+ 2\sigma\sqrt{\sigma^2 + 4(r - r_c)\ln\frac{NC}{S_0} + \sigma^2\ln^2\frac{NC}{S_0}}\,\right]
\left[4(r - r_c)^2 - \sigma^4\right]^{-1}
\]
The limit of $t_1$ as $\sigma$ tends to 0 is given by
\[
\lim_{\sigma\to0} t_1 = \lim_{\sigma\to0}\left[4(r - r_c)\ln\frac{NC}{S_0} + 2\sigma^2
- 2\sigma\sqrt{\sigma^2 + 4(r - r_c)\ln\frac{NC}{S_0} + \sigma^2\ln^2\frac{NC}{S_0}}\,\right]
\left[4(r - r_c)^2 - \sigma^4\right]^{-1}
= \frac{4(r - r_c)\ln[NC/S_0]}{4(r - r_c)^2} \tag{4.131}
\]
and, hence, simplifying
\[
\lim_{\sigma\to0} t_1 = \frac{\ln[NC/S_0]}{r - r_c} \tag{4.132}
\]
The limit of $t_2$ as $\sigma$ tends to 0 is given by
\[
\lim_{\sigma\to0} t_2 = \lim_{\sigma\to0}\left[4(r - r_c)\ln\frac{NC}{S_0} + 2\sigma^2
+ 2\sigma\sqrt{\sigma^2 + 4(r - r_c)\ln\frac{NC}{S_0} + \sigma^2\ln^2\frac{NC}{S_0}}\,\right]
\left[4(r - r_c)^2 - \sigma^4\right]^{-1}
= \frac{4(r - r_c)\ln[NC/S_0]}{4(r - r_c)^2} \tag{4.133}
\]
and, hence, simplifying
\[
\lim_{\sigma\to0} t_2 = \frac{\ln[NC/S_0]}{r - r_c} \tag{4.134}
\]
By combining Expressions 4.132 and 4.134, it follows that
\[
\lim_{\sigma\to0} t_1 = \lim_{\sigma\to0} t_2 = \frac{\ln[NC/S_0]}{r - r_c} \tag{4.130}
\]
Theorem 4.42. Let $(r - r_c) > 0$. For any fixed volatility level $\sigma > 0$, there exists a unique time, which is denoted by $\bar T_\sigma$, such that
\[
\left.\frac{\partial P(\sigma,t)}{\partial\sigma}\right|_{\sigma,\bar T_\sigma} = 0 \tag{4.135}
\]
Definition 4.44. For any fixed volatility level $\sigma > 0$, let $\bar T_\sigma$ be the corresponding minimum time. Then, the minimum confidence level, denoted by $\bar\alpha_{\min}$, is defined as
\[
\bar\alpha_{\min} = P(\sigma, \bar T_\sigma) \tag{4.142}
\]
For volatilities $\sigma_i < \sigma_j$ and any confidence level $\alpha$ strictly greater than $\bar\alpha_{\min}$, the corresponding time horizons satisfy
\[
T^{\alpha}_{\sigma_i} < T^{\alpha}_{\sigma_j} \tag{4.144}
\]
[Figures 4.19 and 4.20: cumulative probability curves with the minimum confidence level $\bar\alpha_{\min}$ marked; in the low-volatility case the probability runs from 0.65 to 0.90 over t up to 4 years, and in the high-volatility case from 0.850 to 0.950 over t up to 50 years.]
The above corollary says that, for all the confidence levels that are strictly greater than the minimum one, if the volatility increases then the corresponding time horizon also increases. This phenomenon is represented in Figures 4.19 and 4.20 in both low- and high-volatility environments.
Figure 4.21 Plot of the function $\partial P(\sigma,t)/\partial\sigma$ and the function of the minimum times on the space $(\sigma, t)$ ($ic$ = 2%, $r - r_c$ = 3.5%). [Surface over $\sigma$ from 10% to 90% and t up to 50 years, colour scale from $-0.8$ to $0.8$.]
Proof Recalling that the dynamics of the product's value over time are modelled through the stochastic differential equation 4.1, it follows that
\[
\mathbb E_{\mathbb Q}[S_{T_{\min}}] = S_0\exp((r - r_c)T_{\min}) \tag{4.151}
\]
which, combined with the condition
\[
\mathbb E_{\mathbb Q}[S_{T_{\min}}] = NC \tag{4.150}
\]
yields $T_{\min} = \ln[NC/S_0]/(r - r_c)$.
Corollary 4.51 shows that the time $T_{\min}$ (obtained as the limit of the function of the minimum times as volatility vanishes) coincides with the first hitting time of the average path of $\{S_t\}_{t\geq0}$ as described in Section 4.1.5, where it was denoted by tmean. From a financial point of view, it can be observed that only for times greater than $T_{\min}$ is it guaranteed that the expected return of the product will be strictly positive for every possible value of the volatility. Intuitively, this suggests that, when volatility begins to increase gradually, the corresponding minimum times (which lie on the curve $\bar T_\sigma$) should never be shorter than $T_{\min}$, in order to preserve consistency with the principle 'more volatility, more time'.

Unfortunately, the results obtained so far do not support the conclusion that the curve of minimum times will be monotonically increasing, or at least that it will not decrease below the value of $T_{\min}$. On the contrary, the forthcoming analytical results will clarify that the curve $\bar T_\sigma$ is always characterised by the presence of a maximum and then decreases asymptotically to 0. This clearly implies that for any point on the curve after the maximum time the above principle will be violated, hence requiring the adoption of suitable corrections aimed at guaranteeing a correct ordering of the different products from a global perspective, as will be discussed in Section 4.1.8.
\[
\frac{\mathrm d\bar T_\sigma}{\mathrm d\sigma}
= -\left.\frac{\partial^2 P(\sigma,t)/\partial\sigma^2}{\partial^2 P(\sigma,t)/\partial\sigma\,\partial t}\right|_{\bar T_\sigma} \tag{4.153}
\]
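Equation 4.153 also suggests a crude numerical continuation of the curve of minimum times: starting from $T_{\min}$ and stepping in $\sigma$ with finite-difference estimates of the two second derivatives. The following is a rough sketch under the same closed-form $P(\sigma, t)$ used in the earlier sketch, not a production algorithm; in practice a finer grid or a corrector step would be used.

```python
import numpy as np
from scipy.stats import norm

S0, NC, R = 98.0, 100.0, 0.035
b = np.log(NC / S0)

def P(s, t):
    """Closed-form cost-recovery probability (first-passage CDF)."""
    mu = R - 0.5 * s * s
    d = s * np.sqrt(t)
    return norm.cdf((mu*t - b)/d) + (NC/S0)**(2*mu/s**2) * norm.cdf((-mu*t - b)/d)

def dT_dsigma(s, t, h=1e-4):
    """Right-hand side of Equation 4.153 via central differences."""
    d2_ss = (P(s + h, t) - 2 * P(s, t) + P(s - h, t)) / h**2
    d2_st = (P(s + h, t + h) - P(s + h, t - h)
             - P(s - h, t + h) + P(s - h, t - h)) / (4 * h * h)
    return -d2_ss / d2_st

sigma, T = 0.02, b / R        # initial condition: T_bar(0) = T_min
for _ in range(120):          # crude Euler continuation in sigma
    T = max(T + 0.005 * dT_dsigma(sigma, T), 1e-3)
    sigma += 0.005
# the traced curve rises from T_min, peaks, then decays towards zero
```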
\[
\lim_{\sigma\to\infty}\bar T_\sigma = 0 \tag{4.158}
\]
\[
\bar T_\sigma \;(\cdots)\; t \tag{4.160}
\]
The rationale behind the peculiar shape of the function of the minimum times can be synthesised as follows. Extremely high volatilities (which are likely to be associated with the case $r - r_c - \frac{1}{2}\sigma^2 < 0$) imply that a significant number of trajectories of the process $\{S_t\}_{t\geq0}$ will never hit the cost-recovery barrier. For a given volatility level the probability of this event (ie, the percentage of the never-hitting trajectories) can be calculated explicitly by simply subtracting from 1 the asymptote of the function $P(\sigma, t)$ as time tends to infinity.3 Conversely, the remaining trajectories will recover the costs in a very short time, implicitly exhausting the positive influence of the volatility effect very quickly (in the limit, in an infinitesimal time). After this shrinking time, what remains is only the negative effect of a decreasing asymptote as the volatility increases. This means that when the volatility becomes increasingly large, the region of times where the first partial derivative of $P(\sigma, t)$ with respect to this variable is strictly negative gradually widens. In the limit, for $\sigma\to\infty$, this region covers the entire space of times, with the consequence that the only time satisfying the condition $\partial P(\sigma, t)/\partial\sigma = 0$ tends to 0, as indicated by Equation 4.158.

This phenomenon implies that, when the curve of the minimum times $\bar T_\sigma$ is approaching zero, even if the partial derivative with respect to $\sigma$ is zero, the noise due to the high volatility is so large that this condition is unable to distinguish anything but the negative effect of a decreasing asymptote. Consequently, these times cannot be considered significant enough to offer useful information. An additional criterion is needed that can be used to discern the acceptable times on the curve $\bar T_\sigma$.
The following technical lemmas are intermediate steps required
to fully understand the behaviour of the function of the minimum
times. The reader may easily skip them without concern and go
straight to Theorem 4.61.
\[
\frac{\mathrm d\bar T_\sigma}{\mathrm d\sigma} = F(\sigma, \bar T_\sigma)\,G(\sigma, \bar T_\sigma) \tag{4.162}
\]
with initial condition
\[
\bar T_0 = T_{\min} \tag{4.163}
\]
where
\[
F(\sigma, t) = (r - r_c + \tfrac{1}{2}\sigma^2)^2 t^2 + \sigma^2 t + \ln^2\frac{NC}{S_0} \tag{4.164}
\]
and
\[
G(\sigma, t) = \ln\frac{NC}{S_0}\exp\left(\frac{2(r - r_c - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right)
\left(\frac{\partial P(\sigma,t)}{\partial t}\right)^{-1}
N'\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)\frac{4(r - r_c)}{\sigma^6 t} \tag{4.165}
\]
\[
G(\sigma, \bar T_\sigma) > 0 \tag{4.166}
\]
\[
\frac{\mathrm d\bar T_\sigma}{\mathrm d\sigma}
= -\left.\frac{\partial^2 P(\sigma,t)/\partial\sigma^2}{\partial^2 P(\sigma,t)/\partial\sigma\,\partial t}\right|_{\bar T_\sigma}
\]
where
\[
c_1 = \frac{2\ln[NC/S_0]\,\bigl(4(r - r_c)\bar T_\sigma + \ln[NC/S_0]\bigr)}{\sigma^5\sqrt{(\bar T_\sigma)^3}}\;(\cdots) \tag{4.168}
\]
\[
c_2 = \frac{4(r - r_c)}{\sigma^3}\;(\cdots)\;\frac{4(r - r_c)\ln[NC/S_0]}{\sigma^4}\,
N\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)\bar T_\sigma - \ln[NC/S_0]}{\sigma\sqrt{\bar T_\sigma}}\right)^{-1}
\]
\[
\cdots = N\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)\bar T_\sigma - \ln[NC/S_0]}{\sigma\sqrt{\bar T_\sigma}}\right)
- \frac{2(r - r_c)\sqrt{\bar T_\sigma}}{\sigma} \tag{4.170}
\]
\[
\frac{\mathrm d\bar T_\sigma}{\mathrm d\sigma} = F(\sigma, \bar T_\sigma)\,G(\sigma, \bar T_\sigma) \tag{4.162}
\]
The initial condition in Equation 4.163 is determined from Theorem 4.47.
Since the terms
\[
\ln\frac{NC}{S_0},\qquad
\exp\left(\frac{2(r - r_c - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right),\qquad
N'\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)\bar T_\sigma - \ln[NC/S_0]}{\sigma\sqrt{\bar T_\sigma}}\right)
\quad\text{and}\quad
\frac{4(r - r_c)}{\sigma^6 t}
\]
are all strictly positive, it follows that
\[
G(\sigma, \bar T_\sigma) > 0 \tag{4.166}
\]
where
\[
c_3 = (r - r_c)^2 + \frac{\sigma^2}{2} + \left(r - r_c + \frac{\sigma^2}{2}\right)\ln\frac{NC}{S_0},\qquad
c_4 = \left[(r - r_c)^2 + \frac{\sigma^2}{2}\right]^2 + \left(r - r_c + \frac{\sigma^2}{2}\right)^2\ln^2\frac{NC}{S_0} \tag{4.177}
\]
Moreover, it holds that
\[
c_3^2 - c_4 > 0 \tag{4.178}
\]
if $NC/S_0 \leq \mathrm e$, then
\[
\Theta(\sigma) \geq 0 \tag{4.179}
\]
Proof The result follows from Equation 4.176 and Inequality 4.178.
Proof Combining Formulas 4.162 and 4.166 with Lemma 4.58 yields
\[
\frac{\mathrm d\bar T_\sigma}{\mathrm d\sigma} > 0 \;\text{ if } \bar T_\sigma < \Theta(\sigma),\qquad
\frac{\mathrm d\bar T_\sigma}{\mathrm d\sigma} = 0 \;\text{ if } \bar T_\sigma = \Theta(\sigma),\qquad
\frac{\mathrm d\bar T_\sigma}{\mathrm d\sigma} < 0 \;\text{ if } \bar T_\sigma > \Theta(\sigma) \tag{4.182}
\]
Moreover, it holds that
\[
\Theta(0) = T_{\min} \tag{4.183}
\]
By carefully comparing the functions $\Theta(\sigma)$ and $\bar T_\sigma$ the result follows.
[Figure: surface plot of $\partial^2 P(\sigma,t)/\partial\sigma^2$ on the space $(\sigma, t)$, $\sigma$ from 10% to 90% and t up to 50 years, colour scale from $-0.8$ to $0.8$.]
From the above theorem it is clear that the strictly increasing monotonicity of the function of the minimum times with respect to the volatility holds only on the finite time interval $]T_{\min}, T_{\max}[$, whose width depends heavily on the regime of costs and on the interest rates. Hence, in order to implement a practical method of determining the minimum recommended investment time horizon, it is necessary to relax the strict monotonicity assumption, allowing a weaker relationship between volatility and time, namely a weak monotonicity of the minimum time horizon with respect to $\sigma$, as provided by the following definition.
Definition 4.64. Let $(r - r_c) > 0$. For any fixed volatility level $\sigma > 0$, the minimum recommended investment time horizon is the time $T^{*}_\sigma \in\; ]0, \infty[$ defined as
\[
T^{*}_\sigma =
\begin{cases}
\bar T_\sigma & \text{if } 0 < \sigma < \sigma_{\max}\\
\bar T_{\sigma_{\max}} & \text{otherwise}
\end{cases} \tag{4.190}
\]
where $\bar T_\sigma$ satisfies
\[
\left.\frac{\partial P(\sigma,t)}{\partial\sigma}\right|_{\sigma,\bar T_\sigma} = 0 \tag{4.135}
\]
According to this definition, if $\sigma_i$ and $\sigma_j$ are two different volatility values such that $\sigma_i < \sigma_j$, then their minimum recommended investment time horizons satisfy the following property of weak monotonicity with respect to the volatility
\[
T^{*}_{\sigma_i} \leq T^{*}_{\sigma_j} \tag{4.191}
\]
This correction presents several practical advantages: it is unambiguously defined, it allows all the minimum times associated with extreme volatilities that experience a loss of significance to be discarded, and it also guarantees the correct ordering of products characterised by different volatilities from a global perspective.
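On a discretised curve of minimum times, the correction of Definition 4.64 amounts to clipping the curve at its peak. In the sketch below the curve itself is a hypothetical stand-in with the documented shape (rising from $T_{\min}$, peaking, then decaying); only the clipping logic is the point.

```python
import numpy as np

# Illustrative stand-in for the curve of minimum times: rises, peaks, decays
sigmas = np.linspace(0.01, 0.90, 90)
T_bar = 0.58 + 8.0 * sigmas * np.exp(-4.0 * sigmas)   # hypothetical shape

k = int(np.argmax(T_bar))            # index of sigma_max, where the curve peaks
T_rec = T_bar.copy()
T_rec[k:] = T_bar[k]                 # Equation 4.190: constant beyond the peak
assert np.all(np.diff(T_rec) >= 0)   # weak monotonicity, Expression 4.191
```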
\[
P(\sigma_{k+1}, \bar T^{\Delta\sigma_k}_{\sigma_k}) = P(\sigma_k, \bar T^{\Delta\sigma_k}_{\sigma_k}) \tag{4.192}
\]
\[
\lim_{\Delta\sigma_k\to0} \bar T^{\Delta\sigma_k}_{\sigma_k} = \bar T_{\sigma_k} \tag{4.194}
\]
\[
h(\sigma_k, \bar T_{\sigma_k}) = 0 \tag{4.196}
\]
Setting
\[
\Delta\sigma_k = \sigma_{k+1} - \sigma_k
\]
and
\[
\bar T^{\Delta\sigma_k}_{\sigma_k} = \psi(\sigma_{k+1}),\qquad \psi(\sigma_k) = \bar T_{\sigma_k}
\]
it follows that
\[
\lim_{\Delta\sigma_k\to0} \bar T^{\Delta\sigma_k}_{\sigma_k} = \bar T_{\sigma_k} \tag{4.194}
\]
and
\[
P(\sigma_{k+1}, \bar T^{\Delta\sigma_k}_{\sigma_k}) = P(\sigma_k, \bar T^{\Delta\sigma_k}_{\sigma_k}) \tag{4.192}
\]
and
\[
\lim_{\Delta\sigma_{k+1}\to0} \bar T^{\Delta\sigma_{k+1}}_{\sigma_{k+1}} = \bar T_{\sigma_{k+1}} \tag{4.205}
\]
it follows that
\[
\bar T^{\Delta\sigma_{k+1}}_{\sigma_{k+1}} > \bar T^{\Delta\sigma_k}_{\sigma_k} \tag{4.203}
\]
where $\bar T^{\Delta\sigma_k}_{\sigma_k}$ satisfies
\[
P(\sigma_{k+1}, \bar T^{\Delta\sigma_k}_{\sigma_k}) = P(\sigma_k, \bar T^{\Delta\sigma_k}_{\sigma_k}) \tag{4.192}
\]
These results may be effectively used to obtain a satisfactory approximation of the curve of minimum times by exploiting only the cumulative probability distribution functions. In fact, if the distance $\Delta\sigma_k$ is small enough, the intersection between two probability distribution functions can be considered a good estimate of the corresponding minimum time, and repeating this simple procedure for different $\sigma_k$ suffices to obtain a reasonable approximation of the entire curve of minimum times. In this situation, it must be emphasised that avoiding the calculation of partial derivatives of different orders is a practical advantage when a closed formula is no longer available and one must rely on Monte Carlo simulation and numerical approximations of partial derivatives.
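A sketch of this derivative-free approximation follows, again using the closed-form $P(\sigma, t)$ for brevity; with Monte Carlo estimates of the distribution functions, the same root-finding would be applied to their difference. Parameter ranges and names are illustrative.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

S0, NC, R = 98.0, 100.0, 0.035
b = np.log(NC / S0)

def P(s, t):
    """Closed-form CDF of the first hitting time (cost recovery)."""
    mu = R - 0.5 * s * s
    d = s * np.sqrt(t)
    return norm.cdf((mu*t - b)/d) + (NC/S0)**(2*mu/s**2) * norm.cdf((-mu*t - b)/d)

def T_intersection(sigma, d_sigma=0.005):
    """Crossing time of the CDFs of two close volatilities: a
    derivative-free estimate of the minimum time T_bar(sigma)."""
    return brentq(lambda t: P(sigma + d_sigma, t) - P(sigma, t), 0.05, 20.0)

for s in np.arange(0.02, 0.21, 0.02):
    print(f"sigma = {s:.2f}: T ~ {T_intersection(s):.2f} years")
```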
\[
\mathbb E_{\mathbb Q}[S_{T_{\min}}] = NC \tag{4.150}
\]
it follows that
\[
\mathbb E_{\mathbb Q}[S_{T_{\min}}]
= \mathbb E_{\mathbb Q}\left[S_0\exp\left(\int_0^{T_{\min}}(r_s - r_c - \tfrac{1}{2}\sigma^2)\,\mathrm ds + \sigma W_{T_{\min}}\right)\right]
= S_0\exp\left(\int_0^{T_{\min}}(r_s - r_c - \tfrac{1}{2}\sigma^2)\,\mathrm ds\right)\mathbb E_{\mathbb Q}[\mathrm e^{\sigma W_{T_{\min}}}]
\]
It is worth noting that if there are no initial costs, so that $S_0 = NC$, then Equation 4.209 simplifies to
\[
\int_0^{T_{\min}}(r_s - r_c)\,\mathrm ds = 0 \tag{4.213}
\]
[Figure: evolution of the short rate $r_t$ (%) against the constant $r_c$ over t from 0 to 15 years; the region where $(r_t - r_c) > 0$ is highlighted.]
where $\{W_{1,t}\}_{t\geq0}$ and $\{W_{2,t}\}_{t\geq0}$ are two standard Brownian motions linked by some correlation coefficient $\rho$ (ie, $\mathrm dW_{1,t}\,\mathrm dW_{2,t} = \rho\,\mathrm dt$). Equation 4.215 governs the dynamics of the short rate; different choices for the functionals $a(t, r_t)$, $b(t, r_t)$ can be made to obtain the most common models that belong to the Heath–Jarrow–Morton (Heath et al 1992) family (eg, Vasicek, Ho–Lee, Hull–White, Cox–Ingersoll–Ross).
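As an illustration of such a two-factor specification (the choice of a Vasicek short rate and all parameter values here are assumptions, not the book's calibration), the distribution of the first hitting times can be estimated by Monte Carlo with correlated innovations:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, T, steps_per_year = 20_000, 10.0, 252
n = int(T * steps_per_year)
dt = 1.0 / steps_per_year
S0, NC, rc, sigma = 98.0, 100.0, 0.0, 0.05                  # illustrative product
kappa, theta, eta, r0, rho = 0.3, 0.035, 0.01, 0.03, -0.2   # assumed Vasicek

S = np.full(n_paths, S0)
r = np.full(n_paths, r0)
tau = np.full(n_paths, np.inf)        # first hitting times (inf = never hit)
for k in range(n):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
    r += kappa * (theta - r) * dt + eta * np.sqrt(dt) * z2   # short rate
    S *= np.exp((r - rc - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z1)
    new_hit = np.isinf(tau) & (S >= NC)
    tau[new_hit] = (k + 1) * dt

print("Q(tau <= 5y) =", (tau <= 5.0).mean())   # one point on the estimated CDF
```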
minimum time horizon works on the space of the times and not on the space of the returns, the former being characterised by a different and somewhat poorer information set on the stochastic behaviour of the product's value. In fact, as will be discussed in Section 4.1.11, on the space of the times each trajectory of the product is considered only if and until it reaches the barrier corresponding to the cost recovery (a break that does not exist when working on the space of the returns). In this context, it should also be noted that when interest rates are very volatile, the trajectories with positive drift disappear very rapidly from the analysis because they tend to reach the barrier very quickly; therefore, after a certain time the only surviving trajectories will be those featuring a negative drift, meaning that these trajectories would be automatically and unfairly overweighted and they would lead to an average drift significantly lower than that consistent with the principle of risk neutrality.
The analysis described above may require a further specialisation in relation to the financial structure of the product to be modelled and, sometimes, also in relation to the specific asset-management style adopted. This happens, for instance, in the case of risk-target products that are committed to a given volatility range but where the asset manager decides to take a specific skew with respect to that range, or in the case of benchmark products. In similar situations, the time evolution of the product's value can be described by stochastic volatility models calibrated on the product's volatility term structure, for instance on the implied volatilities of options written on it and expiring at increasing maturities.4
From this perspective, a possible rearrangement of Equation 4.207 into a system of stochastic differential equations may be the following
\[
\mathrm dS_t = (r - r_c)S_t\,\mathrm dt + \sigma_t S_t\,\mathrm dW_{1,t} \tag{4.216}
\]
\[
\mathrm d\sigma_t = a(t, \sigma_t)\,\mathrm dt + b(t, \sigma_t)\,\mathrm dW_{2,t} \tag{4.217}
\]
where $\{W_{1,t}\}_{t\geq0}$ and $\{W_{2,t}\}_{t\geq0}$ are two standard Brownian motions linked by some correlation coefficient $\rho$ (ie, $\mathrm dW_{1,t}\,\mathrm dW_{2,t} = \rho\,\mathrm dt$).
In this framework, since the more general assumptions affect only the diffusive term of Equation 4.216, all the technical conditions related to the existence and uniqueness of the minimum time horizon continue to hold. Obviously, a closed formula for the cumulative probability distribution of the first hitting times may not exist, but a …
Figure 4.24 Plot of the function $Q(\tau_{S,NC} \leq t)$ with respect to the time t with different discretisation time steps ($ic$ = 2%, $r - r_c$ = 3.5%, $\sigma$ = 5.15%). [Three curves: daily, weekly and monthly monitoring; Q from 0.50 to 1.00 over t from 0 to 20 years.]
[Figure: the same comparison (daily, weekly and monthly monitoring) plotted against the volatility $\sigma$ (%); Q from 0.75 to 1.00 for $\sigma$ up to 16%.]
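The discretisation effect of Figure 4.24 is easy to reproduce: checking the barrier only at the monitoring dates misses crossings that occur between dates, so coarser steps bias the estimated probabilities downwards. A sketch with the figure's parameters (function names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
S0, NC, R, sigma, T = 98.0, 100.0, 0.035, 0.0515, 20.0
logB = np.log(NC / S0)

def hit_prob(steps_per_year, n_paths=20_000):
    """Fraction of paths that touch the barrier when it is checked only
    at the monitoring dates; coarser monitoring biases the estimate down."""
    n = int(steps_per_year * T)
    dt = T / n
    x = np.zeros(n_paths)                 # log(S_t / S_0)
    hit = np.zeros(n_paths, dtype=bool)
    for _ in range(n):
        x += (R - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        hit |= x >= logB
    return hit.mean()

for label, m in (("monthly", 12), ("weekly", 52), ("daily", 252)):
    print(label, round(hit_prob(m), 3))
```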
[Figure: zoom on the cumulative probability curves used in the intersection procedure, with the conditions $Q(\sigma, \cdot) = Q(\sigma + \Delta\sigma, \cdot)$ and $Q(\sigma - \Delta\sigma, \cdot) = Q(\sigma, \cdot)$ annotated; the three intersection times $\bar T^{*}_{-}$, $\bar T^{*}$ and $\bar T^{*}_{+}$ lie between 5.2 and 6.6 years, with Q between 0.60 and 0.85.]
is equal to 0.6%. The grey circle shows the exact minimum time calculated when the volatility is assumed to vary continuously, ie, for $\Delta\sigma \to 0$.

In order to calculate the intersection points accurately, smooth probability distribution curves are needed. In this approach, the number of Monte Carlo simulations required to successfully implement the procedure is quite high, of the order of a million; this may seem a significant computational burden, but bearing in mind that the probability density of the first hitting times is often highly asymmetric (because the majority of the trajectories hit the barrier quite early), the overall computational effort is not so cumbersome.
It is important to stress that, since the introduction of the two additional curves associated with the volatility levels $\sigma - \Delta\sigma$ and $\sigma + \Delta\sigma$ is related only to a purely technical procedure, there is no need to simulate a new stand-alone product from scratch: it suffices to use the same random vector of innovations computed for the simulated trajectories of the original product. In this way, the numerical noise is reduced to minimum levels and the calculation of the intersection points becomes feasible under all conditions.
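A sketch of this reuse of innovations (in Monte Carlo parlance, common random numbers): fixing the seed makes the three curves for $\sigma - \Delta\sigma$, $\sigma$ and $\sigma + \Delta\sigma$ share the same shocks, so the sampling noise largely cancels out of their differences. All parameters are illustrative.

```python
import numpy as np

S0, NC, R, T = 98.0, 100.0, 0.035, 20.0
n_paths, steps_per_year = 20_000, 52
n = int(T * steps_per_year)
dt = 1.0 / steps_per_year
logB = np.log(NC / S0)

def hitting_times(sigma, seed=42):
    """First hitting times driven by the SAME innovations for every
    volatility level (common random numbers via a fixed seed)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths)
    tau = np.full(n_paths, np.inf)
    for k in range(n):
        x += (R - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        new = np.isinf(tau) & (x >= logB)
        tau[new] = (k + 1) * dt
    return tau

sig, d = 0.03, 0.01
taus = {s: hitting_times(s) for s in (sig - d, sig, sig + d)}
# the three empirical CDFs now differ mainly by the true effect of sigma,
# so their intersections can be located with far less Monte Carlo noise
```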
A final technical issue is related to the choice of the proper width for the interval $\Delta\sigma$. Of course, there is not a unique $\Delta\sigma$ that can be considered optimal for any possible product; in fact, on the one hand …
Figure 4.27 Plot of the function $Q(\tau_{S,NC} \leq t)$ with respect to the time t for an illiquid return-target product with an implicit time horizon of five years ($ic$ = 2%). [Q from 0 to 1 over t from 0 to 7 years.]
Figure 4.28 Plot of the function $Q(\tau_{S,NC} \leq t)$ with respect to the time t for a liquid return-target product with an implicit time horizon of five years ($ic$ = 2%). [Q from 0 to 1 over t from 0 to 5 years; the contractual maturity T = 5 is marked.]
Figure 4.29 Plot of the function $Q(\tau_{S,NC} \leq t)$ with respect to the time t for a liquid return-target product with an implicit time horizon of 10 years ($ic$ = 2%). [Q from 0 to 1 over t from 0 to 10 years; the contractual maturity T = 10 is marked.]
Given these premises, the idea 'more volatility, more time' that drives the procedure for the determination of the minimum recommended time horizon in the case of risk-target and benchmark products becomes less plausible,7 while the problem of properly exploiting the information contained in the probability distribution of the first hitting times according to principles 1 and 3 of Section 4.1 remains open. Again, the choice of a unique confidence level for all the products belonging to the category of return-target products that unambiguously identifies the time horizon on the cumulative probability distribution function is not so appealing, since it introduces a high degree of arbitrariness that is hard to justify.
A practical and sound solution, very straightforward to implement, is to abandon the idea of connecting the minimum recommended time horizon with a specific level of probability of cost recovery, and instead to calculate the expected value of the random variable $\tau_{S,NC}$, after having properly modelled the stochastic process $\{S_t\}_{t\geq0}$ of the product's value. In formal terms, this implies the following definition of the minimum recommended time horizon for return-target products.

Definition 4.71. The minimum recommended investment time horizon for a return-target product is the time $T^{*} \in\; ]0, T]$, where $T$ denotes the implicit time horizon of the product, such that
\[
T^{*} = \mathbb E_{\mathbb Q}[\tau_{S,NC}] \tag{4.218}
\]
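A minimal Monte Carlo sketch of this definition, with one simplifying assumption flagged in the comments: paths that do not recover the costs within the implicit horizon are counted at the horizon itself, ie, the estimator truncates the expectation at T (cf footnotes 6 and 8). Parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
S0, NC, R, sigma = 98.0, 100.0, 0.035, 0.05
T_imp = 5.0                          # implicit time horizon of the product
n_paths, steps_per_year = 50_000, 252
n = int(T_imp * steps_per_year)
dt = 1.0 / steps_per_year
logB = np.log(NC / S0)

x = np.zeros(n_paths)
tau = np.full(n_paths, T_imp)        # ASSUMPTION: never-hitting paths count at T_imp
for k in range(n):
    x += (R - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    hit_now = (tau == T_imp) & (x >= logB)
    tau[hit_now] = (k + 1) * dt

print("recommended horizon (years):", round(tau.mean(), 2))
```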
1 For return-target products traded on efficient markets or endowed with liquidability enhancements that allow an early exit from the investment under favoured economic conditions, some other elementary properties have to be taken into account. See Sections 4.2 and 4.2.2.
2 It is worth recalling that, for $r - r_c - \frac{1}{2}\sigma^2 > 0$, by Proposition 4.25 it holds that $\lim_{t\to\infty} P(\sigma, t) = 1$.
3 It is worth recalling that, by Proposition 4.25, the asymptote has an explicit formula, ie, $(NC/S_0)^{2(r-r_c)/\sigma^2 - 1}$.
4 Moreover, when considering actively managed benchmark products, the correct representation of their random dynamics may require the addition of some noise to the benchmark's stochastic volatility model in order to make the recommended minimum time horizon sensitive to the effect of possible departures from the benchmark due to specific decisions by the asset manager.
5 Analogous considerations can easily be extended to non-equity products assisted by financial
guarantees.
6 This constraint does not apply only if the return-target product does not have an implicit (and
non-stochastic) time horizon.
7 Moreover, the technical difficulty of properly identifying and handling an unambiguous
source of volatility for the heterogeneity of financial structures that belong to the category of
return-target products seems very hard to overcome.
8 This condition is not satisfied in the framework described in Section 4.1. In fact, if $(r - r_c - \frac{1}{2}\sigma^2) < 0$, then $\mathbb E_{\mathbb Q}[\tau_{S,NC}] = \infty$.
[Figure: cumulative probability curves for $\sigma - \Delta\sigma$ = 2%, $\sigma$ = 3% and $\sigma + \Delta\sigma$ = 4%; probability from 0.60 to 1.00 over t from 0 to 6 years.]
[Figure, two panels over October 2009–December 2010: (a) levels between 80 and 115; (b) percentage values between −5% and +10%.]
[Figure: time series with values between 22 and 25 over October 2010–January 2011.]
[Figure: cumulative probability curves for $\sigma - \Delta\sigma$ = 21%, $\sigma$ = 23% and $\sigma + \Delta\sigma$ = 25%; probability from 0.950 to 0.975 over t from 5 to 20 years.]
issued by a bank whose average annual credit spread over the period spanned by the life of the product is around 125 basis points (bp). The issuer provides a liquidability-enhancement service, which consists of locking the credit spread used to determine the fair value of the bond on the secondary venue at a value fixed close to the issue date, and of committing to buy back the bond at any early date chosen by the investor.

Figure 5.7 shows the product information sheet for this bond, which gives a brief overview of the product, followed by the representation of its risk–return profile according to the three pillars of the risk-based approach.
The values reported in the financial investment table (including the fair value and its decomposition into the risky and the risk-free components) are calculated according to the methodology described in Sections 2.2 and 2.3.1. The net fair value represents the expected discounted value of the pure payoff structure of the product, while the gross fair value (which includes the fair value of the liquidability-enhancement service) also depends on the specific microstructural conditions of the trading venue available when exiting before maturity. Investors interested in the possibility of an early redemption should consider the liquidability enhancement as part of the fair value of the product, while buy-and-hold investors should consider the value of this service as a pure cost item.
Figure 5.8 illustrates the graphical comparison between the risk-
neutral densities of the final values of the product and of the risk-free
asset, respectively.
The table of probabilistic scenarios as it appears in the product
information sheet of Figure 5.7 is obtained by applying to the above
densities the superimposition technique described in Section 2.3.3,
Chapter 2.
The bimodal shape exhibited by the probability density of the bond reflects the default risk of the issuer under the standard hypothesis of a recovery value of 40% for a senior note. The probability of realising negative returns is around 9.5%, and it clearly discloses the credit risk of the issuer as resulting from an annual average credit spread of around 125bp over a time horizon of five years. Most of the remaining probability mass of this bond is placed in the scenario 'in line with the final value of the risk-free asset', which, in fact, has a probability of more than 87%. In order to better explain the
behaviour of the product with respect to the risk-free asset, the con-
ditional mean of the non-equity product corresponding to this sce-
nario is flanked by the mean value of an investment in the risk-free
asset with the same maturity. This choice follows from the analysis
performed in Section 2.3.3. In the case considered in this example,
this additional information clarifies that the bond performs better
than the risk-free asset, the former having a mean value of 115.6 and
the latter of 114.1.
In order to complete the representation of the riskreturn profile
of this product, it can be seen that the spread paid by the issuer
Figure 5.8 Partition of the risk-neutral density of the bond with respect to the point of zero return and to the two fixed positive thresholds identified with the superimposition technique. [Density (×10⁴) against final values from 20 to 180.]
[Figure: simulated trajectories of the bond value (between 40 and 140) over the five-year life of the product, t in years.]
[Figure: cumulative probability curve from 0 to 1 over t from 0 to 6 years.] The red line represents the unique possible holding period for this product under the hypothesis of the absence of the liquidability enhancement.
where $NAV_t$ denotes the net asset value of the portfolio at time t and $Floor_t$ denotes the value at time t of the capital to be protected at the five-year maturity.

The exposure to the risky asset varies over time according to the following relationship
\[
E_t = M \times Cushion_t
\]
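A minimal sketch of a protection mechanism of this kind, assuming the usual cushion definition $Cushion_t = NAV_t - Floor_t$ of CPPI-style strategies; the multiplier M, the discounting of the floor and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
T, steps = 5.0, 252 * 5
dt = T / steps
r, sigma_eq, M = 0.03, 0.20, 4.0      # risk-free rate, equity vol, multiplier
protected = 100.0                      # capital protected at maturity
nav = 100.0

for k in range(steps):
    t = k * dt
    floor = protected * np.exp(-r * (T - t))   # Floor_t: PV of the protection
    cushion = max(nav - floor, 0.0)            # Cushion_t = NAV_t - Floor_t
    exposure = min(M * cushion, nav)           # E_t = M x Cushion_t, capped at NAV
    shock = rng.standard_normal()
    nav += exposure * (r * dt + sigma_eq * np.sqrt(dt) * shock) \
           + (nav - exposure) * r * dt         # risky + risk-free components
    # discrete rebalancing leaves a small gap risk between dates

print("terminal NAV:", round(nav, 2), "protected:", protected)
```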
[Figure: risk-neutral densities (×10⁴) of the final values of the product and of the risk-free asset, final values up to 300.]
[Figure: simulated trajectories of the VPPI product's value (between 80 and 180) over the five-year horizon, t in years.]
114). The probability table also offers a clear hint about the chance of the product performing very well: the scenario 'higher than the final value of the risk-free asset' shows a non-negligible probability, complemented by a very high mean value. In these cases the product exploits favourable conditions on the equity markets, increasing the portfolio's exposure to the risky component.
Figure 5.13 illustrates some simulated trajectories of the VPPI product over the implicit time horizon of five years.

This figure presents two different behaviours for the possible trajectories: a first set of trajectories is characterised by low values for the product and low volatilities; in these cases, the equity markets perform poorly and so the managing algorithm switches to relatively safe asset allocations. The second set of trajectories shows high values for the portfolio and high volatilities, corresponding to a switch to risky positions performed by the managing algorithm.
The degree of risk is determined according to the methodology described in Section 3.6, that is, by taking the expected value of the annualised volatilities of the daily returns of each simulated trajectory and then comparing this number with the optimal grid of Figure 3.1. The average annualised volatility obtained in this way is around 5.6%, indicating that the risk class of the product is medium–high. The recommended investment time horizon is that implicit in the protection mechanism and is therefore represented by the contractual maturity of five years.
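A sketch of that computation, assuming a matrix of simulated daily values (one row per trajectory); the class boundaries below are hypothetical placeholders, not the calibrated grid of Figure 3.1.

```python
import numpy as np

def degree_of_risk(paths, trading_days=252):
    """Mean across trajectories of the annualised volatility of daily
    log returns (the quantity compared with the grid of Figure 3.1)."""
    rets = np.diff(np.log(paths), axis=1)
    ann_vol = rets.std(axis=1, ddof=1) * np.sqrt(trading_days)
    return ann_vol.mean()

# hypothetical class boundaries -- NOT the calibrated grid of Figure 3.1
grid = [(0.00, 0.01, "low"), (0.01, 0.04, "medium-low"),
        (0.04, 0.08, "medium-high"), (0.08, 0.25, "high"),
        (0.25, np.inf, "very high")]

rng = np.random.default_rng(5)
daily = 0.056 / np.sqrt(252) * rng.standard_normal((1000, 1260))
paths = 100.0 * np.exp(np.cumsum(daily, axis=1))   # stand-in trajectories

vol = degree_of_risk(paths)
label = next(name for lo, hi, name in grid if lo <= vol < hi)
print(f"average annualised volatility {vol:.1%} -> class '{label}'")
```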
[Figure: risk-neutral densities (×10⁴) of the final values of the index-linked certificate and of the risk-free asset, final values up to 160.]

[Figure: simulated trajectories of the certificate value (between 0 and 140) over the five-year horizon, t in years.]
of the risk-free asset and the associated mean value mainly reflects the potential performance of the product when the event described in point 3 occurs.

Figure 5.16 illustrates some simulated trajectories of the index-linked certificate over the implicit time horizon of five years.

From a first-level analysis, it is easy to observe that the volatility of the simulated trajectories tends to decrease over time, since the uncertainty about the possible payoffs naturally diminishes as time passes.
Figure 5.17 Evolution of the interest rates over the period January 2008–April 2011. [Three series: 6-month Euribor, 20-year IRS and the fixed rate.] Source: Datastream.
[Figure: histogram of the simulated percentage variations of the funding costs, between −15% and +20%, frequencies up to 20,000.]
is above 8.5%, the debtor will have to pay a cashflow indexed only
to this maximum rate.
The characteristics of the original liability (ie, the old structure
according to the logic of Section 2.3.4) and of the restructured liabil-
ity with the collar (ie, the new structure according to the logic just
recalled) are summarised in Figure 5.18.
To understand whether the transition from the original structure
of fixed-rate payments to the new structure provided by the collar is
effectively able to improve the funding conditions of the debtor, the
two alternative financial positions are compared from a probabilistic
point of view. In particular, the trajectory-by-trajectory technique
described in Section 2.3.4 is used to obtain the risk-neutral density
of the percentage variations in the funding costs associated with
the switch from the old liability to the new one of this example.
Figure 5.19 shows the shape of this density.
The sign of this density is crucial in order to divide it into two
complementary events which can be defined as follows.
1. The funding costs associated with the new liability are lower
than those associated with the old liability.
2. The funding costs associated with the new liability are higher
than those associated with the old liability.
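Given the simulated vector of percentage variations of the funding costs (one entry per trajectory, as produced by the trajectory-by-trajectory technique), the probabilities and the conditional means of these two events reduce to a few lines; the vector below is a synthetic stand-in for the actual simulation output.

```python
import numpy as np

rng = np.random.default_rng(6)
# synthetic stand-in for the simulated % variations of the funding costs
dcost = rng.normal(loc=1.3, scale=3.5, size=1_000_000)

lower = dcost < 0                     # event 1: the new liability is cheaper
print("P(lower)      =", round(lower.mean(), 3))
print("mean | lower  =", round(dcost[lower].mean(), 2))
print("P(higher)     =", round((~lower).mean(), 3))
print("mean | higher =", round(dcost[~lower].mean(), 2))
```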
Table 5.1 Table of probabilistic scenarios with conditional means for the exchange of the old fixed-rate liability with the new structured liability embedding the collar

Scenarios                                                       Probabilities (%)   Average variation of the funding costs (%)
The funding costs of the new liability are lower                36.4                -3.03
The funding costs of the new liability are higher               63.6                +3.76

The probability table that indicates the probabilities and the conditional mean values associated with the two listed scenarios is reported in Table 5.1.
This table shows that surrendering the original fixed-rate liability in favour of the new liability (restructured according to the interest rate collar) improves the financial position of the debtor with a probability of 36.4%; in these cases it leads to an average reduction of the funding costs of around 3.03%. In the remaining 63.6% of cases it would be better to remain in the old liability, since entering the collar would increase the funding costs by around 3.76% on average with respect to the payment of cashflows indexed to the fixed rate of 6.1%.
Overall, the probabilistic comparison therefore signals that the proposed collar is not sufficiently likely to reduce the funding costs of the debtor to justify the transition to the new liability. Technically, considering the interest rate curve and the corresponding volatility surface observed on the market in April 2011, the numbers in Table 5.1 indicate that, given the residual life of the debt and the amortisation plan used, the proposed collar has a pair of minimum and maximum rates (ie, 4.5% and 8.5%, respectively) which, owing to the spread of 240bp, do not sufficiently protect the debtor from unfavourable movements of the floating rate applied (the six-month Euribor).

The message conveyed by the table of probabilistic scenarios is supplemented by the additional indication of the initial theoretical value of the collar, which is equal to 3,943. This positive theoretical
Table 5.2 Conditional values on the tails for the exchange of the old fixed-rate liability with the new structured liability embedding the collar. [Single column: conditional values (%).]
1 A (long) interest rate collar is an interest rate derivative where the holder makes payments
indexed to a floating interest rate, but if the reference floating rate is below (above) a deter-
mined minimum (maximum) rate they cannot pay less (more) than the agreed minimum
(maximum). Therefore, it is a derivative that offers protection against an increase in the float-
ing interest rate above a certain upper threshold and, at the same time, if the floating rate
falls below a specific lower threshold, it requires the holder to pay a cashflow calculated on
this lower threshold. Technically, it is obtained (Hull 2009) from the combination of a long
position in an interest rate cap (whose strike rate determines the upper threshold) and a short
position in an interest rate floor (whose strike rate determines the lower threshold).
Conclusions
REFERENCES

Abramowitz, M., and I. Stegun, 1964, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, National Bureau of Standards Applied Mathematics Series, Number 55 (Washington, DC: US Department of Commerce).

Albanese, C., and G. Campolieti, 2006, Advanced Derivatives Pricing and Risk Management (Elsevier).

Björk, T., 2009, Arbitrage Theory in Continuous Time (Oxford University Press).

Cox, J. C., J. E. Ingersoll and S. A. Ross, 1985, "A Theory of the Term Structure of Interest Rates", Econometrica 53, pp. 385–407.

Duan, J., 1997, "Augmented GARCH(p, q) Process and Its Diffusion Limit", Journal of Econometrics 79, pp. 97–127.

Harrison, J. M., and S. R. Pliska, 1981, "Martingales and Stochastic Integrals in the Theory of Continuous Trading", Stochastic Processes and Their Applications 11(3), pp. 215–260.

Heath, D., R. A. Jarrow and A. Morton, 1992, "Bond Pricing and the Term Structure of Interest Rates: A New Methodology for Contingent Claims Valuation", Econometrica 60(1), pp. 77–105.

Heston, S. L., 1993, "A Closed-Form Solution for Options with Stochastic Volatility with Applications to Bond and Currency Options", The Review of Financial Studies 6(2), pp. 327–343.

Hull, J., 2009, Options, Futures and Other Derivatives, Seventh Edition (Englewood Cliffs, NJ: Prentice Hall).

Hull, J., and A. White, 1990, "Pricing Interest-Rate Derivative Securities", The Review of Financial Studies 3(4), pp. 573–592.

Karatzas, I., and S. E. Shreve, 2005, Brownian Motion and Stochastic Calculus, Second Edition (Springer).

Markowitz, H. M., 1959, Portfolio Selection: Efficient Diversification of Investments (New York: John Wiley & Sons).

Merton, R. C., 1974, "On the Pricing of Corporate Debt: The Risk Structure of Interest Rates", Journal of Finance 29(2), pp. 449–470.

Minenna, M., 2003, "The Detection of Market Abuse on Financial Markets: A Quantitative Approach", Quaderno di Finanza, Volume 54 (Rome: Commissione Nazionale per le Società e la Borsa).

Minenna, M., G. M. Boi, A. Russo, P. Verzella and A. Oliva, 2009, "A Quantitative Risk-Based Approach to the Transparency of Non-Equity Investment Products", Quaderno di Finanza, Volume 63 (Rome: Commissione Nazionale per le Società e la Borsa).
Revuz, D., and M. Yor, 1999, Continuous Martingales and Brownian Motion, Third Edition (Springer).