
A Quantitative Framework to Assess the Risk-Reward Profile of Non-Equity Products

By Marcello Minenna

Contributors: Giovanna Maria Boi, Paolo Verzella, Antonio Russo, Mario Romeo and Diego Monorchio

The financial and economic crisis that began in 2007 has demonstrated the shortcomings in traditional methods for risk assessment. Finding new ways to identify risks is imperative. The search for improved synthetic risk indicators is a major topic of international debate.

A Quantitative Framework to Assess the Risk-Reward Profile of Non-Equity Products, written by Marcello Minenna (acknowledged as "the quant regulator" by Risk magazine and professor of advanced courses in the field of financial mathematics), provides a comprehensive guide to a new infrastructure for risk management, introducing original methods and measures to assess the risk profile of non-equity financial products.

The book is organised around a three-pillar risk-based approach, founded on the careful analysis of the financial structure of any non-equity product. Three risk indicators: price unbundling and probabilistic scenarios; the degree of risk; and the recommended investment time horizon reveal the material risks of various non-equity products. The book provides a detailed illustration of the analytical tools underlying these three indicators.

A Quantitative Framework to Assess the Risk-Reward Profile of Non-Equity Products offers a way for financial institutions, investors, structurers, regulators, issuers and academics to better assess, understand and describe the risks associated with non-equity products and make a meaningful comparison between them.

Praise for Marcello Minenna's A Quantitative Framework to Assess the Risk-Reward Profile of Non-Equity Products:

"This book constitutes an excellent collection of methods for approaching non-equity products in this post-crisis era."
From the foreword by HÉLYETTE GEMAN, Director of the Commodity Finance Centre, Birkbeck, University of London and ESCP Europe

"This book fills the gap that exists between the risk management tools available to industry insiders, and those available to investors. It is a welcome contribution that will be helpful to anyone who needs to assess the risk of non-equity products."
JAKSA CVITANIC, Professor of Mathematical Finance, Caltech

"Rigor and clarity characterize this methodology to assess the risk of every non-equity product. Well established stochastic techniques are applied in an original way to convey the key information on the time horizon, the degree of risk, the costs and potential returns of the investment, and therefore to match the investors' preferences in terms of liquidity attitude, risk taking, desired returns and acceptable losses."
SVETLOZAR RACHEV, Department of Statistics and Applied Probability, University of California at Santa Barbara

PEFC Certified: this book has been produced entirely from sustainable papers that are accredited as PEFC compliant. www.pefc.org

minenna 2011/9/5 13:53 page i #1



A Quantitative Framework
to Assess the Risk-Reward Profile
of Non-Equity Products


A Quantitative Framework
to Assess the Risk-Reward Profile
of Non-Equity Products

Marcello Minenna


Published by Risk Books, a Division of Incisive Media Investments Ltd

Haymarket House
28–29 Haymarket
London SW1Y 4RX
Tel: + 44 (0)20 7484 9700
Fax: + 44 (0)20 7484 9797
E-mail: books@incisivemedia.com
Sites: www.riskbooks.com
www.incisivemedia.com

© 2011 Incisive Media


ISBN 978-1-906348-59-5
British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Publisher: Nick Carver


Commissioning Editor: Jennifer Gibb
Managing Editor: Lewis O'Sullivan
Developed by: Alice Levick
Designer: Lisa Ling
Copy-edited and typeset by T&T Productions Ltd, London
Printed and bound in the UK by Berforts Group Ltd

Conditions of sale
All rights reserved. No part of this publication may be reproduced in any material form whether
by photocopying or storing in any medium by electronic means whether or not transiently
or incidentally to some other use for this publication without the prior written consent of
the copyright owner except in accordance with the provisions of the Copyright, Designs and
Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency
Limited of 90, Tottenham Court Road, London W1P 0LP.
Warning: the doing of any unauthorised act in relation to this work may result in both civil
and criminal liability.
Every effort has been made to ensure the accuracy of the text at the time of publication; this
includes efforts to contact each author to ensure that their details were correct at publication.
However, no responsibility for loss occasioned to any person acting or refraining
from acting as a result of the material contained in this publication will be accepted by the
copyright owner, the editor, the authors or Incisive Media.
Many of the product names contained in this publication are registered trade marks, and Risk
Books has made every effort to print them with the capitalisation and punctuation used by the
trademark owner. For reasons of textual clarity, it is not our house style to use symbols such
as TM, ®, etc. However, the absence of such symbols should not be taken to indicate absence
of trademark protection; anyone wishing to use product names in the public domain should
first clear such use with the product owner.
While best efforts have been intended for the preparation of this book, neither the publisher,
the editor nor any of the potentially implicitly affiliated organisations accept responsibility
for any errors, mistakes and or omissions it may provide or for any losses howsoever arising
from or in reliance upon its information, meanings and interpretations by any parties.


To my daughter, Michela


Contents
About the Author xi

List of Contributors xiii

Foreword xv

Preface xix

Acknowledgements xxv

List of Figures xxvii

List of Tables xxxi

1 Introduction 1

2 The First Pillar: Price Unbundling and Probabilistic Scenarios 13


2.1 The risk-neutral density of a non-equity product 16
2.2 Price unbundling via the financial investment table 20
2.3 First pillar and non-elementary products 24
2.3.1 Increasing the detail of the financial investment table 26
2.3.2 The table of probabilistic scenarios 30
2.3.3 Methodology to build the table of probabilistic
scenarios 38
2.3.4 Probabilistic scenarios for non-equity exchange
structures 53
2.4 First pillar and elementary products 67
2.5 Closing remarks 73

3 The Second Pillar: Degree of Risk 77


3.1 Methodology to calibrate an optimal grid 80
3.2 The model for the automatic asset manager 83
3.3 The model to simulate the volatility 87
3.4 The predictive model for the volatility 89
3.4.1 The diffusion limit of the M-Garch(1,1) 92
3.4.2 Distributive properties and volatility prediction
intervals 103
3.4.3 Estimation of the parameters 107
3.5 Management failures and the optimal grid 124
3.5.1 Definition of management failures and introduction
to the calibration problem 126
3.5.2 Relation between relative widths and management
failures 131


3.5.3 The optimal grid on the reduced space of volatilities


[0, σn] 139
3.5.4 The optimal grid on the full space of volatilities
[0, +∞[ 144
3.6 Risk classification 147
3.7 Detecting migrations 150
3.8 Closing remarks 156

4 The Third Pillar: Recommended Investment Time Horizon 161


4.1 The minimum time horizon for risk-target and benchmark
products 163
4.1.1 The strong characterisation of the cost-recovery
event 166
4.1.2 The weak characterisation of the cost-recovery event 173
4.1.3 The closed formula for the cumulative probability
of the first-passage times 178
4.1.3.1 The case of the standard Brownian motion 178
4.1.3.2 The case of the arithmetic Brownian motion 181
4.1.3.3 The case of the geometric Brownian motion 185
4.1.3.4 The case of the geometric Brownian motion
specific to the product 187
4.1.4 Asymptotic analysis 187
4.1.5 Sensitivity analysis 196
4.1.5.1 First-order partial derivatives 204
4.1.5.2 Limit representations of the first-order
partial derivative with respect to the
volatility 213
4.1.5.3 Second-order partial derivatives 226
4.1.6 Existence and uniqueness of the minimum time
horizon for local correct ordering 233
4.1.7 The function of the minimum times 237
4.1.8 Existence and uniqueness of the minimum time
horizon for a global correct ordering 248
4.1.9 Switching to a discrete volatility setting 250
4.1.10 Extensions to more general dynamics for the process
{St}t≥0 252
4.1.11 Technical remarks 259
4.2 The recommended time horizon for return-target products 263
4.2.1 Illiquid products 264
4.2.2 Liquidity and liquidability 265
4.3 Closing remarks 270

5 Some Applications of the Risk-Based Approach 275


5.1 A risk-target product 277
5.2 A benchmark product 278


5.3 Return-target products: the case of a plain-vanilla bond


with significant credit risk 281
5.4 Return-target products: the case of a VPPI product 286
5.5 Return-target products: the case of an index-linked certificate 290
5.6 Non-equity exchange structures: the case of a collar
replacing a fixed-rate liability 293

6 Conclusions 299

Index 307


About the Author

Marcello Minenna, acknowledged by Risk magazine as "the quant enforcer" and "the quant regulator", is the head of the quantitative analysis unit at CONSOB (Commissione Nazionale per le Società e la Borsa, the Italian Securities and Exchange Commission), where he develops quantitative models for surveillance and supports the enforcement and regulatory units in their activities. Marcello teaches at several universities and holds courses for practitioners in the field of financial mathematics around the world. He graduated in economics from Bocconi University and received his MA and PhD in mathematics for finance from Columbia University and from the State University of Brescia. He is the author of several publications including the Risk Books bestseller A Guide to Quantitative Finance.


List of Contributors

Giovanna Maria Boi is a senior analyst at the CONSOB quantitative analysis unit. She graduated in economics from Bocconi University and received her MA, MS and PhD in mathematics for finance from Columbia University, Bocconi University and Milano Bicocca University.
Paolo Verzella is a senior analyst at the CONSOB quantitative
analysis unit. He was assistant professor in mathematical finance at
Milano Bicocca University and has taught in several Italian univer-
sities. Paolo received his PhD in mathematics for financial markets
from Milano Bicocca University.
Antonio Russo is a senior analyst at the CONSOB quantitative
analysis unit. He previously worked as an economic analyst at the
Italian Ministry of Economy and Finance.
Mario Romeo is an analyst at the CONSOB quantitative analysis
unit. He received his PhD in mathematics from the Scuola Normale
Superiore in Pisa.
Diego Monorchio is an analyst at the CONSOB quantitative analysis unit. He received his PhD in physics from the University Federico II of Naples.


Foreword

Excessive risk-taking by financial players played a leading role in the financial crisis that started in 2007. Some of these risks were
taken in order to realise highly speculative strategies that led to
bankruptcy for some very large banks and threatened the stability of
the international financial system altogether. The natural response
to this situation has been the introduction of specific regulations
within countries, aimed at obliging market participants to abide by
better rules and transparency. Specific interventions have also been
adopted, within countries or at the international level, in particu-
lar with the introduction of Basel III, the stricter capital require-
ments for financial intermediaries and a greater transparency in the
financial products sold to retailers. All major regulators have taken
action in this regard by initiating substantial reforms on the issues
of offerings of investment products and selling rules at the points of
sale. The solutions adopted in the different countries were inspired
by the common finding of the high opacity that has characterised
the markets of financial products, often favoured because of their
complexity.
In fact, the excessive complexity of products like some squared
collateralised debt obligations and bespoke tranches that traded prior to the
crisis made them impossible to price and, hence, to properly assess
their riskiness. The choice of the mathematical models used in their
valuation, such as the use of a Gaussian copula to link the distri-
butions of the different components of a collateralised debt obli-
gation or the wrong choice of the stochastic intensity of the credit
events' arrival, was not the main issue, as was sometimes stated in
the financial press. The first problem was the non-disclosure of their
constituents, and this should have been a greater concern for the
rating agencies. Even when we travel away from the class of equity
derivatives into credit or commodities, we need to remember that the
first merit of the deservedly famous Black–Scholes–Merton formula
for call options is the simplicity of the setting: two non-redundant
traded assets; and the first fundamental assumption was the fact that
the stock underlying the derivative would be continuously traded,
ensuring at all times the observability of its market price. As we


know, we can replace the geometric Brownian motion by a positive mean-reverting diffusion or even a simple Poisson process, and all
results persist, except for the simplicity of the closed-form solution.
The market completeness created by the assumptions of this ele-
gant model is probably the main reason why it received the Nobel
prize in 1997 and why it continues to be taught at the start of all
courses on derivatives. This completeness was absent from most
credit derivatives markets, for a large number of converging reasons.
Assessing the risk–return profile of the various non-equity products is a
crucial step for investors who have decided to diversify their invest-
ment strategies. But the information, and, in particular, the precon-
tractual documentation, provided to them has to efficiently convey
the essential description of the risks embedded in each financial
product, beyond the purely descriptive features of the instrument.
This book presents a quantitative approach to the measurement
and representation of the risks of non-equity products that comes
from a simple and winning intuition: the information needs of retail
investors are not really different from those of financial institutions,
since they both want the upside gain while trying to contain the downside risk. Starting from this intuition, three pillars are developed, charged with unveiling the risks of non-equity products.
With the first pillar the investor is given not only the fair value
of the product but also the corresponding probabilistic scenarios
at the expiry date. This device proves to be very useful for quickly
catching the behaviour of the probability distribution of the final
results of the product and placing it in a clear table within the reach
of any retail investor. The second pillar records the placement of
the products volatility inside a grid of subsequent intervals so that
investors can achieve a clear indication of the whole risk associated
with the product. And the third pillar completes the framework
with an indicator aimed at identifying the optimal maturity of the
product, namely how long the product should be kept in order to
get a break-even result. The investor is therefore able to compare the
optimal time horizon with their expected time horizon and know
when an asset is expected to produce cashflows.
Regarding the techniques introduced in the book, tools like the powerful numéraire change are employed, as well as risk indicators such as the improvement of diffusive generalised autoregressive conditional heteroscedasticity (Garch) volatility estimators by


constructing a grid consistent with the lifetime of the financial instrument. Lastly, the optimal maturity of the product comes in an analytical expression derived from an application of the theory of first-passage times, and from the study of how the hitting times to the break-even level move as volatility changes.
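As a rough illustration of this first-passage machinery, the probability that a geometric Brownian motion hits a barrier within a horizon T admits a closed form via the reflection principle. The sketch below is the standard textbook result, not the product-specific formula derived in Chapter 4; all parameter values are illustrative.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def prob_hit_lower_barrier(S0: float, B: float, mu: float, sigma: float, T: float) -> float:
    """P(min_{t<=T} S_t <= B) for dS_t = mu*S_t dt + sigma*S_t dW_t, with B <= S0.

    Standard reflection-principle formula for the running minimum of a
    geometric Brownian motion (illustrative sketch only).
    """
    nu = mu - 0.5 * sigma ** 2        # drift of the log-price
    b = log(B / S0)                   # log-distance to the barrier (<= 0)
    s = sigma * sqrt(T)
    return norm_cdf((b - nu * T) / s) + exp(2.0 * nu * b / sigma ** 2) * norm_cdf((b + nu * T) / s)
```

Here the barrier sits below the starting value; a cost-recovery analysis instead tracks when the product's value first recovers the costs paid, but the same first-passage arguments apply, and the probability is monotone in the horizon T.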
In summary, this book constitutes an excellent collection of
methods for approaching non-equity products in this post-crisis era.
Hélyette Geman
Birkbeck, University of London
May 2011


Preface

Retail investors have always been at a disadvantage when buying non-equity financial products. While structuring banks have access
to cutting-edge models, retail investors typically do not have the
technical expertise to understand the risks or implicit costs associ-
ated with a particular product. Instead, they are reliant on the infor-
mation provided by the banks or distributors. It is up to regulators
to set the rules of the game to ensure there is sufficient transparency,
both within the prospectus and at the point of sale.
There has been some improvement in the rules governing the mar-
keting and sale of financial products to retail investors, particularly
in the wake of the global financial crisis which began in 2007, but
there is still some way to go. Regulations still tend to use simplistic
labels to group products, and require long descriptions of their main
features rather than focusing on information that provides a realistic
snapshot of the risks.
Roughly speaking, there have been three phases of evolution in
the regulation of retail investment product offerings in Europe. The
first covers the period before the implementation of the Markets in
Financial Instruments Directive (Mifid) in 2007. During this phase,
a prospectus could run to hundreds of pages, with lengthy descrip-
tions of multiple risk factors, written in legal jargon, which were
often not specific to the individual features of the product. There
was also little analysis of the risks and the possible consequences
(either positive or negative) for the investor. As a result, the issuer
had significant leeway in choosing what to say to the investor and
how to say it.
At the same time, there was a lack of support for investors at
the point of sale. Suitability and appropriateness tests tended not
to be conducted, and distributors only had to comply with general
principles of good conduct, eg, always operating in the interests
of the client. These were not typically backed up by more detailed
rules, making it difficult to enforce effectively. Supervisors had min-
imal responsibility, as long as the prospectus covered all the risks,
regardless of whether important information was hidden in long
and virtually incomprehensible prose.


The second phase coincided with the introduction of Mifid. Crucially, the regulation imposed disclosure obligations on distributors. It also saw the start of a process to improve the transparency
of prospectuses by adding certain key information essentially, a
summary of how the product works and the main risks. However,
the information provided by the distributor was independent from
that communicated by the issuer via the prospectus, so investors
might receive information that was not consistent or comparable.
There were also weaknesses in the two sets of requirements. The
summary of key information was not determined by a robust and
objective analysis of the financial structure of the product and the
interaction between the different sources of risk. The rules instead
stipulated sub-optimal synthetic risk indicators and non-technical
solutions to illustrate potential performance what-if scenarios,
for example leaving a huge amount of discretion to the issuer.
Meanwhile, distributors were far removed from the product creation
process, so may not have completely understood all the features of
the structure. By virtue of the fact the distributor is required to hit
certain budget targets, its interests also conflict with those of the
customer.
Again, regulators had a limited role, with little attention paid to
the value of the information provided to investors. As a result, retail
investors were still not able to clearly grasp the material risks of a
product and its prospective returns.
The third phase has emerged more recently, with a revision of
European regulation on pre-contractual disclosure almost complete.
Other initiatives are also under way, including a review of Mifid
and the drawing up of guidelines for packaged retail investment
products (Prips) by the European Commission (EC). These regu-
latory changes aim to increase investor protection and in this
regard, significant progress has been made. However, there is still
a reliance on the old descriptive approach and the classification
of products based on legal or commercial labels. These categories
include non-complex, complex and super-complex products, assembled or not assembled, and structured or plain vanilla, but they are
always purely qualitative categories that do not provide any clear
information on the risks of the investment.
As an illustration, a recent consultation document on Prips,
published by the EC on November 26, 2010, covers structured


bonds (for example, a bond plus a call option) but does not include
vanilla bonds. Nonetheless, these instruments involve the packaging
of two distinct sources of risk: interest rate risk and risk of default.
The latter is pure risk: the risk of loss. This highlights the difference
in the quality of information disclosed to investors, and encourages
issuers to prefer these instruments without adequately compensat-
ing the buyer for the risk taken. The recent growth in the offering
of subordinated bonds (that is, a plain vanilla bond with a relevant
source of credit risk) illustrates the flaws in the status quo.
As this approach is developed further, partly as a result of some
regulatory choices, the sector will converge towards a new phase.
There are two alternatives, each different, with distinct implications for the protection of investors and competition between financial institutions.
One possibility is that regulators decide to put constraints on the
structure of products or even prohibit the sale of certain instruments
simply because they have the wrong label. For instance, a subor-
dinated or inflation-linked product might be automatically classi-
fied as very risky and complex and deemed unsuitable for retail
investors. This type of approach appears to be gaining ground and
typically involves banning the sale of certain products to specific
groups of customers.
This would be detrimental to the whole market. Issuers would see
important channels of funding closed down, while investors who
want higher coupons and exposure to inflation, for instance, would
not be able to find a product to suit their needs.
Many have argued that retail customers do not read or under-
stand the information provided within a prospectus, and say further
revising the transparency rules would not help enhance investor
protection. But few have questioned why they have not engaged
with the prospectus and no-one has considered whether the
information provided corresponds with what the investor actually
wants. In fact, an alternative to the prohibitionist approach is to
enhance transparency and create greater synergy with the rules
on conduct.
This alternative approach would require transparency that is
focused on the financial structure of the product and the provision
of information that is genuinely critical to the investor. This can be
achieved using quantitative methods.


This solution is not difficult to implement. Product providers already use quantitative techniques: they hire physicists and mathematicians to develop pricing and hedging models, which are then
used to determine whether a particular product should be placed
on the market or unwound if the risk exceeds certain thresholds.
These same models can be used to create a two-page key informa-
tion document, giving retail customers crucial information that will
help them find the investment they are looking for.
This analysis can be used by distributors or produced by inde-
pendent financial advisers to help their customers choose between
different investment alternatives, hence overcoming the inconsistencies between the information provided at the time of offer and the time of placement.
These considerations inspired the risk-based approach to trans-
parency for non-equity investment products illustrated in this book.
This method differentiates simple products from complex ones by
looking at the financial engineering rather than relying on simple
labels.
If the product does not contain derivatives (even implicit) and
has marginal credit risk over its time horizon, its simplicity is self-
evident. In these circumstances, it does not make sense to overload
the investor with unnecessary information: using a few basic indi-
cators such as the internal rate of return, the buyer can determine
whether an investment is suitable. A suitability or appropriateness
test should not even be required, as these are products that can be
purchased at the counter without a prospectus or financial adviser.
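The internal rate of return mentioned above is straightforward to compute. A minimal sketch follows; the plain-vanilla bond and its cashflow figures are hypothetical, chosen only to show the calculation.

```python
def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-10):
    """Internal rate of return: the rate r at which the net present value
    of the dated cashflows (outlay negative, receipts positive) is zero.
    Solved by bisection, assuming the NPV is decreasing on [lo, hi]."""
    def npv(r):
        return sum(cf / (1.0 + r) ** t for t, cf in cashflows)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if npv(mid) > 0.0:   # rate still too low: NPV positive
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical bond: price 98, 4% annual coupon, redemption at par in 3 years
flows = [(0.0, -98.0), (1.0, 4.0), (2.0, 4.0), (3.0, 104.0)]
rate = irr(flows)
```

For such an elementary product, this single number already lets the buyer compare the investment against alternatives without any further apparatus.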
If the product is complex, because it either contains one or more
derivatives components or incorporates a significant credit risk
exposure, the investor must be given information that will truly
help make an assessment. The buyer must know the minimum time
within which they will, with reasonable certainty, recover the costs
that have been paid. In this way, the investor is able to check the com-
patibility of the product with their optimal holding period, ie, their
liquidity preferences. The customer must also know the overall risk
posed by the product to see if it is in line with their risk appetite, as
well as potential performance in order to compare it with the target
rate of return.
The information needed for this risk-based approach is provided
via three interconnected synthetic indicators: the recommended


investment time horizon, the degree of risk and the probabilistic performance scenarios.
The recommended investment time horizon makes use of the
stochastic theory of first-passage times to identify the earliest point
the investor is likely to recover the costs paid, given the risk
of the product. The degree of risk indicates the current level of
risk by comparing the volatility of its potential daily returns over
the next year with a suitably calibrated grid of volatility inter-
vals. Each one is associated with a description that will be easily
understood by the investor: high risk, low risk, etc. The proba-
bilistic scenarios, meanwhile, summarise the probability distribu-
tion of the final payout, split into four events of primary interest
for any investor: suffering a loss (negative return), or getting back
the amount invested plus a return below, above or in line with the
risk-free rate.
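A four-event table of this kind can be sketched with a small Monte Carlo routine. In the sketch below, the geometric Brownian payoff, the parameter values and the ±5% "in line" band are illustrative assumptions, not the calibrated methodology presented in the book.

```python
import math
import random

def scenario_table(payoff, invested, r, T, n_paths=100_000, band=0.05, seed=7):
    """Bucket simulated terminal values of a product into the four events:
    loss; positive return below, in line with, or above the risk-free asset.
    `band` (an illustrative assumption) treats +/-5% of the risk-free
    asset's final value as 'in line'."""
    rng = random.Random(seed)
    risk_free = invested * math.exp(r * T)   # final value of the risk-free asset
    counts = {"loss": 0, "below risk-free": 0, "in line": 0, "above risk-free": 0}
    for _ in range(n_paths):
        v = payoff(rng)                      # one simulated terminal value
        if v < invested:
            counts["loss"] += 1
        elif v < risk_free * (1.0 - band):
            counts["below risk-free"] += 1
        elif v <= risk_free * (1.0 + band):
            counts["in line"] += 1
        else:
            counts["above risk-free"] += 1
    return {event: n / n_paths for event, n in counts.items()}

# Illustrative product: terminal value of a geometric Brownian motion
# under the risk-neutral measure (S0 = invested capital, no explicit costs)
S0, r, sigma, T = 100.0, 0.03, 0.25, 5.0

def gbm_payoff(rng):
    z = rng.gauss(0.0, 1.0)
    return S0 * math.exp((r - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * z)

table = scenario_table(gbm_payoff, invested=S0, r=r, T=T)
```

The resulting probabilities sum to one and can be laid out as a simple two-column table, which is exactly the kind of compact summary a retail investor can read at a glance.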
The validity of the quantitative approach to risk transparency
presented in this book has repeatedly been affirmed by representatives of consumers' associations and of the academic and regulatory world.1
If the quantitative framework illustrated in this book is imple-
mented in a proper regulatory framework, it would be a crucial step
towards a fourth phase in the regulation of retail structured prod-
ucts, which has investor protection at its heart and promotes healthy
competition among financial institutions.
Complex or structured products are not evil in themselves; they
are only a problem if they are opaque and generate undue profits for
banks and other institutional operators. On the contrary: by intro-
ducing strict new transparency requirements, retail customers can
identify appropriate investment opportunities, while still enabling
product providers to profit fairly from this business.
Such a big challenge requires a revolution in the current mecha-
nisms at both financial institutions and regulators. Financial institu-
tions will have to modify their internal organisation to create strong
cooperation between the departments responsible for product engi-
neering, risk management and compliance. Regulators, on the other
hand, will have to build units responsible for implementing quanti-
tative models, which would be used for surveillance, to analyse data
provided by financial institutions and identify anomalies that could
lead to enforcement actions.


In the case of false or inconsistent information, these interventions could occur immediately after the start of an offer period,
and lead to a quick update in the information provided by the Key
Investor Information (KII) document. Where the information is cor-
rect but is inconsistent with the profile of the client, it would signal
an episode of mis-selling, and consequently the rules of conduct
should be engaged.
Despite the challenges, the benefits are obvious: retail investors
would have a clear, statistically meaningful summary of risks posed
by a particular product. This would represent a huge improvement
in the transparency provided to retail customers and give supervi-
sors a more defined role in their bid to prevent product mis-selling
and protect investors.2

1 For more details see "Performance anxiety", Risk Magazine, February 2011, pp 46–49 (available at http://www.risk.net/risk-magazine/feature/1939516/academics-changes-performance-scenario-rules) and "The what-if scenario", Structured Products, January 2011, pp 38–39 (available at http://www.risk.net/structured-products/feature/1935182/disclosure-regime-leads-discord-italy).
2 This preface is an excerpt, with minor adaptations, from the opinion piece of the author, titled "Enter the quant regulator" and published by Risk Magazine, May 2011, pp 46–49 (available at http://www.risk.net/risk-magazine/opinion/2074539/enter-quant-regulator).

Acknowledgements

The risk-based three pillars approach presented in this book bears
the imprint of colleagues in CONSOB and friends from within
academia. Each of them, for various reasons, has provided a crit-
ical contribution in developing this quantitative framework for risk
measurement and transparency.
I sincerely thank Alessandra Balbo, Adele Oliva, Marina Piccioni
and Massimo Sabbatini, whose sharp thinking and incisive
questioning were determinant in settling the approach. I also wish
to express my gratitude to Valentina Alfi and Giulia Sargenti, whose
intensive numerical analyses provided robust back-testing of the
principles and concepts developed in the book.
I acknowledge with gratitude the editorial assistance of Lewis
O'Sullivan and Alice Levick of Risk Books.
Lastly, I must acknowledge my wife, Antonella, for her strong
encouragement and continuous support during the drafting of
the book.

List of Figures

2.1 Comparison of the densities of two non-equity products with
the same fair value 31
2.2 Densities of the same non-equity product obtained with two
different models 33
2.3 Partition of the risk-neutral density of the non-equity product
with respect to the point of zero return 39
2.4 Partition of the risk-neutral density of the non-equity product
with respect to the point of zero return and two fixed positive
thresholds to be identified 40
2.5 Identification of the reference thresholds 1 and 2 on the
density of the risk-free asset 41
2.6 Partition of the risk-neutral density of the non-equity product
with respect to the point of zero return and to the two fixed
positive thresholds identified with the superimposition
technique 42
2.7 Comparison of the densities of two non-equity products with
the same probabilities 46
2.8 Comparison of the densities of the risk-free asset with
a non-equity product having a significant probability
associated with the scenario in line 51
2.9 Densities of two products involved in an exchange
hypothesis: old product versus product new1 55
2.10 Density of the differences between the product new1 and the
old product 55
2.11 Densities of two products involved in an exchange
hypothesis: old product versus product new2 58
2.12 Density of the differences between the product new2 and the
old product 58
2.13 Densities of two products involved in an exchange
hypothesis: old product versus product new3 62
2.14 Density of the differences between the product new3 and the
old product 63
2.15 Density of a fixed-coupon bond with a negligible credit risk
exposure 70
2.16 Density of a floating-coupon bond with a negligible credit
risk exposure 71

3.1 Stationary density of the volatility: behaviour of an automatic
asset manager with a risk budget equal to [5%,20%] 86

3.2 Simulated trajectories of the volatility of a product with a risk
budget [5%,20%] 89
3.3 Rescaling of the discrete process {ln σ_kh²}_{k∈ℕ} 93
3.4 Relation between the processes {ln σ_kh²}_{kh≥0} and {ln σ_t²}_{t≥0} 97
3.5 Volatility and its predictive band based on the diffusion limit
of the M-Garch(1,1) 123
3.6 Trajectories of a five-year subordinated fixed-coupon bond 149
3.7 Migrations of the Standard & Poor's 500 with respect to the
optimal grid (January 2001–January 2011) 154
3.8 Migrations of the Standard & Poor's 500 with respect to a
non-optimal grid (January 2001–January 2011) 155

4.1 Plot of the function Q(S_t ≥ NC) with respect to time t
(ic = 2%, r − rc = 3.5%) 169
4.2 Plot of the function Q(τ_S,NC ≤ t) with respect to the time t
(ic = 2%, r − rc = 3.5%) 176
4.3 Plot of the function P(σ, t) with respect to the time t (ic = 2%,
r − rc = 3.5%) 192
4.4 Average path and first hitting time of two products with
same drift but different volatilities 197
4.5 Trajectories and first hitting times of two products with same
drift but different volatilities 198
4.6 Comparison between the probability density functions of
the first hitting times of two products with same drift but
different volatilities 199
4.7 Comparison between the cumulative probability functions of
the first hitting times of two products with same drift but
different volatilities 200
4.8 Superimposition of the cumulative probability functions of
the first hitting times of two products with same drift but
different volatilities 202
4.9 Superimposition of the cumulative probability functions of
the first hitting times of three products with same drift but
different volatilities 202
4.10 Plot of the function ∂P(σ, t)/∂t with respect to the time t
(r − rc = 3.5%, σ = 2.4%) 210
4.11 Plot of the function ∂P(σ, t)/∂t with respect to the time t
(r − rc = 3.5%, σ = 30%) 210
4.12 Plot of the function ∂P(σ, t)/∂σ with respect to the volatility
(r − rc = 3.5%, σ = 2.4%) 214
4.13 Plot of the function ∂P(σ, t)/∂σ with respect to the volatility
(r − rc = 3.5%, σ = 30%) 214
4.14 Plot of the function ∂P(σ, t)/∂σ on the space (σ, t) (ic = 2%,
r − rc = 3.5%) 225
4.15 Plot of the function ∂²P(σ, t)/∂σ² on the space (σ, t)
(ic = 2%, r − rc = 3.5%) 225

4.16 Plot of the function ∂²P(σ, t)/∂σ∂t on the space (σ, t)
(ic = 2%, r − rc = 3.5%) 228
4.17 Plot of the function ∂²P(σ, t)/∂σ∂t with respect to the time t
(r − rc = 3.5%, σ = 2.4%) 231
4.18 Plot of the function ∂²P(σ, t)/∂σ∂t with respect to the time t
(r − rc = 3.5%, σ = 30%) 231
4.19 Plot of the function P(σ, T) with respect to the time t
(ic = 2%, r − rc = 3.5%); low-volatility environment 236
4.20 Plot of the function P(σ, T) with respect to the time t
(ic = 2%, r − rc = 3.5%); high-volatility environment 236
4.21 Plot of the function ∂P(σ, t)/∂σ and the function of the
minimum times on the space (σ, t) (ic = 2%, r − rc = 3.5%) 237
4.22 Plot of the function ∂²P(σ, t)/∂σ² on the space (σ, t)
(ic = 2%, r − rc = 3.5%) 248
4.23 The technical minimum time under a time-varying
deterministic interest rates curve (ic = 0%) 256
4.24 Plot of the function Q(τ_S,NC ≤ t) with respect to the
time t with different discretisation time steps (ic = 2%,
r − rc = 3.5%, σ = 5.15%) 260
4.25 Plot of the function Q(τ_S,NC ≤ t) with respect to the volatility
with different discretisation time steps (ic = 2%,
r − rc = 3.5%, t = 3 years) 260
4.26 Procedure to determine the minimum time horizon in a
discrete time-discrete volatility setting (ic = 2%, r − rc = 3.5%) 262
4.27 Plot of the function Q(τ_S,NC ≤ t) with respect to the time t for
an illiquid return-target product with implicit time horizon
of five years (ic = 2%) 265
4.28 Plot of the function Q(τ_S,NC ≤ t) with respect to the time t for
a liquid return-target product with implicit time horizon of
five years (ic = 2%) 267
4.29 Plot of the function Q(τ_S,NC ≤ t) with respect to the time t for
a liquid return-target product with implicit time horizon of
10 years (ic = 2%) 268

5.1 Product information sheet for a risk-target non-equity product 277
5.2 Plot of the cumulative distributions of the first-passage times
used to determine the minimum recommended investment
time horizon of the risk-target non-equity product 278
5.3 Product information sheet for a benchmark non-equity
product 279
5.4 Historical values and daily returns of the benchmark
non-equity product (October 2009–January 2011) 279
5.5 Historical annualised daily volatility of the benchmark
non-equity product (October 2010–January 2011) 280

5.6 Plot of the first-passage times cumulative distributions used
to determine the minimum investment time horizon of the
benchmark product 281
5.7 Product information sheet for a return-target non-equity
product: a five-year senior bond 283
5.8 Partition of the risk-neutral density of the bond with respect
to the point of zero return and to the two fixed positive
thresholds identified with the superimposition technique 284
5.9 Trajectories of a five-year senior coupon bond 285
5.10 Plot of the cumulative probability distribution of the
first-passage times used to determine the minimum time
horizon of the bond (blue line) 285
5.11 Product information sheet for a return-target non-equity
product: a five-year VPPI structure 287
5.12 Partition of the risk-neutral density of the VPPI product
with respect to the point of zero return and to the two fixed
positive thresholds identified with the superimposition
technique 288
5.13 Trajectories of the five-year VPPI product 289
5.14 Product information sheet for a return-target non-equity
product: a five-year index-linked certificate 291
5.15 Partition of the risk-neutral density of the index-linked
certificate product with respect to the point of zero return
and to the two fixed positive thresholds identified with the
superimposition technique 292
5.16 Trajectories of a five-year index-linked certificate 292
5.17 Evolution of the interest rates over the period January
2008–April 2011 293
5.18 Summary of the characteristics of the old fixed-rate liability
and the new structured liability embedding the collar 294
5.19 Density of the percentage variations in the funding costs
associated with the switch from the old liability to the new
structured liability embedding the collar 295

List of Tables

2.1 Layout of the table of probabilistic scenarios 44
2.2 Table of probabilistic scenarios for the product of Figure 2.6 44
2.3 Layout of the table of probabilistic scenarios with the
conditional means 47
2.4 Table of probabilistic scenarios with the conditional means
for the old product 48
2.5 Table of probabilistic scenarios with the conditional means
for the new product 48
2.6 Table of probabilistic scenarios for the product of Figure 2.8
with the conditional means 51
2.7 Layout of the table of probabilistic scenarios with the
conditional means and the (unconditional) mean of the
risk-free asset 52
2.8 Table of probabilistic scenarios for the product of Figure 2.8
with the conditional means and the (unconditional) mean of
the risk-free asset 53
2.9 Layout of the table of probabilistic scenarios for non-equity
exchange products 57
2.10 Table of probabilistic scenarios for the exchange hypothesis
corresponding to Figure 2.10 57
2.11 Layout of the table of probabilistic scenarios with conditional
means for non-equity exchange products 60
2.12 Table of probabilistic scenarios with conditional means for
the exchange of the old product with the product new1 61
2.13 Table of probabilistic scenarios with conditional means for
the exchange of the old product with the product new2 61
2.14 Layout of the conditional values on the tails for a non-equity
exchange product 64
2.15 Conditional values on the tails for the exchange hypothesis
old versus new1 65
2.16 Conditional values on the tails for the exchange hypothesis
old versus new3 65

3.1 The optimal grid 147

4.1 Maximum attainable probability for the cost recovery event
(r − rc = 1.5%) 192
4.2 Maximum attainable probability for the cost recovery event
(r − rc = 0.5%) 193

5.1 Table of probabilistic scenarios with conditional means for
the exchange of the old fixed-rate liability with the new
structured liability embedding the collar 296
5.2 Conditional values on the tails for the exchange of the
old fixed-rate liability with the new structured liability
embedding the collar 297

Introduction

The first and most immediate distinction inside the universe of
financial products is between equity and non-equity products. The
former have a one-to-one correspondence with the underlying risk
source, while the latter are characterised by more or less sophisticated
choices of financial engineering that transform certain risk factors
into a new structure with a stand-alone risk–return profile, often
specified in relation to a given time horizon.
Through this process of risk restructuring, non-equity products
increase market completeness by making available a huge variety
of financial engineering solutions that meet the specific objectives
of both financial institutions and non-professional (so-called retail)
operators.
The examination of the risks of these products, precisely because
of how they are structured, is carried out on the market through
quantitative methods. In their activities of pricing and risk mea-
surement and management, financial institutions make use of spe-
cific quantitative tools to determine the fair value of the different
products and establish appropriate risk metrics. Given the random
nature of the variables to be assessed, the quantitative models and
measures used for these purposes rely strongly on probability theory
and stochastic calculus.
Using the analytical tools popular with institutional operators,
this book presents a quantitative approach to the risk transparency
of non-equity products (the so-called risk-based approach). This
provides an information set that is comprehensible, concise and
effective in supporting retail investors in making their investment
decisions.
These decisions are based on a paradigm (so-called sequential
filtering) that articulates, in three phases, the process by which each
investor chooses among the various alternatives available in the
market on the basis of their liquidity attitude, their risk appetite
and, finally, the limits of costs and the target performances
sought in the non-equity product.
The liquidity attitude identifies the maximum holding period for
which the investor is willing to give up their cash to buy a financial
product. This attitude also signals the implicit expectation that the
investor will liquidate the product after their holding period under
economically efficient conditions (ie, with a profit) or, in any case,
having at least recovered all costs incurred.
The risk appetite represents the threshold of tolerance of the
investor to the variability of the results that can be obtained from
the non-equity product, and thus the losses they are willing to bear.
From this perspective a key role is played by an integrated assess-
ment and representation of all the risk factors of the product and the
particular ways in which these are assembled within its financial
structure.
The limits of costs indicate the maximum amount that the investor
is willing to pay in expectation of a certain performance over this
period. In this sense, a clear recognition of all the expenses that are
incurred for a non-equity investment and the knowledge of the final
payoffs that a product can offer in relation to the different possible
evolutions of the underlying risks are essential.
The decision variables described above clearly suggest the nature
of the essential information for the investors. From this perspec-
tive, the risk-based approach sets out an objective methodology
to determine and represent three synthetic risk indicators (the so-
called three pillars), all calculated using probabilistic tools, which,
in a clear, meaningful and internally consistent way, meet the infor-
mation needs that emerge when we are interested in comparing and
choosing among the various non-equity products:

the price unbundling and the probabilistic scenarios (the first
pillar);

the degree of risk (the second pillar); and

the recommended investment time horizon (the third pillar).

The first pillar of the risk-based approach relies on two complementary
tables, the financial investment table and the table of
probabilistic scenarios, to extract from the risk-neutral density of
the non-equity product the core information about its value consid-
ered at two specific points in time: the issue date and the end of the
recommended investment time horizon, respectively.
The financial investment table decomposes the issue price (so-
called unbundling) to highlight the relative contribution of the fair
value and of the total costs applied, hence providing a first useful
indication about the fairness of the risk–return profile of any
non-equity product. Where the product is quite elementary, this table,
possibly assisted by some deterministic return indicator, offers
a good synthesis of the costs paid and of the payoffs attainable at
the end of the recommended time horizon. On the other hand, non-
elementary products are typically characterised by non-linear payoff
structures or by hidden risk exposures which require further infor-
mation in order to allow a better understanding of how the under-
lying risks can concretely affect the final values of the investment. In
this perspective, inside the financial investment table, richer infor-
mation is provided by applying the portfolio replication principle
to split the fair value into its risk-free and risky components so that
the investors see how much of the product is similar to a risk-free
security and how much is instead the value of the bet inherent in the
risks of the investment.
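The decomposition just described can be sketched in a few lines; the function and all figures below are hypothetical illustrations, not values from the book:

```python
# Sketch of the financial investment table (first pillar): the issue price is
# unbundled into the fair value (risk-free plus risky component) and the
# total costs. All figures are invented for illustration.

def unbundle(issue_price, risk_free_component, risky_component):
    """Decompose an issue price into its fair value and total costs."""
    fair_value = risk_free_component + risky_component
    return {
        "issue price": issue_price,
        "fair value": fair_value,
        "risk-free component": risk_free_component,
        "risky component": risky_component,
        "total costs": issue_price - fair_value,
    }

table = unbundle(issue_price=100.0, risk_free_component=88.0, risky_component=7.5)
for item, value in table.items():
    print(f"{item:22s} {value:6.2f}")
```

The split of the fair value into its two components shows how much of the product resembles a risk-free security and how much is the value of the bet on the underlying risks.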
The table of probabilistic scenarios supplements the financial
investment table by recovering some key risk information connected
to the peculiar shape taken by the risk-neutral density of the final val-
ues of the investment. In line with the principle of reduction in gran-
ularity, this density is partitioned into a few macro-events that are of
concrete interest for the average investor as they capture the perfor-
mance risk of a product at the end of the recommended time horizon.
This risk represents the products ability to create added value for the
investor both in absolute terms (ie, compared with the issue price),
and in relative terms (ie, compared with the results achievable by
investing the same amount of money in a risk-free asset). The
partition technique used for this comparison is the superimposition of
the density of the risk-free asset onto that of the non-equity product. The
final output is a table containing the probabilities of four alternative
scenarios: negative performances, or positive performances lower
than, in line with or higher than those of the risk-free numéraire.
Attached to the probabilities is a synthetic indicator of the
final value of the investment conditional on the occurrence of each
of the four different scenarios, and, if needed, additional informa-


tion aimed at ensuring a clearer comparison with the risk-free asset
is also provided.1
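As a hedged illustration of the table of probabilistic scenarios, the sketch below partitions a simulated risk-neutral density into the four macro-events. The lognormal product, the deterministic risk-free outcome and the "in line" band are simplifying assumptions of the sketch (the book derives the thresholds by superimposing the full density of the risk-free asset), and all parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, r, sigma = 100_000, 5.0, 0.03, 0.25
invested = 100.0

# Risk-neutral terminal values of a hypothetical lognormal product.
product = invested * np.exp((r - 0.5 * sigma**2) * T
                            + sigma * np.sqrt(T) * rng.standard_normal(n))

# Deterministic risk-free outcome; the "in line" band around it is an
# illustrative stand-in for the thresholds found by superimposing densities.
risk_free = invested * np.exp(r * T)
lo, hi = 0.95 * risk_free, 1.05 * risk_free

scenarios = {
    "negative performance":          product < invested,
    "positive, below the risk-free": (product >= invested) & (product < lo),
    "in line with the risk-free":    (product >= lo) & (product <= hi),
    "above the risk-free":           product > hi,
}
for name, mask in scenarios.items():
    print(f"{name:30s} prob = {mask.mean():6.1%}"
          f"  conditional mean = {product[mask].mean():7.2f}")
```

The four events are mutually exclusive and exhaustive, so their probabilities sum to one, and each is reported together with the conditional mean of the final investment value.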
The second pillar of the risk-based approach is the degree of risk.
Unlike the first pillar, which looks at two specific points in time, this
synthetic indicator summarises the overall riskiness of the product
from inception until its recommended time horizon. To this end, by
working on the simulated trajectories of the product's value
process used by the first pillar, it is possible to analyse their variability
through a meaningful and straightforward risk metric: the volatility.
The degree of risk is obtained by comparing this risk metric with an
optimal grid of increasing volatility intervals, and this information
is then conveyed to investors by mapping the volatility figure into
an ordered qualitative scale of risk classes endowed with a high sig-
nalling power. Moreover, the consistency with the assumptions used
in the calibration of the grid allows for the proper definition of the
concept of migration between different risk classes, thus ensuring
timely updating of the information on the degree of risk.
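The mapping of realised volatility onto an ordered scale of risk classes can be sketched as follows; the grid bounds and labels here are invented placeholders, not the optimal grid calibrated in the book:

```python
import numpy as np

# Hypothetical seven-class grid of volatility intervals with increasing
# absolute width. The book calibrates an optimal grid; these bounds and
# labels are illustrative placeholders only.
GRID = [(0.00, 0.01), (0.01, 0.02), (0.02, 0.05), (0.05, 0.10),
        (0.10, 0.20), (0.20, 0.40), (0.40, np.inf)]
LABELS = ["very low", "low", "medium-low", "medium",
          "medium-high", "high", "very high"]

def degree_of_risk(daily_returns, trading_days=252):
    """Map the annualised volatility of a return series onto the grid."""
    vol = np.std(daily_returns, ddof=1) * np.sqrt(trading_days)
    for (low, high), label in zip(GRID, LABELS):
        if low <= vol < high:
            return vol, label

# Synthetic daily returns with a true annualised volatility of about 15%.
rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.15 / np.sqrt(252), size=252)
vol, label = degree_of_risk(returns)
print(f"annualised volatility = {vol:.1%} -> risk class '{label}'")
```

A migration occurs when the volatility figure leaves its current interval, which is what triggers an update of the published degree of risk.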
From a quantitative point of view, the calibration of the grid is
therefore the core of the second pillar. Naive solutions are discarded
and the focus is on determining a grid whose intervals have an
increasing absolute width so as to comply with the general prin-
ciple: more risk, more losses. The optimal grid also meets an impor-
tant market feasibility requirement: products committed to a specific
risk budget realise a volatility which, except in cases of sudden and
significant shocks, is in line with the reasonable expectations of the
market about the future volatility. Market feasibility is pursued through
the study of the so-called management failures, which occur when the
volatility realised by a product anchored to a given risk budget falls
outside the volatility prediction intervals representative of the
market's expectations. Such prediction intervals are obtained from the
continuous stochastic process that corresponds to the diffusion limit
of a well-specified discrete generalised autoregressive conditional
heteroscedasticity (Garch) model. The resulting optimal grid parti-
tions the space of possible volatilities into seven intervals exhibit-
ing a homogeneous and not abnormal incidence of management
failures.
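The notion of a management failure can be illustrated with a toy GARCH(1,1) simulation. Note two loud assumptions: the prediction band below is estimated empirically from simulated scenarios rather than from the closed-form diffusion limit derived in the book, and all parameters are hypothetical:

```python
import numpy as np

# Toy GARCH(1,1) with hypothetical parameters (persistence alpha+beta < 1):
# sigma2[t+1] = omega + alpha * ret[t]**2 + beta * sigma2[t].
omega, alpha, beta = 1e-6, 0.08, 0.90
n_scen, n_days = 2_000, 252
rng = np.random.default_rng(4)

sigma2 = np.full(n_scen, omega / (1 - alpha - beta))  # start at stationary variance
rets = np.empty((n_scen, n_days))
for t in range(n_days):
    rets[:, t] = np.sqrt(sigma2) * rng.standard_normal(n_scen)
    sigma2 = omega + alpha * rets[:, t] ** 2 + beta * sigma2

# Empirical 99% prediction band for the volatility realised over one year.
realised = rets.std(axis=1, ddof=1) * np.sqrt(252)
band = np.quantile(realised, [0.005, 0.995])
print(f"prediction band for realised volatility: [{band[0]:.1%}, {band[1]:.1%}]")

def management_failure(realised_vol, band):
    """Flag a realised volatility falling outside the prediction band."""
    return not band[0] <= realised_vol <= band[1]
```

A product anchored to this risk budget whose realised volatility falls outside the band would be counted as a management failure when testing a candidate grid.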
The third pillar is the recommended investment time horizon. This
indicator expresses a recommendation on the holding period of the
non-equity product, formulated in relation to its specific financial
structure and regime of costs. Some non-equity products feature
financial engineering that, by construction, admits only one possible
implicit time horizon. In these cases the identification of the
recommended time horizon is clearly immediate.
For all non-equity products that do not fit this frame, the recom-
mended investment time horizon is determined according to the
exogenous criterion of costs recovery given the riskiness of the product. This
criterion is somewhat similar to assuming a target return of zero for
the investor and, from this perspective, the recommended invest-
ment time horizon indicates the minimum period within which the
costs incurred may be reasonably amortised, taking into account
the risks of the product. By applying the theory of the first-passage
times of a stochastic process for a given barrier and using the same
trajectories of the products value process underlying the first and
second pillars, it is possible to determine the cumulative probability
function of the first times when the value of the product hits a bar-
rier corresponding to the event of costs recovery. This methodology
returns a robust indicator, which gives longer horizons for products
with increasing risks.
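A minimal Monte Carlo sketch of this first-passage methodology follows. It rests on simplifying assumptions that are not the book's prescription: the product's value follows a plain geometric Brownian motion starting at its fair value, the cost-recovery barrier is the issue price, and the drift, volatility, cost load and 85% confidence level are arbitrary choices for the illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_days = 4_000, 252 * 5
dt = 1.0 / 252
r, sigma = 0.03, 0.10                   # hypothetical risk-free rate and volatility
issue_price, fair_value = 100.0, 96.0   # 4% of costs to be recovered

# Simulated trajectories of the product's value process (GBM stand-in).
z = rng.standard_normal((n_paths, n_days))
log_increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
values = fair_value * np.exp(np.cumsum(log_increments, axis=1))

# First-passage time (in days) of each trajectory through the cost-recovery
# barrier; paths that never hit it get a sentinel beyond the last day.
hit = values >= issue_price
first_hit = np.where(hit.any(axis=1), hit.argmax(axis=1), n_days)

# Cumulative probability of cost recovery by each day.
days = np.arange(n_days)
cum_prob = np.searchsorted(np.sort(first_hit), days, side="right") / n_paths

# Recommended horizon: first day at which recovery is sufficiently likely
# (the 85% confidence level is an arbitrary choice for this sketch).
horizon_days = int(days[np.argmax(cum_prob >= 0.85)])
print(f"recommended horizon ~ {horizon_days / 252:.1f} years")
```

Raising the volatility shifts the whole cumulative curve down, so the same confidence level is reached later, which is exactly the "longer horizons for riskier products" behaviour described above.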
In the presence of liquidity or liquidability features, a similar
methodology can be applied to identify a minimum time horizon
to be provided as additional information for products having an
implicit time horizon. Liquidity represents the actual possibility of
exit from the investment at any time (even before maturity) and is
therefore linked to the existence of an efficient market where the
product is traded at a fair price that reflects all the information
that is publicly available. Liquidability (which may exist even in
the absence of a secondary market, for example, when it is replaced
by alternative trading venues specifically arranged by the issuer)
depends instead on the existence of appropriate provisions aimed
at mitigating the possibility of negative fluctuations in the price of
the product and, therefore, the risk of closing the investment before
maturity at a loss.
From the brief description above, it is clear that the quantitative
methodology developed to determine the three pillars is based on
the careful analysis of the features (also including liquidity and liq-
uidability) that characterise the financial structure of any non-equity
product. For this reason, preliminary to the quantitative determina-
tion of these indicators, the risk-based approach requires the classi-
fication of the product on the basis of the following taxonomy: risk-
target products, benchmark products and return-target products.
Risk-target products invest in any market and any financial instru-
ment in order to obtain the maximum return under a specified target
in terms of risk exposure. Hence, these products pursue specific tar-
get returns only as a secondary goal. In other words, within the
traditional risk–return approach of Markowitz (1959), where the choice
is between maximising the return for a given level of risk or min-
imising the risk given a specific target return, in these products the
asset allocation favours the first objective. To this end, ex ante mini-
mum and maximum thresholds are typically defined for the values
of a certain risk metric and such thresholds are the reference point
for the risk-taking decisions.
Benchmark products are anchored to a benchmark representative
of a specific market segment, and in relation to this benchmark the
asset-management style may be either passive or active. In the first
case, the product substantially replicates the benchmark, while in
the second case, the portfolio of assets composing the product differs
from that of the benchmark to a greater or lesser extent depending
on the objectives that the asset manager intends to achieve.
Return-target products feature financial engineering aimed at
pursuing a specific return at the end of the recommended investment
time horizon. This type of financial structure includes all products
obtained as a static or dynamic combination of risk-free (or low-risk)
assets and risky assets. It is worth noting that, among non-equity
products, return-target structures have the widest variety in terms
of possible solutions that financial engineering can offer. The rea-
son is that the combination of the risky and risk-free components is
not necessarily explicit in these structures, as it can arise from the
synthetic packaging of different risk sources or from the working
of quantitative algorithms aimed at protecting a percentage of the
value of the financial investment that can be determined at incep-
tion or subject to a revision over time depending on the evolution
of some triggering events. In addition, this family of products is
somewhat indirectly extended in the presence of financial guaran-
tees, which can be added to any type of financial structure (hence
not only return-target) and deeply modify its risk–return profile by
making it substantially similar to that of a return-target product.

The above structures cut across the apparent differences arising
from purely nominalistic approaches based on the use of formal
labels (eg, category of the issuer, distribution channel, name of the
product) that have generally guided the regulation of transparency
worldwide.
Moreover, the relationships of interdependence and reciprocal
consistency existing between the three pillars clearly emerge only
when bearing in mind the above-described taxonomy of non-equity
products; in fact, it is precisely because it disregards any formal
label that this taxonomy is the key to obtaining from these pillars
an overall representation of the riskiness, the expensiveness and
the prospective performances of any non-equity product. For risk-
target and benchmark structures not assisted by financial guaran-
tees, knowledge of the degree of risk, together with the costs applied,
allows the identification, according to the break-even criterion of
these costs, of the time horizon to be recommended to investors. In
the case of return-target or guaranteed products, the recommended
time horizon that is implicit in their financial engineering repre-
sents the reference maturity to be used for all the quantitative cal-
culations underlying both the first and the second pillars. The same
horizon also represents the bridge between the assessments of the
performance risk of the product made, respectively, at the issue date,
through the financial investment table, and at maturity, through the
table of probabilistic scenarios. Not surprisingly, if we take the final
values of the investment corresponding to each of the four scenarios
and discount them back for a period equal to the recommended
investment time horizon, the average of these discounted values is
a good proxy for the fair value of the product.
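This consistency check can be verified numerically on a toy product; the lognormal dynamics, the thresholds and all figures below are hypothetical:

```python
import numpy as np

# Under the risk-neutral measure, the probability-weighted average of the
# scenarios' conditional final values, discounted over the recommended
# horizon, recovers the fair value (up to Monte Carlo error).
rng = np.random.default_rng(3)
n, T, r, sigma = 200_000, 5.0, 0.03, 0.20
issue_price, fair_value = 100.0, 96.0

final = fair_value * np.exp((r - 0.5 * sigma**2) * T
                            + sigma * np.sqrt(T) * rng.standard_normal(n))

# Four scenarios: negative, positive but below, in line with, above risk-free.
risk_free_final = issue_price * np.exp(r * T)
edges = [0.0, issue_price, 0.95 * risk_free_final, 1.05 * risk_free_final, np.inf]

proxy = 0.0
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (final >= lo) & (final < hi)
    if mask.any():
        proxy += mask.mean() * final[mask].mean()   # p(scenario) x cond. mean
proxy *= np.exp(-r * T)   # discount back over the recommended horizon

print(f"fair value = {fair_value:.2f}, scenario-based proxy = {proxy:.2f}")
```

The identity holds because the probability-weighted conditional means reassemble the unconditional risk-neutral expectation, whose discounted value is the fair value by construction.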
The three synthetic indicators presented in this book provide
investors with information that supports more enlightened
investment choices by assisting each of the
three phases of the sequential filtering process. The recommended
investment time horizon indicates to the investor the earliest date
on which it is possible to exit from the product with a reasonable
expectation of no loss, having hence recovered all the costs. Once
the investor knows the products whose recommended time horizon
matches their liquidity attitude, by looking at their degree of risk
the investor can identify those products whose riskiness is consis-
tent with their risk appetite. Finally, in the last step of this selection
process, the investor makes their choice among the products admis-
sible in terms of the first two indicators by reading the tables report-
ing the costs, the fair value and the probabilistic scenarios of each
investment alternative.
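The three-phase selection described above can be sketched as a simple filter; the product list, its attributes and the final cost-based choice are invented for illustration:

```python
# Sequential filtering sketch: phase 1 screens on the recommended horizon,
# phase 2 on the degree of risk, phase 3 on costs. All data are hypothetical.
products = [
    {"name": "bond A", "horizon_years": 5, "risk_class": "low", "total_costs": 2.0},
    {"name": "certificate B", "horizon_years": 5, "risk_class": "high", "total_costs": 4.5},
    {"name": "fund C", "horizon_years": 10, "risk_class": "low", "total_costs": 1.5},
]

def sequential_filter(products, max_horizon, acceptable_risk):
    # Phase 1: liquidity attitude -> horizon within the intended holding period.
    step1 = [p for p in products if p["horizon_years"] <= max_horizon]
    # Phase 2: risk appetite -> degree of risk among the tolerated classes.
    step2 = [p for p in step1 if p["risk_class"] in acceptable_risk]
    # Phase 3: costs and target performance -> here, simply the cheapest one.
    return min(step2, key=lambda p: p["total_costs"]) if step2 else None

choice = sequential_filter(products, max_horizon=6, acceptable_risk={"low", "medium"})
print(choice["name"])  # bond A
```

In practice the third phase would compare the full financial investment tables and probabilistic scenarios rather than a single cost figure.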
In order to reach retail investors, these fundamental indicators
could be included in the prospectus by providing a brief prod-
uct information sheet subject to compulsory delivery. After a quick
overview of the product, this brief document (of two pages,
maximum) would highlight the material risks of the non-equity
investment by displaying the three pillars determined by the issuer.
As the party that built that particular financial structure, the issuer
has the best knowledge of it.
In this way, issuers would produce the synthetic indicators needed
to comply with regulatory provisions by using the same internal
models developed for their pricing and risk management activities;
in fact, the risk-based approach defines only a few basic
methodological requirements (already used in the practice of financial markets)
without imposing any particular model to be adopted.
Such a framework would reconcile two major goals:

1. to eliminate the gap that traditionally exists between the mea-
surements made by the financial engineer who designed the
product and those that are available to the investors from the
prospectuses;

2. to allow interaction with the issuers on the themes of an effec-
tive risk representation (and not just on the formal aspects of
the prospectus) and with the distributors on the themes of an
improved product mapping.

With regard to the first goal, the requirement of a short product
information sheet featuring standard contents and formats would
focus the attention of the readers on the essential information con-
cerning each non-equity product, so that the investor would be con-
cretely assisted in their comparison of different investment possibil-
ities and, therefore, in pursuing the convergence of their investment
objectives and the characteristics of the product bought. In addition,
because of its brevity and the mandatory delivery, the product infor-
mation sheet (as opposed to long narrative prospectuses) arises as
a practical alternative to the advertising material. Bearing in mind
the goal of always making the three pillars available to any possible
stakeholder, the properties of conciseness and immediate compre-
hensibility make them the best candidates for becoming the stan-
dard information set that identifies any product from the main data
providers.
With regard to the second goal, the filing of a product information
sheet, containing the three pillars, to the regulator would support the
efficacy of the initiatives undertaken by supervisors in order to guar-
antee the consistency and comprehensibility of the information con-
cerning the material risks of the product and also in order to verify
whether, at the points of sale, this information is taken into account
by distributors that have to assess the adherence of an investment
solution to the needs of their clients. In fact, each product would
be qualified by a set of quantitative metrics that react to changing
market conditions and move consistently in relation to each other
according to a precise scheme arising from the peculiar structure
of the product itself. It follows that, with only the knowledge of
the product's structure and of the market data publicly available at
the issue date, regulators could objectively verify the internal coher-
ence of the information produced by the issuer through the three
pillars, ultimately leading to the early detection of incorrect or mis-
leading representations, hence giving an important preventive role
to transparency supervision.
The risk-based approach can also be used for an effective assess-
ment of the risks and costs associated with financial liabilities, whose
structures are, in fact, completely uniform and symmetrical to those
of investment products, and, like them, typically carry a specific risk
exposure or lead to revision of a pre-existing exposure, through the
additional inclusion of derivative components.
The book consists of six chapters. Chapters 2–4 are devoted
to a rigorous theoretical derivation of the risk-based quantita-
tive approach, while Chapter 5 illustrates its application to some
non-equity products and Chapter 6 presents conclusions.
Chapter 2 describes the first pillar. The focus is on the value em-
bedded in the product: at the issue date, through the unbundling of
the issue price, and, at maturity, through an innovative and robust
methodology to build probabilistic scenarios that summarise the
salient features of the variability characterising the potential per-
formance of the investment. Different techniques are presented


A QUANTITATIVE FRAMEWORK TO ASSESS THE RISK–REWARD PROFILE OF NON-EQUITY PRODUCTS

depending on whether the non-equity product represents an autono-
mous investment solution or is involved in an exchange with
another product.
Chapter 3 addresses the second pillar. The degree of risk is
assigned to the product by comparing the annualised volatility of its
daily returns against an optimal grid of increasing volatility inter-
vals and, then, by mapping the outcome of this comparison into a
qualitative risk class identified by a clear adjective. The methodol-
ogy developed to determine the optimal grid is fully explained both
in its intuitions and in its technical aspects, which include some
fundamental results of the stochastic limit theory.
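The mapping just described can be previewed in a few lines of code; note that the grid boundaries and the risk-class labels below are hypothetical placeholders, since the optimal grid itself is derived only in Chapter 3.

```python
import numpy as np

# Illustrative sketch: map the annualised volatility of daily returns into a
# qualitative risk class. The grid below is a hypothetical placeholder, not
# the optimal grid of volatility intervals derived in Chapter 3.
rng = np.random.default_rng(0)

# simulated daily price history with a true annualised volatility of 15%
prices = 100.0 * np.exp(np.cumsum(rng.normal(0.0, 0.15 / np.sqrt(252), 252)))
daily_returns = np.diff(np.log(prices))
annualised_vol = daily_returns.std(ddof=1) * np.sqrt(252)

grid = [(0.00, 0.01, "very low"), (0.01, 0.05, "low"), (0.05, 0.15, "medium"),
        (0.15, 0.30, "high"), (0.30, np.inf, "very high")]
risk_class = next(label for lo, hi, label in grid if lo <= annualised_vol < hi)
print(f"annualised volatility = {annualised_vol:.1%} -> {risk_class}")
```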
Chapter 4 is devoted to the third pillar. It illustrates how to deter-
mine the recommended investment time horizon. Throughout this
technical derivation, the probabilistic tool of the first-passage times
of the stochastic process of the product's value through a constant
barrier is introduced. Different theorems of general validity are proved.
Chapter 5 offers the explicit calculation of the three pillars for
some non-equity structures, giving practical insights on the imple-
mentation issues of the risk-based approach. In particular, five dif-
ferent products are considered: a risk-target product, a benchmark
product and three return-target products. For each of them, the cor-
responding product information sheet is also presented, showing
the feasibility and the effectiveness of reporting all the information
essential to understand the product and to properly feed the decision
process of any retail investor in a few pages. In order to complete the
illustration of the possible applications of the risk-based approach,
the last section of this chapter gives an example of a non-equity
structure that intervenes to modify the cashflows and the risk pro-
file of an existing liability. The probabilistic comparison of the final
values of the liability before and after the insertion of the non-equity
structure is performed according to the methodology discussed in
Chapter 2 and it allows an assessment of whether or not the switch
to the new structured liability is suitable.
Chapter 6 presents a brief summary of the whole risk-based
approach and shows its validity as a framework for the quantita-
tive assessment of the risks of non-equity products and for their
representation to retail investors who, through the reading of the
three pillars, would have the information needed to understand the
products and, hence, to take more enlightened investment decisions.


1 If the product to be analysed is intended to replace a pre-existing investment (as in the case
of exchange public offerings or when an investor is willing to substitute a specific non-equity
product that they are holding), the table of probabilistic scenarios is determined according to
a different methodology based on the pointwise comparison (so-called trajectory by trajec-
tory) among the final values of the two products involved in a possible exchange. A similar
methodology can easily be used to assess the opportunity to enter into a structured financial
liability or to modify the cashflows of an existing liability by combining it with a non-equity
solution like a derivative contract.


The First Pillar: Price Unbundling and Probabilistic Scenarios

The first pillar of the risk-based approach allows a suitable under-
standing of the level of expensiveness and of the risk–return profile
of any non-equity product by disclosing some items of information
embedded in its price.
The price is the first variable which attracts investors' attention
and it plays a key role in their selection process between different
products. However, knowledge of this variable alone can be
misleading for investors' assessments, especially with regard to the
effective level of costs and to the relationship existing between risks
and potential investment performances. For a given maturity, prod-
ucts which have the same price tend to be perceived as
equivalent, while in a set of products with different prices there is a
natural tendency for the average investor to believe that the cheapest
is the one with the lowest price since they are not used to considering
the financial engineering of the products.
In order to ensure an objective and meaningful comparison across
products in terms of costs, risks and profitability, price must be split
according to criteria aimed at highlighting the relative weight of
its two main components: the fair value and the costs, namely the
same quantities taken as reference points by institutional operators
in deciding whether to take or unwind a position in a given financial
product and in what measure.
The fair value is the expected value, under the risk-neutral mea-
sure, of the product's future cashflows, discounted at the risk-free
rate. Hence, this component gives a first indication of the product's
ability to create value for the buyer, while the difference between
price and fair value identifies the total costs they pay.
In the first pillar, the unbundling of the price is reported inside
the so-called financial investment table. As explained in Section 2.2,
if the price paid by the investor is set at a value of 100, this table
illustrates the relative contributions of the fair value and of all costs
applied.
Should the price and the fair value coincide, then the reward given
to the investor to compensate for the product risks is in line with
that expressed (even in an implicit way) by the market. Otherwise, a
price higher than the fair value would signal that a part of the reward
that the market expects in order to take those risks is withheld by
the issuer as compensation for their expenses or, more generally, as
their profit.
The unbundling of the price into the two above-mentioned com-
ponents therefore offers important information on the costs of the
product or, equivalently, on the fairness of its risk–return profile.
However, it does not allow for a full understanding of the important
elements of the financial structure of many products.
Unlike shares, many non-equity products (especially those pursu-
ing a target return) exhibit non-elementary financial structures, typ-
ically due to the presence of one or more derivative components that
trigger significant exposures to market and credit risks. Moreover,
the specific way in which these components are combined to obtain a
unique product is often counter-intuitive, if not completely hidden,
so that it becomes quite difficult for retail investors to understand
how their risks can concretely affect the payoff of the investment.
These aspects suggest supplementing the financial investment
table with a richer informative set. This can be achieved both by
increasing the level of detail of this table and by completing it
with suitable indicators of the degree of uncertainty that affects the
potential performances of the product.
With regard to the first task, attention must be paid to identify-
ing information which could really meet investors' needs. From this
perspective it is evident that it would be unnecessary (or even con-
fusing) for the average investor to receive a technical representation
of all of the different components which combine to determine the
fair value and the possible payoffs of a given product. Instead, as
explained in more detail in Section 2.3.1, a more reasonable alter-
native is to provide a breakdown of the fair value into its risk-free
component and its risky component. Such a representation can be
easily obtained from an application of the portfolio replication prin-
ciple, and it has the advantage of indicating to the investor how
much of the product is similar to a risk-free security and how much
is the value of the bet inherent in the risks of the investment.
With regard to the second task, the justification for this additional
information lies in the intrinsic limits of the figures reported in the
financial investment table. In fact, the fair value is by
definition a synthetic value of the risk-neutral density of the possi-
ble final payoffs of a product. Consequently, it loses many key ele-
ments regarding the riskiness and the profitability of the investment
which are contained in the above-stated probability density func-
tion. There exists an infinity of risk-neutral densities which share
the same (discounted) expected value but which can be substantially
different, reflecting risk–return profiles that are quite heterogeneous
and, hence, not fungible for the needs of any single investor.
As a consequence, for return-target products whose financial
structures are often non-linear with respect to the underlying assets,
the financial investment table should be supplemented by synthetic
indicators of the potential returns and of their variability. Without
overburdening investors with a picture of the full risk-neutral den-
sity, such further indicators should be defined in order to represent in
an objective and clear way the key features characterising the shape
of this density which are of concrete interest for investors.
As explained in Sections 2.3.2–2.3.4, the solution adopted by the
first pillar is the table of probabilistic scenarios: the risk-neutral
density of the product's final payoffs is partitioned into a few mutu-
ally exclusive macro-events which stress the likelihood of achiev-
ing levels of returns that are particularly meaningful from the
investor's perspective, since they invoke concepts also quite familiar
to non-experts, such as experiencing a loss (ie, a negative return) or
performing either like a risk-free asset or better or worse.
For each single macro-event, the table of probabilistic scenarios
conveys the corresponding probability as deriving from the risk-
neutral density, so that investors can easily distinguish the riskiness
and the potential performances of the different products. It is worth
noting that, compared with deterministic performance indicators
(like payoff diagrams, or the so-called what-if approach or the clas-
sical internal rate of return), the probabilistic solution adopted by
the first pillar is able to provide investors with synthetic informa-
tion about the level of dispersion, and, hence, of riskiness, that is
associated with the possible outcomes of the financial investment.
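The macro-event partition described above can be sketched as follows; the simulated payoff distribution, the tolerance band around the risk-free benchmark and the event labels are hypothetical illustrations, not the book's official thresholds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: simulated risk-neutral final values of the product
# (S_T) and of the risk-free asset (B_T), on an initial outlay of 100.
n_paths = 100_000
s_t = 100.0 * np.exp(rng.normal(0.03, 0.20, n_paths))  # illustrative payoffs
b_t = 100.0 * np.exp(0.03)                             # deterministic rate here

# Mutually exclusive macro-events, in the spirit of the first pillar:
# loss / below risk-free / in line with risk-free / above risk-free.
tol = 0.5  # hypothetical tolerance band around the risk-free benchmark
events = {
    "negative performance":                        s_t < 100.0,
    "positive, lower than the risk-free asset":    (s_t >= 100.0) & (s_t < b_t - tol),
    "positive, in line with the risk-free asset":  (s_t >= b_t - tol) & (s_t <= b_t + tol),
    "positive, higher than the risk-free asset":   s_t > b_t + tol,
}
for label, mask in events.items():
    print(f"{label}: {mask.mean():.1%}")
```

Because the events partition the whole density, the four probabilities sum to one by construction.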


In fact, payoff diagrams offer a static picture of what could happen at
the end of the time horizon as a deterministic function of the possible
underlying values; what-if solutions use arbitrarily chosen point esti-
mates of a limited number of events without regard to the shape
of the risk-neutral density function, while the internal rate of return
is substantially an average annualised return determined by consid-
ering only a portion of the density and excluding the likelihood and
the severity of extreme events located in its tails.
However, while payoff diagrams and the what-if approach are incom-
plete, the internal rate of return can become a valid performance
indicator when the risk-neutral density of a non-equity product
is regular enough. Indeed, in these cases this average indicator is
an acceptable proxy of the behaviour of the entire density, so that it
can replace the probabilistic scenarios without any loss of significant
information for investors.

2.1 THE RISK-NEUTRAL DENSITY OF A NON-EQUITY PRODUCT
The value of any non-equity product can be represented, over the
period comprised by its time horizon, by a specific stochastic process
denoted by {S_t}_{t∈[0,T]}. For t = T, the final value of the product, ie,
S_T, is a random variable whose risk-neutral density is the raw data
to be analysed in order to obtain both the financial investment table
and the table of probabilistic scenarios.
The adoption of the risk-neutral measure Q represents the basic
methodological requirement in order to ensure that information con-
veyed by the first pillar of the risk-based approach is objective,
meaningful and also consistent both intrinsically (ie, across the var-
ious indicators it encloses) and with respect to the message pro-
vided by the second and the third pillars of the approach, which
are also developed under the same measure. It is only under this
measure that any arbitrary assumption on the future evolution of
the market variables is discarded, allowing an effective compara-
bility across the fair prices of different products and across their
potential performances and the associated variability. This comes
directly from the fact that the risk-neutral measure is the only one
consistent with the no-arbitrage principle which, in fact, provides
the connection between the fair value of any given contingent claim
with a time horizon T and the risk-neutral probability density func-
tion of the possible final values of the contingent claim at time T.
This is also the reason why market practitioners make use, in their
business, of pricing and hedging models defined under the stated
measure.
Sometimes, especially for elementary and short-term products,
the risk-neutral density of ST has a closed form, but in general
terms it can always be determined through Monte Carlo simulation
techniques (Glasserman 2004).
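A minimal sketch of such a Monte Carlo simulation, assuming for illustration a single underlying that follows geometric Brownian motion under Q; all parameter values (r, sigma, T) are hypothetical.

```python
import numpy as np

# Hedged sketch: Monte Carlo approximation of the risk-neutral density of
# S_T for a product tracking one lognormal underlying, with a daily time
# step as suggested in the text. Parameters are illustrative assumptions.
rng = np.random.default_rng(42)
s0, r, sigma, T = 100.0, 0.03, 0.25, 2.0
n_paths, n_steps = 20_000, 2 * 252
dt = T / n_steps

# daily log-returns under Q: drift r - sigma^2 / 2, diffusion sigma
z = rng.standard_normal((n_paths, n_steps))
log_ret = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
s_T = s0 * np.exp(log_ret.sum(axis=1))   # simulated final values of the product

# The empirical distribution of s_T approximates the risk-neutral density
# (eg, via a histogram); its mean is close to s0 * exp(r * T), as implied
# by the martingale property of the discounted price process.
print(s_T.mean())
```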
The choice of the modelling assumptions to be adopted in order
to properly simulate the stochastic processes describing the time
evolution of the financial variables affecting the value of a given
non-equity product has to be driven by the financial engineering
characterising the product itself.
Most of the stochastic models used to describe the above processes
are defined in continuous time and then suitably discretised to per-
form the simulations. The preference for continuous-time models
stems from their greater flexibility (also in computational terms),
since, also in the case of quite complex products whose payoffs
depend on specific quantitative algorithms and are exposed to a mul-
tiplicity of risk factors, they allow a description of the dynamics of
the variables of interest and the ways in which they affect the value of
the product over time. With regard to the time step of the simulation,
it should be reasonably short and close to the common continuous-
time modelling assumptions. Weekly or daily discretisation schemes
are fine; the latter also have the benefit of preserving consistency
with the framework of Chapter 3 when calibrating the optimal grid
of volatility intervals required to identify the degree of risk of the
investment and to detect migration phenomena for this indicator.
From a technical point of view, the adoption of the risk-neutral
probability measure is obtained by properly inserting the simulated
trajectories of the short risk-free rate into the dynamics of the pro-
cess {S_t}_{t∈[0,T]}, in either a direct or indirect way, depending on the
characteristics of the product. For instance, where the value of the
product depends, among other things, on the behaviour of a share
or an equity index, the trajectories of these underlying assets must
be built, as is well known, by inserting into their drift component,
at any step of the simulation, the value of the short risk-free rate
obtained corresponding to the same time step.
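The direct insertion just described can be sketched as follows, assuming for illustration a Vasicek short rate, a lognormal equity underlying and independent drivers; all parameter values are hypothetical.

```python
import numpy as np

# Hedged sketch: at every simulation step the short rate simulated for that
# step is inserted into the drift of the equity underlying, so that both
# processes evolve under the risk-neutral measure.
rng = np.random.default_rng(7)
n_paths, n_steps, T = 20_000, 252, 1.0
dt = T / n_steps

kappa, theta, sigma_r, r0 = 0.5, 0.03, 0.01, 0.02   # illustrative Vasicek rate
sigma_s, s0 = 0.20, 100.0                            # illustrative equity

r = np.full(n_paths, r0)
s = np.full(n_paths, s0)
integral_r = np.zeros(n_paths)                       # accumulates the integral of r for B_T

for _ in range(n_steps):
    z_r = rng.standard_normal(n_paths)
    z_s = rng.standard_normal(n_paths)               # independent drivers, for simplicity
    integral_r += r * dt
    # the current short rate enters the equity drift directly
    s *= np.exp((r - 0.5 * sigma_s**2) * dt + sigma_s * np.sqrt(dt) * z_s)
    r += kappa * (theta - r) * dt + sigma_r * np.sqrt(dt) * z_r

b_T = np.exp(integral_r)                             # risk-neutral numeraire B_T
print((s / b_T).mean())                              # martingale check: close to s0
```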


Risk-target products and benchmark products are typically easy
to model, as they usually exhibit a direct dependence on the portfolio
of underlying assets; they can be accurately represented by means of
common stochastic differential equations, such as geometric Brown-
ian motion (Minenna 2006), possibly slightly revised to reflect fea-
tures connected to the management style adopted or (if necessary)
to the stochastic term structure of the benchmarks volatility.
Return-target products, on the other hand, are in all respects
contingent claims, that is, products whose payoff structures work
over a specific time horizon and are linked (often in a non-linear
way) to underlying assets or reference values, according to spe-
cific formulas and subject to the fulfilment of precise conditions.
This implies that stochastic models used to describe the possi-
ble patterns of these products over time must carefully consider
all relevant risk factors and the particular way in which, depend-
ing on financial engineering choices, these factors can affect the
future cashflows of the investment until the expiry of its time hori-
zon. Parameters and variables associated with different risk factors
have to be properly calibrated through estimates based on current
market data and by taking care of their consistency with the fea-
tures of any single product and with the reality of the reference
markets.
Clearly, since most non-equity products have a time horizon
longer than one year, variables like interest rates, credit spreads,
volatilities and correlations cannot be assumed to be constant; mod-
els used to perform simulations must therefore include a suitable
set of stochastic differential equations in order to cope with this ele-
ment of complexity. The same requirements are used in the mod-
els developed by market practitioners to obtain the most accurate
assessments of the value of any product which they want to sell or
to include in their proprietary portfolios.
Risk-neutral simulations must also consider the size and the time
schedule of periodic or one-off amounts paid out to the investor
or invested in other financial assets during the implicit time hori-
zon of the product; and similarly for costs (both one-off and run-
ning) incurred over this horizon. Indeed, apart from explicit up-front
charges, whose amount at the subscription date is a known constant
to be immediately subtracted from the price, the discovery process
of the fair value requires proper estimation of the negative impact of
any cost item (whose amount is often random) applied during the
life of the product.
Simulations must also suitably deal with products including path-
dependent features which can trigger an early redemption (like
callable or puttable bonds, American or Bermudan options (Hull
2009)), the coupon size or existence, or the switch to another pay-
off structure (eg, flipping the coupons from fixed to floating or vice
versa).
In addition, when the product combines, even in a synthetic way,
two or more components, it may be necessary to first determine
separately the trajectories of the different components and then to
put them together by paying attention to the sign and the intensity
of their correlations and to the manner by which they conform to
the payoff structure of the investment.
A very simple example of products resulting from the (synthetic)
packaging of different components is given by plain-vanilla bonds exposed
to the credit risk of a given reference entity. Indeed, as explained
in more detail in Section 2.3.1, they can be replicated by combin-
ing a risk-free bond with a short position in a credit derivative on
the same reference entity, usually coinciding with the issuer of the
risky bond.
The risk-neutral density of the final value S_T of a defaultable bond
can be obtained from a Monte Carlo simulation of the interest rate
term structure and of the random variable representing the default
time of the issuer. The models underlying the simulation of the inter-
est rates and the credit event dynamics need to be properly calibrated
in order to reflect the market conditions at the time of issue.
In the simple case of defaultable plain-vanilla bonds, the density
function of S_T exhibits two modes: the first represents the trajectories
in which the default events occur and, therefore, this mode falls in the
region corresponding to negative returns and is close to the recovery
value; the second mode represents the trajectories not affected by a
credit event, and consequently it lies in the area of positive returns
with a placement that depends on the specific coupon structure of the
bond and, hence, on the level of the spread, if any, over the risk-free
return paid to investors to compensate the credit risk exposure.
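A stylised version of this simulation is sketched below; the hazard rate, recovery value and coupon are hypothetical figures, and coupons paid before default are ignored for simplicity.

```python
import numpy as np

# Hedged sketch of the defaultable plain-vanilla bond described above: a
# flat hazard rate drives the default time; defaulted paths end near the
# recovery value, producing the bimodal density of S_T. All figures are
# hypothetical, and coupons paid before default are ignored for simplicity.
rng = np.random.default_rng(1)
n_paths = 100_000
T, r, hazard, recovery = 5.0, 0.03, 0.02, 40.0
coupon = 5.0                                    # annual coupon per 100 of notional

default_time = rng.exponential(1.0 / hazard, n_paths)
defaulted = default_time <= T

s_T = np.empty(n_paths)
# surviving paths: redemption plus coupons capitalised at the risk-free rate
survival_value = 100.0 + coupon * sum(np.exp(r * (T - t)) for t in range(1, int(T) + 1))
s_T[~defaulted] = survival_value
# defaulted paths: recovery value capitalised from the (random) default time
s_T[defaulted] = recovery * np.exp(r * (T - default_time[defaulted]))

print(f"P(default)  = {defaulted.mean():.1%}")      # first mode, negative returns
print(f"P(survival) = {(~defaulted).mean():.1%}")   # second mode, positive returns
```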
The procedure described above for defaultable bonds can easily
be extended to simulate the dynamics of whatever non-equity prod-
uct exposed to the credit risk of one or more reference entities (eg,
issuers of plain-vanilla or structured bonds or index-linked financial-
insurance policies, guarantors of CPPI or OBPI funds1 or unit-linked
financial-insurance policies, etc), and it allows the integration of the
credit risk inside the representation of all other material risks of the
investment. More specifically, the possibility of experiencing losses
due to the occurrence of credit events is one of the variables which
can entail a multimodality in the risk-neutral density function of S_T.
Like credit risk, exposures to other risk sources also affect the final
payoffs of a non-equity product. It follows that the resulting risk-
neutral density curves can take a wide variety of shapes, depending
on how the different risks are embedded in the financial structure of
the product.
When this density is regular enough (as usually happens for
elementary bonds with marginal credit risk exposures) the finan-
cial investment table, supplemented by simple deterministic perfor-
mance indicators, provides investors with sufficient information to
understand how expensive the product is and how its risks can con-
cretely affect the returns achievable at the end of the recommended
time horizon.
By contrast, in the case of irregular densities, the financial invest-
ment table still maintains an informative meaning, but the nature
of the figures in this table (which are expected discounted values) leads
to an excessive loss of information on the performance risk, ie, the
ability of the product to create more or less likely added value for
the investor. At the same time, deterministic performance indica-
tors (if they exist) are not adequate to counter this information
loss, signalling that for these products the informative content of
the risk-neutral density needs to be synthesised through suitable
probabilistic indicators.

2.2 PRICE UNBUNDLING VIA THE FINANCIAL INVESTMENT TABLE
Any non-equity product with a time horizon T is a contingent claim
whose payoff at T depends on the value of one or more underlying
assets according to a specific formula. Hence, the fair value of non-
equity products must be determined by applying the contingent
claim evaluation theory under the risk-neutral measure.
This theory says that, assuming market completeness, there
always exists some self-financing trading strategy whose value
at time T is equal to that of a given contingent claim. Such a
trading strategy identifies the so-called replicating portfolio for the
contingent claim.
By the further assumption of the absence of any arbitrage opportunity,
which excludes no-cost profits (so-called free lunches) through the
sale of a contingent claim and the simultaneous purchase of its repli-
cating portfolio or vice versa, it also follows that at any time t < T
the values of the contingent claim and of its replicating portfolio
coincide.
Under the above assumptions, the Second Fundamental Theo-
rem of Asset Pricing (Harrison and Pliska 1981) ensures that there
exists a unique measure, Q, called the risk-neutral measure, under
which any discounted price process is a martingale (Minenna 2006);
in particular, with respect to the time horizon T of the product, the
following equality holds

    S_0 = E_Q[ S_T / B_T ]    (2.1)

where:

  • S_0 is the fair value of the contingent claim;

  • S_T is the random variable corresponding to the stochastic
    process of the value of the contingent claim {S_t}_{t∈[0,T]};

  • B_T is the random variable corresponding to the stochastic
    process of the risk-free asset {B_t}_{t∈[0,T]} (also called the cash-
    account process or risk-neutral numéraire) considered at time
    t = T and defined by the following function of the stochastic
    process2 of the short rate {r_t}_{t≥0}

        B_T := exp( ∫_0^T r_s ds )    (2.2)

As observed in the previous sections, the risk-neutral measure
ensures the objectivity and, hence, the meaningfulness and the com-
parability of information; indeed, Equation 2.1 clearly proves that
the fair value of any non-equity product is the discounted expected
value of its future cashflows until maturity, where this expectation
is taken under the risk-neutral measure, that is, without making any
assumption on investors preferences.
Equation 2.1 also confirms that the two ingredients for obtaining
S_0 are the risk-neutral density of the product's final values (ie, S_T)
and that of the final values of an investment in the risk-free asset
(ie, B_T).
The first density can be obtained through Monte Carlo simulations
of the key variables of the non-equity product, after having suitably
modelled them. The risk-neutral density of B_T is determined in a
similar way, provided that the parameters and the variables shared
by the two simulations are assigned the same values.
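Read numerically, Equation 2.1 is simply a Monte Carlo average of discounted payoffs. The sketch below prices a hypothetical capital-protected note under a constant short rate (so that B_T collapses to a scalar); the product and all parameters are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of Equation 2.1: the fair value S_0 is the Monte Carlo
# average of the simulated final payoffs S_T discounted by B_T.
rng = np.random.default_rng(3)
n_paths, T, r, sigma, x0 = 200_000, 1.0, 0.03, 0.20, 100.0

# underlying at T under Q (lognormal; constant short rate for simplicity)
x_T = x0 * np.exp((r - 0.5 * sigma**2) * T
                  + sigma * np.sqrt(T) * rng.standard_normal(n_paths))
b_T = np.exp(r * T)                  # scalar numeraire under a constant rate

# hypothetical return-target payoff: capital protection of 100 plus 50%
# participation in the underlying's positive performance
s_T = 100.0 + 0.5 * np.maximum(x_T - x0, 0.0)

s0 = (s_T / b_T).mean()              # Equation 2.1
print(f"fair value S_0 = {s0:.2f}")
```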
Once the fair value is known, the unbundling of the full initial
price of the non-equity product is straightforward. In particular, the
difference between price (set equal to 100) and fair value (expressed
in percentage terms) represents the whole percentage costs of the
investment. Apart from explicit up-front charges (expressed in per-
centage terms), this quantity is the discounted expected value, under
the risk-neutral measure, of the total costs that will be borne by
investors over the time horizon of the product.
This representation includes all costs regardless of the time and
the way (either explicit or implicit) in which they are charged.
Their aggregation in a unique item of the financial investment table
enhances the comparability of products by offering a streamlined indi-
cation of how expensive they are if retained until the end of the
recommended time horizon.
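Schematically, the resulting financial investment table can be assembled as follows; all figures are hypothetical.

```python
# Illustrative sketch of the price unbundling described above: with the
# issue price set to 100, total costs are the gap between price and fair
# value. All figures are hypothetical.
price = 100.0
fair_value = 94.6          # eg, a Monte Carlo estimate via Equation 2.1
explicit_upfront = 2.0     # explicit up-front charges, known at subscription
implicit_costs = price - fair_value - explicit_upfront

print("Financial investment table (per 100 of price)")
print(f"  A. Fair value              {fair_value:6.2f}")
print(f"  B. Explicit up-front costs {explicit_upfront:6.2f}")
print(f"  C. Implicit costs          {implicit_costs:6.2f}")
print(f"  Price (A + B + C)          {fair_value + explicit_upfront + implicit_costs:6.2f}")
```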
It is worth observing that it is only for return-target products that
the fair value is uniquely determined by calculating Equation 2.1 at
a certain time T: their payoffs depend, often non-linearly, on some
underlying assets or reference values (and, hence, on the underly-
ing risks) according to specific algorithms ceasing at T. This means
that at time T only (which, as explained in Chapter 4, identifies the
implicit time horizon of the investment) the definition of the payoff
structure is completed by a proper boundary condition, which gives
the risk-neutral density of S_T a precise shape which is peculiar to
that maturity.
Illiquid return-target products are sometimes assisted by some
services of liquidability enhancement aimed at making the invest-
ment more appealing and allowing earlier redemption under par-
tially secure conditions. These services can be provided either by a
direct intervention in the product's engineering or by introducing
specific micro-structural rules in the trading venue where it is
possible to disinvest.3
A typical liquidability enhancement solution consists in locking
the credit spread used to determine the fair value of a bond on the
secondary venue at a value set at a time close to the issue date and
in taking the commitment to buy back the bond at any early date
decided by the investor. Technically, the value of such a service is the
same as that of a synthetic credit derivative which hedges investors
against deteriorations in the creditworthiness of the issuer that could
occur during the life of the product. This hedge is generally offered
in exchange for lower coupons and it must be properly disclosed
inside the financial investment table.
In this approach, a viable representation is to report in the table
the fair value of the product both net and gross of the service's fair
value, which also has to be made explicit inside the table. The net
fair value represents the expected discounted value of the pure pay-
off structure of the product, while the gross fair value (ie, including
the value of the liquidability enhancement) also depends on the spe-
cific micro-structural conditions of the trading venue available if the
investor would like to exit before maturity. In this way, investors
interested in the possibility of early redemption will consider the
liquidability support as part of the products value, while buy-and-
hold investors will easily interpret the value of this service as a pure
cost item.
For risk-target or benchmark products, the financial investment
table takes a simplified form: it does not include the fair value but
reports only the information about the discounted value, on average,
of the whole costs charged (even indirectly or implicitly) over the
recommended time horizon and expressed in percentage terms on
an initial outlay of 100.
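The cost figure just described can be sketched by Monte Carlo. In the fragment below, the fee level, NAV dynamics, risk-free rate and horizon are all hypothetical illustration parameters, not values prescribed by the framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: 2% yearly management fee on the NAV, 3-year
# recommended horizon, flat 3% risk-free rate, initial outlay of 100.
n_paths, years, fee, r, outlay = 50_000, 3, 0.02, 0.03, 100.0

# Simulated gross NAV paths (lognormal yearly growth, illustrative only)
growth = rng.lognormal(mean=0.04, sigma=0.15, size=(n_paths, years))
nav = outlay * np.cumprod(growth, axis=1)

# Yearly fees charged on the NAV, discounted back to time 0
discount = np.exp(-r * np.arange(1, years + 1))
avg_cost = (fee * nav * discount).sum(axis=1).mean()

print(f"average discounted costs: {avg_cost:.2f} per {outlay:.0f} invested")
```

The resulting single number is exactly the kind of synthetic cost disclosure the simplified table conveys: the average discounted value of all charges over the recommended horizon, per 100 invested.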
The reason is that products pursuing a target risk or managed with
respect to a specific market segment are time-varying portfolios of
assets and, consequently, their fair value is the weighted average of
the market values of the invested assets. No alternative evaluation
procedures are admissible, as they are likely to lead to values differ-
ent from those expressed by the market. In fact, it is clear that (unlike
what happens in return-target products) the financial engineering
of these products does not identify an implicit time horizon for the

A QUANTITATIVE FRAMEWORK TO ASSESS THE RISK–REWARD PROFILE OF NON-EQUITY PRODUCTS

investment: these products usually exhibit linear payoff structures which lead to a sort of equivalence between the prospective returns
at any future date. In other words, the absence of a boundary con-
dition effectively binding at a given future time implies that there is
no single eligible time to be inserted in the formula of Equation 2.1
in order to derive the fair value.
For these reasons, it is preferable to avoid information that would be confusing for investors. It seems instead more reasonable to limit the content of the financial investment table to the present value of the costs faced if investors use the recommended time horizon, keeping in mind its real meaning: objective advice on a prudential exit strategy. And in fact, as explained in more
detail in Chapter 4, for these products the recommended time hori-
zon is exactly the minimum holding period, determined as the time
required to be reasonably confident that the costs of the investment
will be fully recovered.
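The minimum-holding-period idea can be sketched as follows; the 4% initial cost, the volatility, the yearly grid and the 50% confidence rule are assumptions for illustration only (Chapter 4 presents the actual methodology).

```python
import numpy as np

rng = np.random.default_rng(5)

# Hedged sketch: the first year at which the probability of recovering
# the initial costs exceeds 50%. Cost level, volatility and the 50%
# confidence proxy are assumed for illustration, not taken from the book.
n, r, sigma, costs = 400_000, 0.03, 0.15, 4.0
start = 100.0 - costs                       # net initial value after costs

mhp = None
for t in range(1, 11):                      # yearly grid, up to 10 years
    z = rng.standard_normal(n)
    # risk-neutral lognormal evolution of the net invested value
    value = start * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    if (value >= 100.0).mean() >= 0.5:      # "reasonably confident" proxy
        mhp = t
        break

print(f"minimum holding period: {mhp} years")
```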

2.3 FIRST PILLAR AND NON-ELEMENTARY PRODUCTS


When non-equity products feature a non-elementary financial engi-
neering, the unbundling of the initial price between fair value and
total costs continues to play an important informative role, but
it is no longer sufficient to provide investors with an adequate
understanding of some key relations connecting risks and potential
returns.
Complex financial structures often recur in return-target prod-
ucts, mainly due to the presence of explicit or implicit derivative
components.
Derivatives can either coincide with non-equity products (such
as certificates or covered warrants) or be wrapped in their financial
engineering (for example, structured bonds, OBPI funds or unit-linked policies). In these cases the derivative is explicitly recognised as part of the investment's structure, which essentially aims at proposing to investors a bet on the future behaviour of specific assets or reference values such as shares, financial indices, interest rates,
inflation, exchange rates and commodities.
The complexity of these products arises from investors' difficulty in immediately getting a proper assessment of the possible effects of all the variables involved and a good comprehension of the algorithms

behind the results attainable over the period spanned by their time
horizon.
In the absence of a suitable informative set, non-professional
investors tend not to care about the volatility of financial variables
underlying the products and their assessments typically rely on
the implicit assumption that spot values of these variables will not
change over the life of the investment.
Moreover, the non-linear way in which many derivatives are nested in the engineering of the product can be very sophisticated and counter-intuitive; hence such products merit stronger disclosure tools to prevent investors from taking their decisions on the basis of biased beliefs.
Products embedding implicit derivatives can be even more insidious, since their influence on the riskiness and the opportunities of the investment is hard for investors to appreciate correctly. A meaningful example is once again plain-vanilla bonds with significant credit risk exposures. The simplicity of their payoff structure attracts investors, who commonly sort the different alternatives just by looking at the size of the coupons. This typical investor behaviour, however, implicitly assumes that the coupon size is proportional to the material credit risk of the security (which is not necessarily true) or, even worse, completely disregards this risk. As explained in Section 2.1, despite their apparent simplicity, the risk-neutral density function of these bonds takes a bimodal shape reflecting how the likelihood of the credit event strongly affects the probability of losses, their severity, or both.
Information conveyed by the second pillar of the risk-based
approach (namely, the degree of risk; Chapter 3) is a useful warn-
ing of the overall riskiness of non-elementary products, and the
financial investment table aids understanding of how expensive
they are. However, in order to support enlightened investment deci-
sions, these products require supplementary information highlight-
ing the contribution given to the fair value by the risk-free compo-
nent and the risky component, respectively, and how the various
risks involved can affect the final values achievable by investors.
Inside the first pillar, a further analysis of the risk-neutral density
of ST satisfies the first requirement by increasing the informative
detail of the financial investment table, and the second by using a
synthetic table of probabilistic scenarios.

2.3.1 Increasing the detail of the financial investment table


Assuming market completeness and the absence of arbitrage opportunities, any financial asset can be replicated, under the risk-neutral measure, by some well-defined portfolio of other assets.
A well-known example demonstrates that an option is equivalent
to a portfolio composed of precise quantities of the risk-free asset (see
Equation 2.2) and of the underlying asset. Moreover, by exploiting
this equivalence it is possible to obtain the replicating portfolio for
the underlying asset in terms of well-defined quantities of the risk-
free asset and of the option.
In general terms, the replicating portfolio is not unique: the engi-
neering of a given non-equity product (specifically, if pursuing a
target return) admits alternative representations that also depend
on the detail used to identify its so-called elementary components.
A classical example in the literature is non-defaultable fixed-cou-
pon bonds, which can be seen either as portfolios of zero-coupon
bonds with different notional amounts, or as the combination of a
floating-rate bond with an interest rate swap (the latter realising the switch from floating to fixed coupons).
From the point of view of the disclosure provided to investors,
these degrees of freedom make the comparison across different prod-
ucts difficult, as issuers can offer different representations of the
same financial structure. In addition, they usually exploit the non-
uniqueness of the replicating portfolio to show only explicit derivative components, failing to highlight other derivatives embedded in
the product, as in the case of defaultable bonds.
In fact, these bonds are typically represented as if they were exposed only to the movements of the yield curve or, when they are structured, by separating the bond component from the derivative on the specific underlying asset, which carries a dependence on some further source of market risk. Most of the time, no evidence is given of the credit derivative that the possibility of the issuer failing to live up to its commitments embeds in these products. Nonetheless, as shown by Merton (1974), this possibility implies that a credit derivative is implicitly sold by bond-holders to the issuer, who thereby achieves a sort of insurance against its own credit risk.
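The Merton (1974) insight can be made concrete with a minimal sketch: under the model's assumptions, a defaultable zero-coupon bond equals a risk-free zero minus a put option on the firm's assets, so the put is exactly the credit derivative implicitly sold by bond-holders. All numerical inputs below are hypothetical.

```python
import math
from statistics import NormalDist

N = NormalDist().cdf  # standard normal cumulative distribution function

def merton_defaultable_zcb(V0, F, r, sigma, T):
    """Merton (1974): a defaultable zero-coupon bond with face value F
    equals a risk-free zero minus a put on the firm's assets V struck at F.
    Returns (risk-free value, implicit put value, defaultable bond value)."""
    d1 = (math.log(V0 / F) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    risk_free = F * math.exp(-r * T)
    put = F * math.exp(-r * T) * N(-d2) - V0 * N(-d1)  # Black-Scholes put on V
    return risk_free, put, risk_free - put

# Hypothetical figures: firm assets 120, debt face value 100, 5-year maturity
rf, put, bond = merton_defaultable_zcb(V0=120, F=100, r=0.03, sigma=0.25, T=5)
print(f"risk-free: {rf:.2f}  implicit put sold by bond-holders: {put:.2f}  "
      f"defaultable bond: {bond:.2f}")
```

The gap between the risk-free value and the bond value is precisely the value of the credit derivative that standard representations fail to disclose.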
The need for a clear, exhaustive and comparable understanding by investors suggests defining a uniform criterion to represent

the different components of sophisticated non-equity products. For this reason, information provided by the financial investment table
is increased by using the portfolio replication principle in order to
highlight the two key contributors to the fair value of such products:
the risk-free component and the risky one.
This criterion is valid for any non-elementary product: it leaves no room for issuer discretion and avoids the technical terms of market jargon, which are quite unfamiliar to retail investors. In order to obtain such a decomposition it is useful to give the following preliminary definition.

Definition 2.1. Consider a non-elementary return-target product expiring at time T such that

• the stochastic process $\{S_t\}_{t\in[0,T]}$ denotes the dynamics of the product over time and, in particular, $S_0$ is the fair value determined according to Equation 2.1;

• $t_1, t_2, \dots, t_n = T$ are the payment dates identified by the specific financial structure of the product.

Then, the risk-free security associated with this product is defined as the non-defaultable floating-coupon bond such that

• the notional capital is equal to $S_0$;

• at the payment dates $t_1, t_2, \dots, t_n = T$ the following cashflows, expressed in percentage of $S_0$, are paid:

$$R_{t_{i-1},t_i}(t_i - t_{i-1}) \quad \text{if } i < n \qquad (2.3)$$

$$1 + R_{t_{i-1},t_i}(t_i - t_{i-1}) \quad \text{if } i = n \qquad (2.4)$$

where $t_0 = 0$ and $R_{t_{i-1},t_i}$ denotes the annualised risk-free rate set at time $t_{i-1}$ to be paid at time $t_i$.4

The stochastic process $\{X_t\}_{t\in[0,T]}$ denotes the dynamics of the risk-free security over time. In particular, $X_0$ is the fair value of the risk-free security determined according to Equation 2.1.
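As a consistency check of Definition 2.1, a floating-coupon bond paying the risk-free rate should price at par, ie $X_0 = S_0$. The sketch below verifies this under the simplifying assumption of a flat, deterministic risk-free curve; the framework itself does not require this assumption.

```python
import math

def risk_free_security_value(S0, payment_dates, r):
    """Time-0 value of the risk-free security of Definition 2.1 under a
    flat, deterministic rate r (simplifying assumption): coupons
    R_{t_{i-1},t_i}(t_i - t_{i-1}) at t_1..t_{n-1} and the same plus the
    notional at t_n = T, expressed in percentage of S0 (Eqs 2.3-2.4)."""
    value, t_prev = 0.0, 0.0
    n = len(payment_dates)
    for i, t in enumerate(payment_dates, start=1):
        tau = t - t_prev
        # simply compounded rate for [t_prev, t] implied by the flat curve
        R = (math.exp(r * tau) - 1.0) / tau
        cashflow = R * tau + (1.0 if i == n else 0.0)
        value += S0 * cashflow * math.exp(-r * t)   # discount back to time 0
        t_prev = t
    return value

S0 = 100.0
X0 = risk_free_security_value(S0, payment_dates=[1, 2, 3, 4, 5], r=0.03)
print(f"X0 = {X0:.6f}")
```

The discounted coupon stream telescopes, so the floater prices exactly at par: `X0` equals the notional `S0`, in line with Equation 2.7 below.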

By using Definition 2.1 it is possible to state the following theorem, which identifies a specific replicating portfolio for the non-equity product of interest.

Theorem 2.2. Any non-elementary return-target product can be replicated by a portfolio composed of the associated risk-free security and of a zero-value swap which transforms the cashflow structure of the risk-free security into the cashflow structure of the product itself, ie, denoting by $\{\text{swap}_t\}_{t\in[0,T]}$ the value process of the swap,

$$S_t = X_t + \text{swap}_t \quad \text{for all } t \in [0,T] \qquad (2.5)$$

and

$$\text{swap}_0 = 0 \qquad (2.6)$$

Proof. The proof is straightforward by observing that Definition 2.1 implies that the risk-free security associated with a specific non-elementary return-target product has the same fair value as the product itself, ie

$$X_0 = S_0 \qquad (2.7)$$

In the traditional logic of decomposition of the fair value into a bond-like component and a derivative component, the replicating portfolio of Equation 2.5 considered per se does not offer any useful information to investors. In fact, the nullity of the swap at the issue date, expressed by Equation 2.6, leads to a trivial unbundling of $S_0$ (ie, 100% in the bond-like component and 0% in the derivative component).
Instead, in the context of the split between the risk-free and the
risky component, the swap assumes a different meaning. In fact, by
construction it is a financial instrument that has two legs, and with
an appropriate restructuring of these legs it is possible to maintain
an initial value of zero and, at the same time, uniquely qualify the
risky component of the non-equity product.
Intuitively, this restructuring of the two legs serves to ensure that their values at time T represent, respectively, the cases where the non-equity product outperforms the corresponding risk-free security and those where it performs worse than that security. The discounted
expected values of the two legs are equal, except for the sign, hence
guaranteeing on the one hand the zero value of the swap at the issue
date and on the other hand an unambiguous estimate of the unique
risky component of S0 . Finally, in this way it is possible to easily
determine the value of the risk-free component of the non-equity
product.

From an operational point of view, the aforementioned restructuring of the swap can be done through a technique based on the trajectory-by-trajectory comparison5 between the final value of the product and that of the corresponding risk-free security; this comparison also allows for the immediate identification of the two re-synthesised legs of the swap.
Formally, the first step in breaking down the fair value in accordance with the above procedure is given by the next proposition, which splits the final value of the swap into the difference between two random variables that have the same discounted expected value.
Proposition 2.3. The random variable $\text{swap}_T$, representing the final value of the synthetic swap considered by Theorem 2.2, can be decomposed as follows:

$$\text{swap}_T = \text{swap}_T^+ - \text{swap}_T^- \qquad (2.8)$$

where

$$\text{swap}_T^+ = \begin{cases} \text{swap}_T & \text{if } \text{swap}_T > 0 \\ 0 & \text{if } \text{swap}_T \leq 0 \end{cases} \qquad (2.9)$$

and

$$\text{swap}_T^- = \begin{cases} 0 & \text{if } \text{swap}_T > 0 \\ -\text{swap}_T & \text{if } \text{swap}_T \leq 0 \end{cases} \qquad (2.10)$$

Moreover, the following equality holds:

$$E^Q\left[\frac{\text{swap}_T^+}{B_T}\right] = E^Q\left[\frac{\text{swap}_T^-}{B_T}\right] \qquad (2.11)$$

Proof. Equation 2.8 immediately follows from the standard representation of a random variable as the difference of its positive and negative parts, which are given by Equations 2.9 and 2.10, respectively.

By standard risk-neutral pricing, Equation 2.6 can be rewritten as

$$E^Q\left[\frac{\text{swap}_T}{B_T}\right] = 0 \qquad (2.12)$$

Substituting the right-hand side of Equation 2.8 into the left-hand side of Equation 2.12 gives

$$E^Q\left[\frac{\text{swap}_T^+ - \text{swap}_T^-}{B_T}\right] = 0$$

that is, by the linearity of the expected value,

$$E^Q\left[\frac{\text{swap}_T^+}{B_T}\right] - E^Q\left[\frac{\text{swap}_T^-}{B_T}\right] = 0$$

and hence Equation 2.11 holds.

Proposition 2.3 gives a clear explanation of the nullity, at time 0, of the swap defined in Theorem 2.2. In fact, the equality between the discounted expected values of the random variables $\text{swap}_T^+$ and $\text{swap}_T^-$ indicates that, on average, the states of the world where the non-equity product beats its corresponding risk-free security at time T are balanced by the states of the world where the opposite holds. It follows that the discounted expected value of either of these two variables is an estimate of the value of the risky component of $S_0$, as stated by the next definition.
Definition 2.4. The risky component, $S_0^{\text{risky}}$, of the fair value of a given non-elementary return-target product is defined as

$$S_0^{\text{risky}} = E^Q\left[\frac{\text{swap}_T^+}{B_T}\right] \qquad (2.13)$$

or, equivalently, as

$$S_0^{\text{risky}} = E^Q\left[\frac{\text{swap}_T^-}{B_T}\right] \qquad (2.14)$$
Definition 2.4 is fully consistent with the framework established
by Definition 2.1 and Theorem 2.2. In particular, according to Defini-
tion 2.4, the risky component of a non-defaultable floating-coupon
bond like the one considered in Definition 2.1 is zero.
At this point, the estimate of the risk-free component, $S_0^{\text{rf}}$, of any given product is easily provided by the following definition.

Definition 2.5. Let $S_0$ and $S_0^{\text{risky}}$ be the fair value of a non-elementary return-target product and its risky component, respectively. Then, the risk-free component, $S_0^{\text{rf}}$, of the fair value of the product is defined as

$$S_0^{\text{rf}} = S_0 - S_0^{\text{risky}} \qquad (2.15)$$
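The whole decomposition (Theorem 2.2, Proposition 2.3 and Definitions 2.4–2.5) can be sketched by Monte Carlo. The product below is an illustrative construction, a risk-free security plus a fairly priced at-the-money call financed at issue, chosen only so that $\text{swap}_0 = 0$ holds by design; rates are deterministic for simplicity.

```python
import numpy as np

rng = np.random.default_rng(42)

n, r, T, sigma, K = 200_000, 0.03, 5.0, 0.20, 100.0
B_T = np.exp(r * T)                      # deterministic money-market account

# Risk-neutral lognormal underlying and an at-the-money call payoff
U_T = 100.0 * np.exp((r - 0.5 * sigma**2) * T
                     + sigma * np.sqrt(T) * rng.standard_normal(n))
payoff = np.maximum(U_T - K, 0.0)
call_price = (payoff / B_T).mean()       # MC estimate of the call fair value

X_T = 100.0 * B_T                        # risk-free security, coupons reinvested
S_T = X_T + payoff - call_price * B_T    # product final value; fair value = 100

swap_T = S_T - X_T                       # Theorem 2.2 synthetic swap at time T
swap_plus = np.maximum(swap_T, 0.0)      # Eq. 2.9
swap_minus = np.maximum(-swap_T, 0.0)    # Eq. 2.10

S0_risky = (swap_plus / B_T).mean()      # Definition 2.4 (Eq. 2.13)
S0_risky_alt = (swap_minus / B_T).mean() # Eq. 2.14: same value, as in Eq. 2.11
S0_rf = 100.0 - S0_risky                 # Definition 2.5 (Eq. 2.15)
print(f"risky: {S0_risky:.2f}  (check: {S0_risky_alt:.2f})  risk-free: {S0_rf:.2f}")
```

Because the call is priced from the same simulated paths, the two estimates of the risky component coincide, illustrating the balance stated by Equation 2.11.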

2.3.2 The table of probabilistic scenarios


The fair value is, by definition, a synthetic value of the risk-neutral
density of the possible final payoffs; therefore, it ignores informa-
tion provided by moments of order higher than one and it does not


Figure 2.1 Comparison of the densities of two non-equity products with the same fair value

[Figure: risk-neutral densities (y-axis scale ×10⁻⁴) of a risk-free floating-coupon bond and of a defaultable fixed-coupon bond, over final values ranging from 0 to 240.]

allow for the appreciation of the degree of randomness characterising the performance of a given product. In fact, the same discounted expected value may be obtained from an infinite number of density functions, even ones with very different shapes. These shapes completely qualify the peculiar relation between the riskiness and profitability of any single product, which is the fundamental information that investors use as a reference point to assess whether an investment proposal effectively meets their needs.
The inherent conciseness of the fair value becomes particularly relevant when the risk-neutral density of the product is quite irregular. Investors naturally tend to figure out the final performance by using, as its proxy, the fair value compounded at the risk-free rate until the expiry of the time horizon. Such reasoning is somewhat meaningful when the density is regular; otherwise it is biased, since it implicitly assumes that an entire distribution can be exhaustively depicted by considering only the first moment, which is generally false.
Figure 2.1 clarifies this point by comparing the risk-neutral densi-
ties of two non-equity products with identical fair values (both equal
to 100) and expiry dates but characterised by very heterogeneous risk–return profiles. The first security is a risk-free floating-coupon
bond, while the second is a subordinated bond which pays fair
fixed coupons, meaning that it gives an extra return over the risk-
free rate in line with the one required by the market to compensate
for the underlying credit risk exposure.


The difference between the two densities is striking, and it is only partly signalled by the fact that the risky component (see Section 2.3.1) is 0 for the first product and 15.7 for the second. In
particular, the former density is fully contained in the region of pay-
offs higher than 100, meaning that the buyer of the security will
achieve only positive returns. On the other hand, the other density is
bimodal, as expected. Consequently, should some credit event occur
over the life of the product, investors would receive only the recovery
value (set equal to 20% of the notional capital) compounded at the
risk-free rate until maturity, which explains the dispersion above the
minimum threshold of 20 observed in the region of payoffs lower
than 100. Otherwise, if no credit event occurs, then investors will
benefit from positive returns that on average are higher than returns
of the risk-free asset.
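A minimal simulation reproduces the bimodal shape just described; the default intensity, recovery rate and coupon rule below are hypothetical and are not the parameters behind Figure 2.1.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical parameters for a defaultable fixed-coupon bond
n, T, r = 200_000, 5.0, 0.03
hazard, recovery = 0.05, 20.0                 # default intensity; recovery per 100

tau = rng.exponential(1.0 / hazard, size=n)   # simulated default times
defaulted = tau < T

# Fixed coupon: risk-free rate plus a spread in line with the expected
# credit loss, so the bond is "fairly" priced on average
coupon = 100.0 * (np.exp(r) - 1.0) + hazard * (100.0 - recovery)

years = np.arange(1.0, T + 1.0)
# No default: yearly coupons reinvested at r, plus the notional at T
no_default_final = 100.0 + (coupon * np.exp(r * (T - years))).sum()

# Default: only the recovery value, compounded at r until maturity
final = np.where(defaulted,
                 recovery * np.exp(r * (T - np.minimum(tau, T))),
                 no_default_final)

p_default = defaulted.mean()
print(f"P(default) = {p_default:.3f};  low mode spread over "
      f"[{recovery:.0f}, {recovery * np.exp(r * T):.1f}];  "
      f"high mode at {no_default_final:.1f}")
```

The two clusters of `final`, the dispersed recovery values just above 20 and the mass of full-repayment outcomes, are exactly the two modes visible in the defaultable-bond density.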
Comparing the two densities, it is self-evident that, despite hav-
ing the same fair value, these two products carry different levels of
riskiness and profitability which must be disclosed since they are
suitable for investors who have very different utility functions.
The same argument applies to many non-equity return-target
products featuring non-elementary payoff structures that translate
into quite uneven density functions, as in the case of complex finan-
cial engineering (such as in the presence of one or more deriva-
tive components, also implicitly incorporated in the structure) but
also in the case of unsophisticated products that present significant
exposures to certain risk factors.
For these products the existence of a time horizon implicit in
their financial engineering ensures the applicability of the con-
tingent claim evaluation theory under the risk-neutral measure
(see Section 2.2). By relying on the risk-neutral density at time
T, this theory can be usefully exploited to improve the quality of
the information conveyed to investors precisely in order to give a
more comprehensive representation of the riskreturn profile of the
product.
This is partially achieved by enriching the informative set pro-
vided to investors with increased detail offered inside the financial
investment table, as explained in the previous section. However, as
long as the figures displayed in that table are only valid on aver-
age, there still remains a huge information gap, which requires the
development of further indicators that, starting from the risk-neutral


Figure 2.2 Densities of the same non-equity product obtained with two different models

[Figure: risk-neutral densities (y-axis scale ×10⁻⁴) under the Hull–White and Cox–Ingersoll–Ross models, over final values ranging from 0 to 240.]

density, provide a clear and objective illustration of the levels of the possible performances and of their variability.
Ideally, the maximum transparency on these topics is offered pre-
cisely by the availability of the risk-neutral density of the final pay-
offs of the product. However, it is unlikely that retail investors have
the statistical and financial background necessary to autonomously
handle an entire probability distribution.
In addition, most of the time, the presence of the so-called model
risk must also be considered, which implies that this distribution is
not unique. In fact, for any given product, once the family of models suitable for dealing with the associated pricing problem has been identified, the different models of this family unfortunately lead to different risk-neutral densities.
Figure 2.2 shows the risk-neutral density of the same subordinated
floating-coupon bond as obtained by using two different stochastic
term structure models developed respectively by Hull and White
(1990) and by Cox et al (1985).
Although both models belong to the class of one-factor affine short-rate models, Figure 2.2 shows that the two densities do not
coincide. In this specific case the reason is that, apart from the credit
risk exposure, the cashflows of the bond are strongly dependent on
the time evolution of the interest rates, which varies between the
two models.


The above arguments indicate that the raw disclosure of the risk-
neutral density is not a viable way to fill the above-mentioned
information gap, since the richness and flexibility of information
connected with such density would come at the cost of an increased
complexity of comprehension for the average investor.
What is really needed is a method able to efficiently exploit the
information embedded in the risk-neutral density and to smooth the
differences between densities generated by different models.
The solution envisaged by the first pillar is the so-called table of
probabilistic scenarios (hereafter also referred to as the probability
table), which summarises the salient features of the variability char-
acterising the potential performance of the investment by partition-
ing the density into a few mutually exclusive macro-events which
are of concrete interest for the investor.
In general terms, this solution relies on the well-known princi-
ple of reduction in granularity, which turns out to be very helpful
in diminishing the relevance of the model risk. Returning to the example of Figure 2.2, an elementary partition of the two densities displayed there into two complementary events gives very similar probabilities. For instance, defining the two events with respect to the issue price of 100 (ie, the final value of the investment is lower than the issue price, and the final value is higher than the issue price), the probabilities of these events are practically the same under both models, namely 26.8% for the first event and 73.2% for the second.
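The granularity-reduction mechanics can be sketched as follows. The two simulated densities below are purely illustrative stand-ins with the same mean (they are not Hull–White and Cox–Ingersoll–Ross simulations); the point is only that a coarse two-event partition yields probabilities that differ by a few percentage points at most.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two illustrative stand-in densities with the same mean, proxying the
# model risk of Figure 2.2: a lognormal and a heavier-tailed Student-t law
n = 500_000
model_a = 115.0 * rng.lognormal(-0.5 * 0.18**2, 0.18, n)
model_b = 115.0 * (1.0 + 0.18 * rng.standard_t(df=8, size=n) / np.sqrt(8 / 6))

def two_event_partition(final_values, issue_price=100.0):
    """Reduce a whole density to two complementary macro-events."""
    below = (final_values < issue_price).mean()
    return below, 1.0 - below

pa = two_event_partition(model_a)
pb = two_event_partition(model_b)
print(f"model A: P(final < 100) = {pa[0]:.3f}   "
      f"model B: P(final < 100) = {pb[0]:.3f}")
```

Although the two densities differ pointwise, the coarse partition maps them onto nearly the same pair of probabilities, which is precisely why the reduction in granularity dampens model risk.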
Clearly, by choosing a different partition, the probabilities of a
given event under the two models could differ. And, in general
terms, the more irregular the financial structure of the product the
more likely it is that the same partition applied to densities obtained
from different models could exhibit some differences. But typically these differences will not be significant, especially when the interest is in capturing the key message conveyed by a specific partition.
From a technical point of view, there are infinitely many ways to aggregate the elementary events of a probability distribution into mutually exclusive macro-events; and, in principle, any individual might want to know in depth specific subsets of the risk-neutral density. However, the need to endow all investors with the same core information and to preserve comparability across products requires consideration of the same macro-events for all non-elementary return-target products.


In this context the most reasonable choice is to identify the macro-events according to criteria apt to immediately disclose the performance risk of a product, defined as its ability to create added value for the investor with respect to the initial outlay per se and also with respect to the results achievable by taking an alternative investment decision.
In the case of non-elementary return-target products represent-
ing a direct financial investment, that is, an investment that does not
intervene to substitute or modify a pre-existing product or liability
held by the investor, the above alternative could be any non-equity
product. But, in the absence of a specific pre-existing position, the
choice of a particular product available on the market would be dis-
cretionary. Hence, it is necessary to look for an alternative investment
which minimises any arbitrary assumption on investors' preferences
and, at the same time, is able to represent in a clear, immediate and
significant way how the specific risk factors and financial structure
of the non-equity product of interest will affect the payoffs that can
be obtained. From this perspective, the simplest choice is to compare
the risk-neutral density of the product with the density associated
with the investment of the same amount of money in a risk-free asset.
The latter is intended as an investment which, over the same time
horizon of the product and given an initial outlay equal to the issue
price of 100, pays a return equal to that accrued at the risk-free
rates of the currency area where the product is sold. As seen in Section 2.2, in finance this process is also called the risk-neutral numéraire and it is modelled through the stochastic process $\{B_t\}_{t\in[0,T]}$, which is governed by Equation 2.2, ie

$$B_T := \exp\left(\int_0^T r_s\,\mathrm{d}s\right)$$

which reveals that the unique source of uncertainty for the risk-free asset is the movement of the interest rate curve.6 Hence, the risk-neutral density of the final values of an initial investment of 100 in the risk-free asset reproduces exactly the impact of interest rate volatility on the returns of a financial investment, ensuring that the comparison with the non-equity product highlights the influence of the specific features that characterise such a product.
The partition technique used to make this comparison is the super-
imposition of the two densities. According to this technique, the
risk-neutral density of the product is partitioned with respect to
fixed thresholds which are exogenously identified depending on the


point of zero return (ie, the final value of the investment is equal to 100) and on the risk-neutral density of the risk-free asset. This allows the information connected with the second moment of the product's probability distribution to be highlighted and, hence, the implications of the volatility of its returns for the payoffs that investors can face at the end of the investment time horizon to be understood.
The full methodology adopted to determine the probability table is explained in Section 2.3.3. The final output is a table which displays the probabilities of four alternative macro-events and a synthetic indicator of the final value of the investment associated with each of them. For some products, depending on the specific shape of their risk-neutral density function, this information set is supplemented by further indicators in order to guarantee investors' complete comprehension and an honest comparison with the risk-free asset.
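A hedged sketch of how such a four-event table could be populated is given below; the lognormal densities and the quantile-based band used to mark final values "in line with" the risk-free asset are assumptions for illustration, not the book's exact calibration (which Section 2.3.3 presents).

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative lognormal densities for the product and the risk-free asset;
# the 2.5%/97.5% quantile band defining "in line with risk-free" is an
# assumed convention, not the book's calibration.
n, T = 500_000, 5.0
risk_free = 100.0 * np.exp(0.03 * T
                           + 0.015 * np.sqrt(T) * rng.standard_normal(n))
product = 100.0 * np.exp((0.03 - 0.5 * 0.25**2) * T
                         + 0.25 * np.sqrt(T) * rng.standard_normal(n))

lo, hi = np.quantile(risk_free, [0.025, 0.975])   # risk-free band thresholds

events = {
    "negative return":            product < 100.0,
    "positive, below risk-free":  (product >= 100.0) & (product < lo),
    "in line with risk-free":     (product >= lo) & (product <= hi),
    "higher than risk-free":      product > hi,
}
for name, mask in events.items():
    mean = product[mask].mean() if mask.any() else float("nan")
    print(f"{name:28s}  P = {mask.mean():6.1%}   mean final value = {mean:7.1f}")
```

Each row of the printed table pairs a macro-event probability with a synthetic indicator (here, the mean final value conditional on the event), mirroring the structure of the probability table.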
Regardless of these technical aspects, it is clear that, by properly exploiting the principle of reduction in granularity, the table of probabilistic scenarios attains the goal of reducing the amount of data to be handled by the investor while preserving, at the same time, the core of the additional information that the risk-neutral probability density contains with respect to the fair value, so that investors achieve a much better awareness of the overall performance risk behind a non-elementary product.
In fact, although in general a higher fair value corresponds to a
greater profitability of the investment, this indication is only true
on average as stated above. On the other hand, the probability table
has the advantage of coming from the same risk-neutral density
used to determine the fair value and of preserving, at the same time,
the fundamental information related to the specific shape of this
distribution without being excessively exposed to the model risk.
In this way, fair values and time horizons being equal, it becomes possible to understand the effect that the investment risks have on performance, hence identifying, case by case, a safer product (similar in substance to the risk-free asset) or, conversely, a particularly daring product which, for example, combines a high probability of obtaining results more desirable than the risk-free asset with a non-negligible probability of receiving returns that are positive but not competitive, or even negative.
It is worth pointing out that, for products representing a direct
financial investment, the probabilistic scenarios do not allow for a

36

i i

i i
i i

minenna 2011/9/5 13:53 page 37 #69


i i

THE FIRST PILLAR: PRICE UNBUNDLING AND PROBABILISTIC SCENARIOS

point-to-point comparison with the risk-free asset, but rather offer a comparison based on the analysis of the positioning of the product's density with respect to that of the risk-free alternative investment.
The main alternative methodology, consisting in carrying out the
comparison trajectory by trajectory, would have the inconvenience
of being quite difficult to disclose to retail investors, as it would produce relative performance indicators, which are unfamiliar to investors and require excessive effort to interpret.
However, as illustrated in Section 2.3.4, the trajectory-by-trajectory
comparison turns out to be very useful when the product is
intended to replace a pre-existing investment (as in the case of
exchange public offerings or when an investor is willing to replace a
specific non-equity product they are holding), is a structured financial
liability, or intervenes to modify the features of an existing
liability (such as a mortgage).
Both the superimposition and the trajectory-by-trajectory techniques
require a subdivision of the density into a limited number of
events; these classes of events are obviously coarser grained, and do
not necessarily capture finer nuances related to different modelling
hypotheses. Nevertheless, as noted above, this reduction of granularity
is not a problem in itself, as practically any reasonable level of
information within this framework can be captured, while reducing
the level of complexity involved in the comprehension of an abstract
object such as a probability distribution.
In fact, it has to be considered that, from a practical point of view,
even if, after the reduction in granularity, slight differences in the
probability scenarios under different model specifications persist,
usually they are not determinant variables in the investors' selection
process. In other words, a significance threshold in retail
investors' perception seems to exist, and this phenomenon makes
any differences of a few percentage points substantially irrelevant.
A well-executed reduction in granularity eventually enables the
sharing of useful information between issuers and investors, thus
allowing efficient use of limited amounts of available data and
helping to overcome the information asymmetry that characterises
the markets for non-equity investment products.
Before presenting the details of the two methodologies, it is worth
clarifying how the contents of the table of probabilistic scenarios
should be interpreted. The numbers in this table support investors'

A QUANTITATIVE FRAMEWORK TO ASSESS THE RISK-REWARD PROFILE OF NON-EQUITY PRODUCTS

understanding of the more-or-less balanced equilibrium between
the risks and opportunities of the product as they result from the
information available at the time of issue.
This means that they come from a prospective quantitative analysis
performed by using information available ex ante. The goal is to
provide investors with a useful tool to assess and compare different
investment proposals, but no claim is made that those numbers
will be fulfilled at the expiry of the time horizon. Nor may it be
otherwise, since (except in very unusual cases) each scenario has a
certain probability of occurrence. This clarification also allows the
exclusion of views that deem the representation of scenarios an
additional source of liability for issuers. Although probabilistic scenarios
are more detailed and, therefore, more meaningful than the mere
indication of the fair value, it must always be remembered that both
result from the same raw information given by the risk-neutral
probability density of the final payoffs.

2.3.3 Methodology to build the table of probabilistic scenarios
As argued in the previous section, in order to give a more complete
representation of the performance risk of non-elementary return-target
products, the first pillar requires partitioning the risk-neutral
density of the final values of the investment by choosing
some relevant points on the x-axis that inherently define different
events of interest for the average investor.
In this perspective the thresholds are exogenously identified, and
they depend on the point of zero return and on the risk-neutral
density of the risk-free asset.
In terms of final value of the investment, the point of zero return
corresponds to the issue price of the product, which, consistently
with the methodology behind the financial investment table (see
Section 2.2), is set equal to 100. As shown by Figure 2.3, this threshold
naturally defines the macro-event 'the final value of the investment
is lower than the issue price' and its complementary event 'the final
value of the investment is higher than the issue price' in a simple,
effective and objective way.
The assessment of the probability of recovering at least the amount
paid for the product is of great significance for the investor and it is
of course assimilated to the cost-recovery event;7 at the same time,

Figure 2.3 Partition of the risk-neutral density of the non-equity product
with respect to the point of zero return. (Histogram of the final values,
split at 100 into the regions 'final value lower than the issue price' and
'final value higher than the issue price'.)

the information related to the presence of a credit risk exposure, or
to the failure of any given mechanism of protection or guarantee, is
easily incorporated into the first of the two macro-events mentioned
above.
However, a good representation of the performance risk of a non-equity
product cannot be exhausted by the two macro-events 'lower
than the issue price' and 'higher than the issue price'. In fact, it is
important to measure this risk not only in absolute terms (ie, with
respect to the threshold of 100) but also with respect to the alternative
investment of the same amount of money in the risk-free asset,
since, under the risk-neutral measure, Q, the concept of loss (gain)
is not identified by the point of zero return, but by the subset of the
product's performances which are strictly lower (higher) than those
of the risk-free asset. The reason is that, in a risk-neutral world, the
time value of money is measured by using a stochastic process
with an instantaneous rate of return equal to the risk-free one.
In this context it becomes appropriate to explore further partitions
of the macro-event 'the final value of the investment is higher than
the issue price'.
Following the same line of reasoning that allowed the splitting
of the original probability density into two main macro-events, it
is possible to choose one or more thresholds directly related to the
final values of the risk-free asset.

Figure 2.4 Partition of the risk-neutral density of the non-equity product
with respect to the point of zero return and two fixed positive thresholds
α1 and α2 to be identified. (Legend: final value lower than the issue price;
final value lower than that of the risk-free asset; final value in line with
that of the risk-free asset; final value higher than that of the risk-free
asset.)

A good compromise between the level of information detail and
the need for a synthetic representation is to subdivide the macro-event
'higher than the issue price' into three alternative scenarios,
highlighting the behaviour of the product's density when the results
of the investment are, respectively, lower than, higher than or in line
with those of the risk-neutral numéraire. This can be done by simply
identifying two different reference thresholds (α1, α2) on the x-axis,
with 0 < α1 < α2 (see Figure 2.4).
At this point, the problem that arises is connected with the correct
selection of these reference thresholds, which need to be unambiguously
defined and not subject to arbitrary choices. An elegant and
direct solution to this issue exploits the density of the risk-free asset
(the so-called superimposition technique).
Since both the latter density and that of the non-equity product are
calculated under the risk-neutral measure Q, it is legitimate to compare
their final values and to properly define events to be quantified
using this measure.
The probability density function of the risk-neutral numéraire at
time T inherently associates a predetermined quantile of probability

Figure 2.5 Identification of the reference thresholds α1 and α2 on the
density of the risk-free asset.

with a final value of this asset; in the search for a coherent set of
reference thresholds, it is then possible to connect two values α1
and α2 on the x-axis with the events 'the final value of the risk-free
asset is lower than α_j' (for j = 1, 2), in terms of probability quantiles
on the final distribution of the numéraire. This is formally stated as
follows

    Q[100 B_T ≤ α_j] = P_j   for j = 1, 2                            (2.16)
where, as usual, T denotes the time horizon of the non-equity
product.
Figure 2.5 shows the identification of the reference thresholds on
the probability distribution of the risk-neutral numéraire at time T.
The choice of specific quantiles P_j (for j = 1, 2) on the density
of the risk-free asset to characterise the reference thresholds of the
product's density implicitly assumes a cut of a fixed percentage of
the trajectories of the process of the risk-free asset, which are inevitably
considered as not representative of the potential behaviour of the
process itself at time T. Hence, the cutting procedure can be considered
as a sort of correction aimed at excluding extreme events
from the probability distribution of the risk-neutral numéraire and,
to this end, the choice of the thresholds α1 and α2 must be related,
respectively, to very low and very high quantiles. Moreover, in
a broader sense, it is self-evident that if the original distribution
of the risk-neutral numéraire is defined in an open interval like

Figure 2.6 Partition of the risk-neutral density of the non-equity product
with respect to the point of zero return and to the two fixed positive
thresholds identified with the superimposition technique. (Legend:
risk-free asset; final value lower than the issue price; final value lower
than that of the risk-free asset; final value in line with that of the risk-free
asset; final value higher than that of the risk-free asset.)

[0, ∞[, cuts at given quantiles are compulsory in order to effectively
implement a criterion connected with the density of the risk-free
asset.
In this work the values of α1 and α2 are set, respectively, to the
2.5% and 97.5% percentiles,8 and hence Equation 2.16 becomes

    Q[100 B_T ≤ α1] = 2.5%
    Q[100 B_T ≤ α2] = 97.5%                                          (2.17)
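Operationally, when the density of the risk-free asset is available through Monte Carlo simulation, the thresholds of Equation 2.17 can be read off as empirical quantiles. The following sketch (not the book's implementation) uses a deliberately crude lognormal stand-in for the money-market account 100·B_T; the model and all its parameters are illustrative assumptions only.

```python
import math
import random

def simulate_risk_free(n_paths, T, r0=0.03, sigma=0.01, seed=42):
    """Illustrative stand-in for 100*B_T: a lognormal perturbation of
    exp(r0*T) with cumulative-rate noise of std sigma*sqrt(T)."""
    rng = random.Random(seed)
    return [100.0 * math.exp(r0 * T + sigma * math.sqrt(T) * rng.gauss(0.0, 1.0))
            for _ in range(n_paths)]

def quantile(sorted_xs, p):
    """Empirical p-quantile with linear interpolation (data must be sorted)."""
    idx = p * (len(sorted_xs) - 1)
    lo, hi = math.floor(idx), math.ceil(idx)
    w = idx - lo
    return sorted_xs[lo] * (1 - w) + sorted_xs[hi] * w

b_T = sorted(simulate_risk_free(50_000, T=5.0))
alpha_1 = quantile(b_T, 0.025)  # Q[100*B_T <= alpha_1] = 2.5%
alpha_2 = quantile(b_T, 0.975)  # Q[100*B_T <= alpha_2] = 97.5%
```

Because the thresholds are empirical quantiles of the simulated numéraire, they move automatically with any change in the simulated rate dynamics, which is precisely the anchoring property discussed next.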

It is interesting to observe that, by using this identification criterion,
the reference thresholds are automatically anchored to variations
in the position and the dispersion of the density of the risk-neutral
numéraire; consequently, these thresholds objectively
reflect changes in the volatility of interest rates and in the overall
market conditions.
Figure 2.6 offers a graphical synthesis of the overall technique of
superimposition with the density of the risk-free asset.


The four macro-events considered by the table of probabilistic
scenarios can be defined as:

the final value of the investment is lower than the issue price
(for short: lower than the issue price);
the final value of the investment is higher than the issue price
but lower than that of the risk-free asset (for short: lower than
the final value of the risk-free asset);
the final value of the investment is higher than the issue price
and in line with that of the risk-free asset (for short: in line with
the final value of the risk-free asset);
the final value of the investment is higher than the issue price
and higher than that of the risk-free asset (for short: higher
than the final value of the risk-free asset).

Formally, the probabilities associated with the four macro-events
have to be determined, respectively, as

    Q(S_T < 100)
    Q(100 ≤ S_T < α1)
    Q(α1 ≤ S_T < α2)
    Q(S_T ≥ α2)                                                      (2.18)

where the values of α1 and α2 are determined according to Equation 2.17.
From an operational point of view, the probabilities expressed in
Equation 2.18 can easily be obtained by processing the probability
density of S_T determined as indicated in Section 2.1.
Table 2.1 summarises the four macro-events described above and
their corresponding risk-neutral probabilities determined according
to Equation 2.18.
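As a sketch of this processing step, the four probabilities of Equation 2.18 can be estimated as relative frequencies on Monte Carlo draws of S_T. The sample generator and the threshold values below are hypothetical placeholders, not figures from the text.

```python
import math
import random

def scenario_probabilities(s_T, alpha_1, alpha_2):
    """Relative frequencies of the four macro-events of Equation 2.18."""
    n = len(s_T)
    p1 = sum(1 for x in s_T if x < 100.0) / n               # lower than issue price
    p2 = sum(1 for x in s_T if 100.0 <= x < alpha_1) / n    # lower than risk-free
    p3 = sum(1 for x in s_T if alpha_1 <= x < alpha_2) / n  # in line with risk-free
    p4 = sum(1 for x in s_T if x >= alpha_2) / n            # higher than risk-free
    return p1, p2, p3, p4

# hypothetical lognormal samples of the product's final value S_T
rng = random.Random(0)
s_T = [100.0 * math.exp(rng.gauss(0.02, 0.25)) for _ in range(100_000)]
probs = scenario_probabilities(s_T, alpha_1=108.0, alpha_2=124.0)  # assumed thresholds
```

Since the four intervals partition the real line, the four frequencies always sum to one, mirroring the fact that the macro-events are exhaustive and mutually exclusive.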
By using the layout of Table 2.1, the probabilistic scenarios
corresponding to Figure 2.6 are reported in Table 2.2.
It can immediately be seen that this tool is potentially able (albeit
in this very standard case) to synthesise in four simple numbers the
key features of the entire probability distribution of the non-equity
product.
Table 2.2 extracts important information about the statistical
dispersion of the final values of both the non-equity product and the


Table 2.1 Layout of the table of probabilistic scenarios

    Scenarios                                                   Probabilities (%)
    The final value of the investment is lower than
      the issue price
    The final value of the investment is higher than
      the issue price but lower than that of the risk-free asset
    The final value of the investment is higher than
      the issue price and in line with that of the risk-free asset
    The final value of the investment is higher than
      the issue price and higher than that of the risk-free asset

Table 2.2 Table of probabilistic scenarios for the product of Figure 2.6

    Scenarios                                                   Probabilities (%)
    The final value of the investment is lower than                   34.9
      the issue price
    The final value of the investment is higher than                   8.4
      the issue price but lower than that of the risk-free asset
    The final value of the investment is higher than                  52.5
      the issue price and in line with that of the risk-free asset
    The final value of the investment is higher than                   4.2
      the issue price and higher than that of the risk-free asset

risk-free asset. In fact, it is the variability of the density of the risk-free
asset that identifies the reference thresholds used to make the
superimposition and, hence, to measure the probabilities of the different
macro-events. It follows that the greater the variance of the risk-neutral
numéraire, the greater the distance between the values of
α1 and α2. In this perspective, momentarily setting aside the scenario
'lower than the issue price', the probabilities associated with the scenarios
'lower than the final value of the risk-free asset' and 'higher
than the final value of the risk-free asset' provide a first indication
of the dispersion of the non-equity product at time T.
However, this indication cannot be deemed exhaustive, for two
reasons.
Firstly, the product's density is partially affected by the same
source of variability as the risk-free asset. This comes from the fact
that, by working under the risk-neutral measure, the volatility of
interest rates also affects the behaviour of the stochastic process
of the non-equity product. Such influence will be more or less relevant
depending on the specific features of the product considered
(eg, a plain-vanilla bond will typically undergo this influence more
than a certificate whose underlying asset is a share or a basket of
shares). But, in any case, because of the risk-neutrality, there remains an
implicit correlation between the two densities involved in the superimposition,
and this correlation entails that the probabilities of the
macro-events 'lower than the final value of the risk-free asset' and
'higher than the final value of the risk-free asset' are not sufficient to
obtain a proper disclosure of the variability of the final performances
of the non-equity investment.
Secondly, a table that contains only the probabilities of the four
scenarios is necessarily blind to the way in which elementary events
are aggregated within each single macro-event of the partition. In
other words, the four probabilities alone would not provide an effective
synthesis of the specific probability density of a non-equity product,
because they ignore the behaviour of its final values over each
of the four intervals identified on the x-axis by the fixed reference
thresholds.
To better understand this point, Figure 2.7 shows the density of
the same product considered so far ('old'), which has a fair value
of 88.87, together with the density of another non-equity product
('new'), whose fair value is 85.03.
From the figure it is easy to see that the risk-return profiles of
the two products are quite different. Nevertheless, applying the reference
thresholds (ie, the point of zero return, α1 and α2), these
two investment alternatives display the same probabilities for each
scenario.
This example highlights the fact that, in order to provide a suitable
synthesis of the performance risk of a non-equity product and
to ensure a fair comparison across the different products available


Figure 2.7 Comparison of the densities of two non-equity products with
the same probabilities. (Legend: new product; old product.)

on the market, the information set contained in the table of probabilistic
scenarios cannot be confined to the sole indication of the four
probabilities.
An easy solution to this problem is therefore to complete the table
with an additional column of data concerning the level of the potential
performances that the product can achieve within each single
scenario. This means that the additional column must contain a
synthetic indicator of the possible values of the random variable S_T
conditioned upon the occurrence of each corresponding scenario.
Possible candidates are the conditional mean, median or mode
of S_T.
Given that the reduction in granularity achieved with the superimposition
technique comes at the cost of failing to catch some local
irregularities in the density of the product, among the candidates
mentioned above the conditional mean (or conditional expected
value) seems the most appropriate to recover, albeit only in part,
this loss of information. In fact, by definition, the median and the
mode are synthetic values aimed at capturing a specific point of a
probability distribution. On the other hand, even with the limitations
identified in Section 2.3.2, the mean (or expected value) is an indicator
that, by its statistical properties, has the advantage of taking
into account all possible values of a random variable and of weighting
these values by their relative frequencies. This property of


Table 2.3 Layout of the table of probabilistic scenarios with the
conditional means

    Scenarios                                                   Probabilities (%)   Mean values (€)
    The final value of the investment is lower than
      the issue price
    The final value of the investment is higher than
      the issue price but lower than that of the risk-free asset
    The final value of the investment is higher than
      the issue price and in line with that of the risk-free asset
    The final value of the investment is higher than
      the issue price and higher than that of the risk-free asset

the mean is obviously inherited by the conditional mean. In formal
terms, the conditional means for the four macro-events considered
in the probability table are determined, respectively, as follows

    E^Q(S_T | S_T < 100)      = (1 / Q(S_T < 100))      ∫_0^100  x f_{S_T}(x) dx
    E^Q(S_T | 100 ≤ S_T < α1) = (1 / Q(100 ≤ S_T < α1)) ∫_100^α1 x f_{S_T}(x) dx
    E^Q(S_T | α1 ≤ S_T < α2)  = (1 / Q(α1 ≤ S_T < α2))  ∫_α1^α2  x f_{S_T}(x) dx
    E^Q(S_T | S_T ≥ α2)       = (1 / Q(S_T ≥ α2))       ∫_α2^+∞  x f_{S_T}(x) dx
                                                                     (2.19)

where f_{S_T}(·) denotes the risk-neutral density of the final values of
the non-equity product and the values of α1 and α2 are determined
according to Equation 2.17.9
From an operational point of view, the conditional means expressed
in Equation 2.19 can also easily be obtained by processing
the probability density of S_T determined as indicated in Section 2.1.
At the end of these quantitative calculations, the table of probabilistic
scenarios can be filled in by using the standard template shown in
Table 2.3.
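A Monte Carlo counterpart of Equation 2.19 simply averages the draws of S_T falling in each macro-event. The sketch below reuses hypothetical samples and thresholds; these are assumptions for illustration, not the book's figures.

```python
import math
import random

def conditional_means(s_T, alpha_1, alpha_2):
    """Average of S_T over each of the four macro-events of Equation 2.19
    (None when a macro-event receives no samples)."""
    buckets = (
        [x for x in s_T if x < 100.0],
        [x for x in s_T if 100.0 <= x < alpha_1],
        [x for x in s_T if alpha_1 <= x < alpha_2],
        [x for x in s_T if x >= alpha_2],
    )
    return [sum(b) / len(b) if b else None for b in buckets]

# hypothetical lognormal samples of the product's final value S_T
rng = random.Random(0)
s_T = [100.0 * math.exp(rng.gauss(0.02, 0.25)) for _ in range(100_000)]
means = conditional_means(s_T, alpha_1=108.0, alpha_2=124.0)  # assumed thresholds
```

By construction each conditional mean lies inside its own interval, so the column of mean values is automatically consistent with the partition used for the probabilities.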


Table 2.4 Table of probabilistic scenarios with the conditional means for
the old product

    Scenarios                                                   Probabilities (%)   Mean values (€)
    The final value of the investment is lower than                   34.9               91.1
      the issue price
    The final value of the investment is higher than                   8.4              101.7
      the issue price but lower than that of the risk-free asset
    The final value of the investment is higher than                  52.5              117.3
      the issue price and in line with that of the risk-free asset
    The final value of the investment is higher than                   4.2              174.6
      the issue price and higher than that of the risk-free asset

Table 2.5 Table of probabilistic scenarios with the conditional means for
the new product

    Scenarios                                                   Probabilities (%)   Mean values (€)
    The final value of the investment is lower than                   34.9               83.5
      the issue price
    The final value of the investment is higher than                   8.4              101.8
      the issue price but lower than that of the risk-free asset
    The final value of the investment is higher than                  52.5              110.4
      the issue price and in line with that of the risk-free asset
    The final value of the investment is higher than                   4.2              215.2
      the issue price and higher than that of the risk-free asset

By coupling, for each scenario, its risk-neutral probability and the
associated mean value, the table is a useful tool for disclosing the
riskiness of the potential performances of any non-equity return-target
product and for guaranteeing a fair comparison across various
products.


For instance, coming back to the two products whose densities are
shown in Figure 2.7, the corresponding probability tables are given
respectively in Tables 2.4 and 2.5.
This example clearly shows that the further information on the
mean values gives an interesting insight into the peculiarities of each
of the two products that cannot be appreciated otherwise. In fact,
the conditional mean values of the first product outperform those
of the second one in three out of four scenarios; this indication, together
with the limited weight (only 4.2% probability) of the fourth scenario
(where the second product wins), allows for an immediate and
meaningful comparison of the two investment alternatives.
It is worth stressing that the data contained in the table of probabilistic
scenarios have a strong relationship with the fair value of the
non-equity product reported inside the financial investment table.
In fact, the fair value being merely the discounted expected value
of the risk-neutral density of S_T, it follows that, by discounting back
to the issue date (along the deterministic interest-rate curve observed
on the market) the mean of each scenario and then averaging these
discounted values by using the probabilities in the table, the result
is a rough approximation of the fair value of the non-equity product.
Hence, if such a proxy is too different from the fair value given
inside the financial investment table, this gap signals an
inconsistency in the overall information provided by the first pillar
which deserves to be carefully investigated.10
This point emerges by considering again the two products in Figure 2.7.
In fact, the fair value of the first product is 88.87, which is
very close to its proxy (ie, 88.83) obtained on the basis of the data
displayed in Table 2.4, while the fair value of the second product is
85.03, which is also very close to the proxy (85.12) obtained
from its probabilistic scenarios illustrated in Table 2.5.
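The consistency check just described can be sketched in a few lines. The probabilities and conditional means are those of Table 2.4; the flat discount rate and the time horizon, however, are illustrative assumptions (the book discounts along the market interest-rate curve), so the resulting proxy is not meant to reproduce the 88.83 figure exactly.

```python
import math

# data from Table 2.4 (the 'old' product)
probs = [0.349, 0.084, 0.525, 0.042]
cond_means = [91.1, 101.7, 117.3, 174.6]

# assumed flat continuously compounded rate and horizon (illustrative)
r, T = 0.04, 5.0

# probability-weighted average of the discounted conditional means
proxy_fair_value = sum(p * m for p, m in zip(probs, cond_means)) * math.exp(-r * T)
```

A proxy far from the disclosed fair value would flag exactly the kind of first-pillar inconsistency discussed above.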
In order to complete the explanation of the methodology underlying
the probability table, it is necessary to consider the choice of
partitioning the density of a product through the superimposition
technique, ie, with reference to fixed thresholds identified on the x-axis.
As mentioned in the previous section, this technique does not
make a pointwise comparison between the final values of the product
and those of the risk-free asset. Rather, it tells how frequently
the non-equity product performs in a range that is compatible with
the results of the risk-neutral numéraire (scenario 'in line with the
final value of the risk-free asset'), and how frequently the outcomes
of the non-equity investment fall in a range of values that are either
more or less appealing than those of the said numéraire (scenarios
'higher than the final value of the risk-free asset' and 'lower than
the final value of the risk-free asset', respectively).
In order to better highlight this point, it is useful to consider the
macro-event 'in line with the final value of the risk-free asset'. By
applying the methodology described so far, it could well happen
that the non-equity product would have most of its final values very
close to the lower threshold α1, implying that in most cases
the risk-free asset would perform better than the product; but, by
definition, this interesting information would not be captured by the
probability table, as it would report an unchanged probability for the
mentioned macro-event.
The example shows that the methodology adopted to build the
table can have a weakness in providing a proper comparison with
the risk-free asset, especially if the dispersions of the two target probability
densities are comparable in magnitude. This condition can be
satisfied when the non-equity product is characterised by low intrinsic
volatility and high correlation with the interest-rate volatility, as
in the cases of monetary funds and fixed- and floating-coupon bonds. In
fact, in these cases, by construction the non-equity product would
be designed to perform in a range of values that is comparable to
the possible results of an investment in the risk-free asset, and so the
procedure of superimposition would tend to produce results with
zero or little probability in the extreme scenarios and a very high
probability mass in the scenario 'in line with the final value of the
risk-free asset', and the mean value alone would not be enough to
allow an honest comparison with the risk-free alternative.
The same effect of loss of significance may be observed for very
long time horizons (such as in the case of long-dated bonds and
products with similar levels of risk and duration), as the volatility of
the instantaneous risk-free rate cumulates over time and the macro-event
'in line with the final value of the risk-free asset' tends to
incorporate most of the final outcomes of the non-equity product.
Figure 2.8 illustrates an example where a five-year non-equity
product has a density characterised by a degree of dispersion similar
to that of the risk-neutral numéraire.
The corresponding probability table is given in Table 2.6.


Figure 2.8 Comparison of the densities of the risk-free asset with a
non-equity product having a significant probability associated with the
scenario 'in line'. (Legend: density of the product; density of the risk-free
asset.)

Table 2.6 Table of probabilistic scenarios for the product of Figure 2.8
with the conditional means

    Scenarios                                                   Probabilities (%)   Mean values (€)
    The final value of the investment is lower than                    3.5               89.8
      the issue price
    The final value of the investment is higher than                   3.1              101.4
      the issue price but lower than that of the risk-free asset
    The final value of the investment is higher than                  92.7              105.5
      the issue price and in line with that of the risk-free asset
    The final value of the investment is higher than                   0.7              135.0
      the issue price and higher than that of the risk-free asset

In these situations the superimposition technique conveys very
little useful information to the investor, since the valuable information
in a low-dispersion environment is mostly connected with the
relative performance with respect to the risk-free asset.


Table 2.7 Layout of the table of probabilistic scenarios with the
conditional means and the (unconditional) mean of the risk-free asset

    Scenarios                                                   Probabilities (%)   Mean values (risk-free mean) (€)
    The final value of the investment is lower than
      the issue price
    The final value of the investment is higher than
      the issue price but lower than that of the risk-free asset
    The final value of the investment is higher than                                 (·)
      the issue price and in line with that of the risk-free asset
    The final value of the investment is higher than
      the issue price and higher than that of the risk-free asset

In order to overcome this issue, a simple solution is to supplement
the last column of the probability table with the mean value of the
density of the risk-free asset, to be inserted in correspondence with the
third scenario, as shown by Table 2.7.
This additional information item is very useful to retail investors
for understanding the effective positioning of the non-equity product
with respect to the risk-free asset. For instance, considering again
the five-year product of Figure 2.8, its probability table becomes the
one represented in Table 2.8.
In this example, thanks to the information about the (unconditional)
mean value of the density of the risk-free asset, investors can immediately
grasp that, even if, in 92.7% of cases, the final values of the
non-equity product lie in a range that is compatible with that of
the risk-neutral numéraire, the product performs, on average,
considerably worse than the risk-free asset.
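The supplementary figure in the third row of the table, the unconditional mean of the risk-free asset's final values, has a direct Monte Carlo estimate. The short-rate parameters below are illustrative assumptions of the same kind used earlier, not calibrated values.

```python
import math
import random

# illustrative simulation of 100*B_T over a five-year horizon
rng = random.Random(1)
T, r0, sigma = 5.0, 0.03, 0.01
b_T = [100.0 * math.exp(r0 * T + sigma * math.sqrt(T) * rng.gauss(0.0, 1.0))
       for _ in range(50_000)]

# unconditional mean: reported next to the 'in line' scenario of the table
risk_free_mean = sum(b_T) / len(b_T)
```

Comparing this single number with the product's conditional mean in the 'in line' scenario is exactly the reading of Table 2.8 described above.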
The integration of the table with this further piece of information
becomes more valuable as the probability mass of the product
that is in line with the density of the risk-free asset increases, and
as the gap between the performances of the product and those of its
numéraire widens in relation to this scenario. It follows that, when
the probability of this scenario is negligible or when, again in this
52

i i

i i
i i

minenna 2011/9/5 13:53 page 53 #85


i i

THE FIRST PILLAR: PRICE UNBUNDLING AND PROBABILISTIC SCENARIOS

Table 2.8 Table of probabilistic scenarios for the product of Figure 2.8
with the conditional means and the (unconditional) mean of the risk-free
asset

    Scenarios                                                   Probabilities (%)   Mean values (risk-free mean) (€)
    The final value of the investment is lower than                    3.5               89.8
      the issue price
    The final value of the investment is higher than                   3.1              101.4
      the issue price but lower than that of the risk-free asset
    The final value of the investment is higher than                  92.7              105.5 (114.1)
      the issue price and in line with that of the risk-free asset
    The final value of the investment is higher than                   0.7              135.0
      the issue price and higher than that of the risk-free asset

scenario, there is a substantial overlap between the two target densities,
meaning that the conditional mean of the product is close to
the (unconditional) mean of the risk-free asset, the probability table
can be set up by using the layout shown in Table 2.3 without any
loss of valuable information for the investor.
Clearly, in the degenerate case where, due to very specific choices
of financial engineering, the probability density of the product is
equal in distribution to that of an elementary product (or of the risk-free
asset itself), the use of the probability table is not necessary to
illustrate the performance risk of the investment. In fact, in these
extreme situations the product, although structured, is fully equivalent
to an elementary product and, therefore, the probabilistic scenarios
can be replaced by deterministic return indicators without
any loss of information (see Section 2.4).

2.3.4 Probabilistic scenarios for non-equity exchange structures
Often non-elementary return-target products do not represent a
direct and stand-alone financial investment, but rather they inter-
vene to modify existing non-equity products or even to replace them.
From a technical point of view, these situations can be treated as a synthetic swap between the two products: by entering into the

A QUANTITATIVE FRAMEWORK TO ASSESS THE RISK–REWARD PROFILE OF NON-EQUITY PRODUCTS

new product the investor takes a long position on it and, at the same
time, by abandoning the old product, in a certain sense, they are
taking a short position on the original investment.
A typical case of these non-equity exchange structures is found in
connection with exchange public offerings, where an issuer proposes
to investors the substitution of a new product for a product bought
at a given past date, but sometimes such situations can also arise to
satisfy an autonomous decision of an investor willing to replace a
specific product held in their portfolio of financial assets.
In general terms, it is straightforward to consider that when the
offer of an illiquid non-equity product takes place in a context that
already includes a well-identified financial alternative, this alter-
native is a much better candidate than the risk-free asset to provide
investors with a proper disclosure of the performance risk associated
with the new proposed product.11 In fact, in the aforementioned sit-
uations it is natural that a complete assessment of the ability of the
new product to create added value for investors necessarily requires
a comparison with the final payoffs achievable in the case where the
investor would decide not to switch to the new product and continue
to hold the old one.
The existence of a concrete investment alternative (ie, maintain-
ing the old product) not only rules out the need for a probabilis-
tic comparison with the risk-neutral numéraire, but it also explains
the abandonment of the superimposition technique in favour of a
methodology apt to make a direct comparison between the final
values of the new product and of the old one.
As mentioned in Sections 2.3.2 and 2.3.3, such a methodology is
the one which makes use of the trajectory-by-trajectory technique
to perform a pointwise comparison between the densities of two
alternative financial investments.
To understand the rationale behind this technique, it is first of
all worth recalling that, as mentioned in Section 2.1, both densi-
ties can be obtained by properly modelling and simulating via stan-
dard Monte Carlo approaches each product until the end of the rec-
ommended time horizon.12 Moreover, the use of the risk-neutral
measure Q implies that each trajectory of one of the two products
is unequivocally associated with a given trajectory of the instant-
aneous risk-free rate. The same obviously also holds for the other
product. Hence, by transitivity, all the

Figure 2.9 Densities of two products involved in an exchange
hypothesis: old product versus product new1

Figure 2.10 Density of the differences between the product new1 and
the old product

trajectories of the two products will also be unequivocally associated two by two.
By exploiting this unambiguous relation, it is easy to determine
the risk-neutral density of the differences between the final value of
the new non-equity product and the corresponding final value of
the old one.
Formally, denoting respectively by S_T^{new,i} and S_T^{old,i} the final values of the two alternative products associated with the generic ith trajectory out of m (m ∈ ℕ), their difference D_T^i is defined as

  D_T^i = S_T^{new,i} − S_T^{old,i},   i = 1, 2, . . . , m   (2.20)

The full set of the differences determined according to Equation 2.20, denoted by {D_T^i}_{i=1}^{m}, defines the random variable D_T which

takes values on ℝ and whose risk-neutral density is therefore immediately obtained starting from the densities of the two products to
be compared.
At this point it is therefore possible to define mutually exclu-
sive macro-events and partition consistently the risk-neutral density
of DT .
For the purposes of risk transparency, the most reasonable choice
seems to split the density into the following two macro-events:
1. the macro-event A: {D_T < 0}, which comprises all the states of the world where the new non-equity product performs worse than the old investment;
2. the complementary macro-event B: {D_T ≥ 0}, for which the opposite holds (ie, the new non-equity product performs better than or like the old investment).
Figure 2.9 depicts the risk-neutral density of two non-equity prod-
ucts called old and new1 respectively, while Figure 2.10 shows
the corresponding risk-neutral density of DT determined through
Equation 2.20.
The two macro-events A and B can be defined as
  "the final value of the new product is lower than that of the old product" (for short: "new is lower than old"),
  "the final value of the new product is higher than that of the old product" (for short: "new is higher than old"),
and their corresponding probabilities are respectively given by

  Q(D_T < 0)   and   Q(D_T ≥ 0)   (2.21)
From an operational point of view, the probabilities expressed in
Equation 2.21 can easily be obtained by processing the probability
density of DT . In fact, the probability of the first macro-event can be
calculated by counting how many times (out of m) DT is negative
and then dividing by m the result of this calculation. With regard to
the second macro-event, its probability is the complement to 1 of the
probability of the first macro-event.
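From an operational point of view, the counting procedure above reduces to simple array operations. The sketch below uses NumPy; the lognormal draws are purely illustrative placeholders for real Monte Carlo output (in practice the ith final values of both products come from the same simulation run, so they are driven by the same risk-free rate trajectory):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the m simulated final values of the two
# products; in a real application both arrays come out of the same
# risk-neutral Monte Carlo run, so the i-th entries are paired.
m = 100_000
S_T_new = rng.lognormal(mean=4.6, sigma=0.10, size=m)  # final values, new product
S_T_old = rng.lognormal(mean=4.6, sigma=0.15, size=m)  # final values, old product

# Equation 2.20: trajectory-by-trajectory differences
D_T = S_T_new - S_T_old

# Equation 2.21: macro-event probabilities obtained by simple counting
p_lower = np.mean(D_T < 0)   # "new is lower than old"
p_higher = 1.0 - p_lower     # complement: "new is higher than old"

print(f"P(new < old)  = {p_lower:.1%}")
print(f"P(new >= old) = {p_higher:.1%}")
```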
Table 2.9 summarises the two macro-events described above and
their corresponding risk-neutral probabilities determined according
to Equation 2.21.

Table 2.9 Layout of the table of probabilistic scenarios for non-equity
exchange products

  Scenarios                                        Probabilities (%)
  The final value of the new product is
  lower than that of the old product
  The final value of the new product is
  higher than that of the old product

Table 2.10 Table of probabilistic scenarios for the exchange hypothesis
corresponding to Figure 2.10

  Scenarios                                        Probabilities (%)
  The final value of the new product is                 36.3
  lower than that of the old product
  The final value of the new product is                 63.7
  higher than that of the old product

The table of probabilistic scenarios (Table 2.9) is a powerful device for giving investors a first important indication about the relative performances (and, hence, also the relative riskiness) of the new
proposed product with respect to the investment they are holding.
By using the above layout, Table 2.10 reports the probabilistic
scenarios corresponding to Figure 2.10.
From Table 2.10, the investor immediately grasps that by replacing
their non-equity product with the new one, the results that they
could achieve at the end of the recommended time horizon would
improve in 63.7% of cases.
However, the probabilities of the two scenarios described above are not, on their own, sufficient to effectively support the investor in deciding whether or not to replace the old with
the new proposed financial investment, as it does not consider the
behaviour of the probability density of DT inside each single macro-
event. In other words, the information set provided by Table 2.9
gives no indication of the extent of the under-performances of the
new product in the first scenario, or any indication about the level
of its over-performances in the second scenario.

Figure 2.11 Densities of two products involved in an exchange
hypothesis: old product versus product new2

Figure 2.12 Density of the differences between the product new2 and
the old product

To better understand this point, Figure 2.11 shows the densities of two products involved in a possible exchange. The old product is the same as in Figure 2.9, while the new product (called
new2) represents an alternative candidate to the one considered
in Figure 2.9.
The risk-neutral density of the trajectory-by-trajectory differences
associated with Figure 2.11 is given in Figure 2.12.
Despite the very different shapes of the densities of the prod-
ucts new1 and new2, and the consequent huge diversity in
the distributions of the differences (compare Figures 2.10 and 2.12),
the exchange hypothesis referring to the product new2 has the


same table of probabilistic scenarios as the hypothesis concerning the product new1.
In a similar situation, an investor having at their disposal only the probability data could conclude that both exchange proposals seem viable, but they would be indifferent in choosing between one and the other.
This example illustrates very well the need to integrate the infor-
mation on the probabilities with further synthetic indicators that
could help to provide a better and more complete disclosure of every
single hypothesis of exchange concerning a given non-equity prod-
uct and to ensure, at the same time, an objective comparison between
different hypotheses all referring to the same old product.
Reasoning by analogy with Section 2.3.3, the probability table should therefore be endowed with an additional column that, for each of
the two considered scenarios, shows synthetic indicators related to
the difference between the two products at stake.
Clearly, such synthetic indicators must be easy to understand and
meaningful for the average investor taking into account the partic-
ular context that qualifies any exchange hypothesis. In fact, in these
cases the investor is not called on to make an unconditional investment decision. On the contrary, being the holder of a pre-existing non-equity product, their assessment of the new product is necessarily
carried out in relative terms, meaning that they have to consider the
changes implied by the new investment with respect to the alterna-
tive of opting for a buy-and-hold strategy on the original product
and, hence, rejecting the new one.
Consequently, the information provided to the investor must also
be set up according to a comparative logic. This logic (which, recall-
ing the two considered scenarios, already characterises the table with
only the probabilities) can easily be extended to the additional indi-
cators concerning the final results of the new product. In this perspec-
tive a good relative indicator is the conditional mean of the random
variable DT for each of the two scenarios above.
This indicator indeed has the advantage of illustrating (albeit in a
concise and simplified way) the final values of the new product in
differential terms with respect to the original one.
Formally, denoting by f_{D_T}(·) the risk-neutral density of D_T, the conditional means (or conditional expected values) of the macro-events "new is lower than old" and "new is higher than old" are

Table 2.11 Layout of the table of probabilistic scenarios with conditional
means for non-equity exchange products

  Scenarios                                        Probabilities (%)   Mean values (€)
  The final value of the new product is
  lower than that of the old product
  The final value of the new product is
  higher than that of the old product

defined respectively as

  E^Q(D_T | D_T < 0) = (1 / Q(D_T < 0)) ∫_{−∞}^{0} x f_{D_T}(x) dx
                                                                     (2.22)
  E^Q(D_T | D_T ≥ 0) = (1 / Q(D_T ≥ 0)) ∫_{0}^{+∞} x f_{D_T}(x) dx
From an operational point of view, the conditional means expressed
in Equation 2.22 can also be obtained easily by processing the proba-
bility density of DT . At the end of these quantitative calculations, the
table of probabilistic scenarios can be filled by using the standard
template shown in Table 2.11.
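These calculations can be sketched as follows; the normal draws below are an illustrative placeholder for the simulated differences D_T of Equation 2.20 (in practice they come from the paired Monte Carlo trajectories of the two products):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical simulated differences D_T (placeholder for Monte Carlo output)
D_T = rng.normal(loc=2.0, scale=8.0, size=100_000)

lower = D_T < 0  # macro-event "new is lower than old"

# Equation 2.22: conditional means, estimated as sample averages
# restricted to each macro-event
mean_lower = D_T[lower].mean()    # E^Q(D_T | D_T < 0)
mean_higher = D_T[~lower].mean()  # E^Q(D_T | D_T >= 0)
prob_lower = lower.mean()

# The pair probability-conditional mean, as in the layout of Table 2.11
print(f"{'new lower than old':25s} {prob_lower:7.1%} {mean_lower:8.2f}")
print(f"{'new higher than old':25s} {1 - prob_lower:7.1%} {mean_higher:8.2f}")
```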
For each scenario, the indication of the pair probability–conditional mean provides essential information for investors to understand whether the new product presents a more appealing risk–return profile than the old one. Furthermore, through this information set, the investor is effectively supported in the selection between
alternative exchange hypotheses.
Taking the two cases considered in Figures 2.9 and 2.11, respec-
tively, the insertion of the data about the conditional means helps the
identification of the more efficient solution. In fact, by comparing the
probability tables of the two alternatives (see Tables 2.12 and 2.13),
it is evident that the second exchange hypothesis outperforms the
first one.
Another key property of the table of probabilistic scenarios set up
according to the layout shown in Table 2.11 is its strong connection with the initial theoretical value of the synthetic swap associated with any given non-equity product proposed for an exchange.
Indeed, a good proxy of such theoretical value can be obtained by
discounting back to the exchange date the conditional mean of each


Table 2.12 Table of probabilistic scenarios with conditional means for
the exchange of the old product with the product new1

  Scenarios                                        Probabilities (%)   Mean values (€)
  The final value of the new product is                 36.3              −13.1
  lower than that of the old product
  The final value of the new product is                 63.7                7.0
  higher than that of the old product

Table 2.13 Table of probabilistic scenarios with conditional means for
the exchange of the old product with the product new2

  Scenarios                                        Probabilities (%)   Mean values (€)
  The final value of the new product is                 36.3              −10.3
  lower than that of the old product
  The final value of the new product is                 63.7               16.0
  higher than that of the old product

scenario and then averaging these discounted values according to the probabilities displayed in the table. If the number resulting from
these calculations is positive (negative), it means that, on average,
the new product will create (destroy) value for the investor with
respect to the old product.13
Applying this rule of thumb to the two alternatives summarised in Tables 2.12 and 2.13, respectively, gives an approximate theoretical value of −0.22 for the synthetic swap implicit in the first alternative and a theoretical value of around 5.71 for the synthetic swap identified by the second exchange hypothesis, hence confirming the message conveyed by the corresponding probability tables.
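The rule of thumb can be sketched as below. The discount factor is an illustrative assumption (in a real application it would be derived from the simulated rate paths over the recommended time horizon), so the sketch reproduces only the sign of the figures quoted above, not their exact magnitudes:

```python
def swap_value_proxy(probs, cond_means, discount_factor):
    """Probability-weighted average of the discounted conditional means:
    a rough proxy of the synthetic swap's theoretical value at the
    exchange date."""
    return sum(p * discount_factor * mu for p, mu in zip(probs, cond_means))

# Illustrative flat discount factor over the time horizon (assumption)
df = 0.95

# Probabilities and conditional means from Tables 2.12 and 2.13
v1 = swap_value_proxy([0.363, 0.637], [-13.1, 7.0], df)   # old vs new1
v2 = swap_value_proxy([0.363, 0.637], [-10.3, 16.0], df)  # old vs new2

print(f"old vs new1: {v1:+.2f}")  # negative: the exchange destroys value on average
print(f"old vs new2: {v2:+.2f}")  # positive: the exchange creates value on average
```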
It is important to observe that the table of probabilistic scenar-
ios considered so far, by splitting the risk-neutral density of DT into
only two events, realises a significant reduction in granularity with
respect to the total amount of information embedded in the entire
density of this random variable. On the one hand, this reduction in
granularity is helpful to provide investors with a clear and concise
snapshot of the performance risk of a given non-equity exchange
structure. On the other hand, it obviously leads to the loss of some


Figure 2.13 Densities of two products involved in an exchange
hypothesis: old product versus product new3

information related to the specific shape of the probability distribution of the differences between the final values of the two products
involved in the comparison. This information loss is not relevant
if the data inside the probability table are aligned in providing an
unambiguous indication regarding a specific exchange hypothesis.
However, it must be considered that in some cases the information retained in the table is not sufficient to effectively help investors in making their decisions.
A particularly emblematic case occurs when, for the exchange of
a given old product, there are two or more competing products that
are very similar or even identical both in terms of probabilities and
conditional means even though the corresponding densities are com-
pletely different. For instance, coming back to the examples seen so
far, it is perfectly possible that an investor is facing two alternative
exchange structures that seem absolutely equivalent. The first alter-
native is the one depicted in Figure 2.9 (ie, old product versus the
product new1), while the second alternative, which is represented
in Figure 2.13, proposes to substitute the same old product with a
product, new3, which is different from both the product new1 and
the product new2.
The risk-neutral density of the trajectory-by-trajectory differences
associated with Figure 2.13 is given in Figure 2.14.
The visual comparison between the distributions of the differ-
ences (see Figures 2.10 and 2.14) shows the greater variability and
the strong leptokurtosis which characterise the latter density (ie,


Figure 2.14 Density of the differences between the product new3 and
the old product

based on the product new3). However, despite this heterogeneity, the probability table associated with the product new3 is identical
to that referring to the product new1. Consequently, in this case
the probability table misses its target of conveying concretely useful
information to investors for their investment choices.
More generally, when the density of DT is very irregular (eg, the
presence of fat tails, positive or negative skewness, high volatility),
knowledge of the probabilities and the conditional means of the
two scenarios might not be enough to make a proper comparison
between the old and the new product as well as between different
exchange possibilities.
In all these situations it is reasonable to attempt to overcome the
lack of more detailed information and also the possible decisional
stalemate of investors by enriching the basic dataset contained in the
probability table with some additional indicators. These indicators
should be able to extract relevant information about the peculiar
shape of the density of the differences and to highlight what could
happen in extreme states of the world, hence answering the investors' unresolved questions and guiding them towards an informed decision.
As is well known in the literature, typical statistical metrics used to
investigate the behaviour of the tails of a probability distribution are
determined by identifying on the x-axis the quantile which cumu-
lates a fixed probability mass. Then, the metric of interest can be the
quantile itself or the expected value of the density conditioned on


Table 2.14 Layout of the conditional values on the tails for a non-equity
exchange product

  Expected loss from the exchange
  under extreme unfavourable conditions
  Expected gain from the exchange
  under extreme favourable conditions

the region of the x-axis which ends at that quantile (left tail) or starts
at it (right tail). The latter indicator seems preferable since, unlike
the sole quantile, it is sensitive to the specific shape of the tail, and
hence conveys more valuable information to investors.
The quantile associated with the probability mass P_1 cumulated on the left tail of the density of D_T is denoted by α_1, and the quantile associated with the probability mass (1 − P_2) cumulated on the right tail of the same density is denoted by α_2, ie

  Q[D_T ≤ α_1] = P_1
                             (2.23)
  Q[D_T ≥ α_2] = 1 − P_2
In this work, P_1 is set at 2.5% and P_2 is equal to (1 − P_1).14 Recalling that f_{D_T}(·) denotes the risk-neutral density of D_T, the sought conditional expected values (so-called conditional values on the tails) are defined as15

  E^Q[D_T | D_T ≤ α_1] = (1 / 2.5%) ∫_{−∞}^{α_1} x f_{D_T}(x) dx
                                                                     (2.24)
  E^Q[D_T | D_T ≥ α_2] = (1 / 2.5%) ∫_{α_2}^{+∞} x f_{D_T}(x) dx
Like the conditional means of the two scenarios of the probability
table, the conditional values on the tails expressed in Equation 2.24
can also easily be obtained by simple calculations based on the sim-
ulated probability density of DT . After these calculations, the two
additional indicators can be represented using the layout shown in
Table 2.14.
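A sketch of these tail calculations on a simulated sample of D_T (again, the normal draws stand in for real Monte Carlo output):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical simulated differences D_T (placeholder for Monte Carlo output)
D_T = rng.normal(loc=2.0, scale=8.0, size=100_000)

P1 = 0.025                        # probability mass cumulated on each tail
a1 = np.quantile(D_T, P1)         # alpha_1: left-tail quantile (Equation 2.23)
a2 = np.quantile(D_T, 1.0 - P1)   # alpha_2: right-tail quantile

# Equation 2.24: expected values conditioned on each tail
expected_loss = D_T[D_T <= a1].mean()  # E^Q[D_T | D_T <= alpha_1]
expected_gain = D_T[D_T >= a2].mean()  # E^Q[D_T | D_T >= alpha_2]

print(f"Expected loss under extreme unfavourable conditions: {expected_loss:.1f}")
print(f"Expected gain under extreme favourable conditions:   {expected_gain:.1f}")
```

Unlike the sole quantile, each conditional value averages over the whole tail beyond it, so it reacts to fat tails and skewness.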
Evaluating the conditional values on the tails for the densities
shown in Figures 2.10 and 2.14, respectively, yields the numbers
reported in Tables 2.15 and 2.16.
By comparing these two tables any investor can easily understand
that, probabilities and conditional means being equal, by exchanging


Table 2.15 Conditional values on the tails for the exchange hypothesis
old versus new1

  Expected loss from the exchange                 −22.8
  under extreme unfavourable conditions
  Expected gain from the exchange                   7.5
  under extreme favourable conditions

Table 2.16 Conditional values on the tails for the exchange hypothesis
old versus new3

  Expected loss from the exchange                 −40.4
  under extreme unfavourable conditions
  Expected gain from the exchange                  23.4
  under extreme favourable conditions

the old product with the product new1 they are moving to a position
which reduces the overall riskiness of the performances achievable
at the end of the time horizon more than if they were to opt for
swapping their original investment with the product new3.
The methodology exposed in this section can also be extended
with minor adaptations to situations where the non-equity financial
structure is embedded into a financial liability (such as a mortgage) held by a retail entity, such as a small or medium-sized firm, a municipality or a similar body, with the aim of mitigating the risks and reducing the costs of that liability.16 The persistent validity and
usefulness of this approach are ensured by simply considering that,
except for the sign of the cashflows (which is in fact negative), struc-
tured liabilities feature a financial engineering that mirrors that of
non-equity investment products, and, exactly like investment prod-
ucts, they typically lead to a specific risk exposure or to the modifi-
cation of an outstanding exposure, also through the embedding of
derivative-like components.
One possible example of interest could be that of a retail individ-
ual who had previously signed a fixed-rate mortgage, and in the face
of a downward movement in the interest rate curve is now looking
for a financial contract (most likely an interest rate swap) that allows


them to benefit from the new market conditions by switching from fixed to floating rates. In this example the original mortgage represents the old liability, while the combination of the old liability
with a specific interest rate swap proposed by a given counterpart
identifies the new structured liability. Then, in order to understand
whether or not the swap proposed is suitable for their needs, this
individual could usefully refer to a probability table built accord-
ing to the trajectory-by-trajectory technique like that of Table 2.11,
possibly supplemented by information on the conditional values
on the tails.
Finally, it is worth noting the feasibility of additionally extending
the trajectory-by-trajectory technique to non-equity products that
represent a completely new and independent investment for the
potential subscribers.
In this regard, it must be noted that, although from a computa-
tional point of view this alternative methodology is quite easy to
implement, it is not, however, able to produce a message as impres-
sive and immediate as the one provided by the probability table
determined in accordance with the superimposition procedure.
More precisely, the trajectory-by-trajectory technique implies
there would be a huge loss of meaningfulness in the last column
of the table (Table 2.3). This is due to the fact that a conditional
mean calculated consistently with this approach would inevitably
mix together final values of the product which are located in very
different regions of the x-axis and whose only common point is the
fact that they all have the same relative position with respect to the
corresponding trajectories of the risk-free asset. As a consequence,
the (conditional) means appearing in the last column of the table
typically would no longer be sorted in ascending order, hence introducing appreciable complexity which will not aid investors' comprehension. Moreover, this inconvenience cannot be fixed even by
using (as was done in the present section) synthetic relative indi-
cators such as the conditional mean of the differences between the
final value of the product and that of the risk-free asset determined
for each single macro-event. In fact, such relative indicators would restore an ascending order, but their relative
nature and the comparison with the risk-free asset (which is not
included in the portfolio of retail investors) make them very unfa-
miliar to the average customer, who is naturally inclined to assess


the performances of a product using an absolute reference, namely the issue price.

2.4 FIRST PILLAR AND ELEMENTARY PRODUCTS


The probabilistic scenarios are a powerful disclosure tool to improve
the comprehension of the true risk–return profile of non-equity
investments when facing non-elementary return-target products
with a sophisticated financial engineering or with significant implicit
exposures to specific sources of uncertainty, like credit risk, or both.
In these cases, structuring elements interact with the material risks
of the investment strongly affecting the risk-neutral density of the
product in terms both of possible payoffs and of concentration of the
probability mass in specific regions of returns.
For such products, the table of probabilistic scenarios is an indispensable informative supplement to the unbundling between fair value and total costs reported inside the financial investment table, due to the aforementioned validity "on average" of the figures in this
table. In fact, probabilistic scenarios avoid the risk that investors
carry out their own assessment of profitability by simply projecting
until maturity the fair value reviewed to take account of the time value of money but without properly weighing the riskiness of
the product.
However, it has to be considered that, alongside complex investments, a relevant portion of the primary market of non-equity products for retail investors corresponds to genuinely simple financial engineering solutions (with substantially linear payoff profiles) that are usually relatively safe.
Explicit derivatives are absent, cashflow structures are plain
vanilla and the credit risk exposure is negligible. These conditions
immediately imply that, in the universe of return-target products,
only bonds and other debt securities may be classified as elementary
products.
In fact, other non-equity products such as covered warrants, cer-
tificates, index-linked policies, CPPIs and OPBIs are clearly non-
elementary, being either purely stand-alone derivatives or sophisti-
cated investment solutions.
On the other hand, many bonds have very simple cashflow pro-
files (like zero-coupon notes or notes with flat fixed coupons or with


floating coupons indexed to risk-free rates, maybe increased by a spread constant over time) and a limited credit risk.
The risk-neutral density of these products shows just a slight
bimodality, depending on the occurrence (or not) of credit events
relating to the issuer, while most of the probability distributions
usually exhibit a few standard shapes which can be affected more or
less significantly by the volatility surface of the interest rates curve
depending on the presence and the nature of the coupon payments.17
No complications arise from the term structure of the volatilities of
other financial assets or reference values, as happens with securities
containing derivative components, which determine some market
risk exposure or prominent credit risk.
The above arguments suggest that, for elementary products, the
average investor's rule of thumb about a somewhat linear relation between fair value and potential performances works well enough.
Hence, it is legitimate to adopt a simplified information set with
regard to the price unbundling and to the final performances of the
product, provided that the simplification does not entail any reduc-
tion in either the significance or the completeness of the message
conveyed to investors.
In particular, having ruled out explicit derivatives and given the
marginality of the credit derivative embedded in elementary prod-
ucts, it is not necessary to increase the informative detail of the finan-
cial investment table by splitting the fair value into risk-free and the
risky components. Indeed, for unsophisticated structures, the for-
mer component tends to coincide with the overall fair value of the
investment, as also emerges by applying Definition 2.5.
At the same time, as mentioned above, the risk-neutral density of
elementary products presents only a limited leptokurtosis due to a
small left tail reflecting the performance that would be realised in
the case of (quite unlikely) credit events. Most of the distribution
is concentrated in a well-defined region of positive returns with a
degree of dispersion that depends on the characteristics of the bond
concerned (zero-coupon, fixed or floating coupon).
This concentration comes from the event of capital redemption
at maturity (unless amortising features are provided) and on the
coupons paid, if any.
As in all bonds (even the most complex), capital redemption deeply
affects the risk-neutral density because of the so-called Dirac delta


effect: a final value equal to the notional capital is shared by all trajectories which do not experience credit events.
If the product has an elementary engineering, in the aforemen-
tioned meaning of zero-coupon, fixed or floating coupon bonds, this
effect becomes particularly evident.
In addition, when constant fixed coupons (ie, not step-up or step-
down) or floating coupons are also provided, if no default event
occurs, the final values of investment are equal to the notional capital
plus the periodic payments made over the investment time horizon,
including the expiry date. Fixed coupons paid before maturity are
compounded until time T at the stochastic risk-free rate, hence caus-
ing a limited dispersion of the said final values in an area slightly
above the notional capital; while in the case of floating coupons (per-
haps also comprising a constant spread over the risk-free rate) the
dispersion is bigger since the variability of the interest rates curve
counts not only for the compounding of the payments but also for
their size.
Apart from these differences of dispersion, the density functions
of elementary bonds are fairly regular and, consequently, traditional
deterministic return indicators are a good proxy for future perfor-
mances and they can be used as a sound and economic alternative
to the probabilistic scenarios.
Provided that credit risk is negligible, the larger portion of the
probability mass lies on a region of positive payoffs with a relatively
small dispersion due to the predominant Dirac delta effect. It fol-
lows that for elementary bonds the mean value of the risk-neutral
density is strongly informative of the possible payoffs attainable at
maturity. Moreover, deterministic performance indicators based on
rough market data, like the internal rate of return, are very close to
the mean value of the density function, since the implicit assump-
tion of no credit events behind their calculation is close to the real
features of the product.18
Therefore, these deterministic indicators can replace the mean
value of the risk-neutral distribution of the bond's final payoffs with-
out significant loss of information and they can also be compared
with the average risk-free rate expressed by the market over the
same maturity of the bond to assess the fairness of its price and its
profitability. If the internal rate of return is above the risk-free rate
referring to time T, it signals that, in order to compensate investors

A QUANTITATIVE FRAMEWORK TO ASSESS THE RISK-REWARD PROFILE OF NON-EQUITY PRODUCTS

Figure 2.15 Density of a fixed-coupon bond with a negligible credit risk exposure (IRR = 3.06%; average annual return = 2.68%)

for taking the credit risk of the issuer, the bond is paying a spread: if
this spread is at least in line with the one required by the market, then
the fair value will be no lower than the price. By contrast, an internal
rate of return too low in absolute terms (or, in any case, equal to or
below the risk-free rate) means that there is a gap between price and
fair value which identifies the implicit costs incurred by investors.
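This comparison can be sketched in a few lines. The following is a minimal illustration with a hypothetical five-year par bond (the 3.06 coupon and the 2.68% average risk-free rate are assumed figures), not an implementation of the book's pricing framework:

```python
# A minimal sketch (hypothetical bond, not the book's pricing framework):
# the internal rate of return of a fixed-coupon bond, found by bisection,
# compared with the average risk-free rate over the same maturity.

def irr(price, cashflows, lo=-0.5, hi=1.0):
    """Annual rate y such that the discounted cashflows match the price."""
    def pv_gap(y):
        return sum(cf / (1.0 + y) ** t for t, cf in cashflows) - price
    for _ in range(100):                      # bisection on [lo, hi]
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if pv_gap(lo) * pv_gap(mid) <= 0 else (mid, hi)
    return 0.5 * (lo + hi)

# five-year bond at par: annual coupon 3.06, redemption of 100 at maturity
cashflows = [(t, 3.06) for t in range(1, 5)] + [(5, 103.06)]
y = irr(100.0, cashflows)                     # = 3.06% for a par bond
risk_free = 0.0268                            # assumed average 5y risk-free rate

spread = y - risk_free                        # reward for bearing credit risk
costly = spread <= 0                          # would flag implicit costs
```

For a par bond the internal rate of return coincides with the coupon rate, so the sketch recovers a spread of roughly 38 basis points over the risk-free rate; a non-positive spread would instead flag implicit costs.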
Example 2.6 clarifies these concepts with the support of graphical
illustrations also confirming the validity of simplified return indi-
cators to highlight the performance risk of elementary products as
an alternative to the table of probabilistic scenarios.
Example 2.6. Figures 2.15 and 2.16 show the risk-neutral densities of
two five-year bonds, the former is a flat fixed-coupon bond and the
latter is a floating-coupon bond plus a constant spread. Both bonds
have an average annual credit spread around 40 basis points and
pay an equivalent extra-return over the risk-free rate; consequently,
their fair value is 100: no implicit costs are charged to investors and
the average annualised return is around 2.68%, ie, the same as the
risk-free rate referring to the five-year maturity. The internal rate of
return is higher and equal to around 3.06%, signalling that investors
are rewarded for entering products exposed to the credit risk of the
issuer. However, the two numbers (ie, 2.68% and 3.06%) are similar,
indicating that, the left-side mode being very small, the internal rate
of return can be used as a suitable return indicator. The fixed-coupon
bond exhibits a lower dispersion around the mean value, while the

Figure 2.16 Density of a floating-coupon bond with a negligible credit risk exposure (average annual return = 2.68%; IRR = 3.06%)

floating one has a wider range of possible returns due to the impact
of the volatility of the interest rates on the level of the coupons and
on their compounding until maturity.
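The concentration described in Example 2.6 can be reproduced with a crude Monte Carlo sketch; the default probability, recovery rate and rate dynamics below are invented for illustration and are not the book's simulation engine:

```python
# Crude Monte Carlo sketch (invented parameters) of the risk-neutral density
# of a fixed-coupon bond's final values: surviving paths cluster just above
# the notional (the Dirac delta effect), a rare default feeds a small left tail.
import random

random.seed(7)
NOTIONAL, COUPON, YEARS = 100.0, 3.0, 5
ANNUAL_PD, RECOVERY = 0.004, 0.40            # hypothetical credit parameters

def final_value():
    for year in range(1, YEARS + 1):
        if random.random() < ANNUAL_PD:      # credit event: recovery plus
            return NOTIONAL * RECOVERY + COUPON * (year - 1)  # coupons so far
    # survival: notional plus coupons compounded at a noisy short rate
    rate = random.gauss(0.0268, 0.004)
    return NOTIONAL + sum(COUPON * (1 + rate) ** (YEARS - t)
                          for t in range(1, YEARS + 1))

values = [final_value() for _ in range(50_000)]
default_share = sum(v < NOTIONAL for v in values) / len(values)
spike_share = sum(NOTIONAL < v < NOTIONAL + 25 for v in values) / len(values)
```

With these assumptions roughly 98% of the simulated paths land in a narrow band just above the notional, with a thin left tail near the recovery value.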

Before concluding this section it is worth making some observations
on the requirements, described above, that a product must satisfy in
order to be considered elementary.
These requirements regard the simplicity of the financial engineer-
ing (plain-vanilla structures) and the negligibility of the credit risk
and are aimed at keeping under control the two main risk drivers
which characterise any kind of bond: the volatility of the interest
rates and the possibility that the issuer will not be able to live up to
their commitments towards the investors.
Simplicity of the bond structure ensures that the exposure to inter-
est rate risk is well known and not deformed by the insertion of
interest rate derivatives. Negligibility of the credit risk means that
bondholders are very unlikely to experience losses or, equivalently,
to get negative returns.
Simplicity has already been fully qualified by excluding any
bond which is not zero-coupon, fixed-coupon constant over time
or floating-rate coupon plus a possible spread also constant over
time.
Negligibility of the credit risk deserves more careful assessment.
In this regard a first important indicator is the presence of any

subordination clause pending on coupon payments and on capital
redemption at maturity or according to the rules set in the
amortising plan. Subordinated securities definitely include a non-
negligible credit derivative, especially because of the higher losses
suffered in the occurrence of a credit event, and it is self-evident that
they cannot be classified as elementary products.
In addition to subordinated bonds, senior ones must also be suit-
ably scrutinised. Loss severity is only a part of what is generally
qualified as credit risk. A complete assessment of the magnitude of
this risk factor clearly also includes the likelihood of a credit event
materialising, and this likelihood is quite heterogeneous across dif-
ferent issuers and issues. For instance, the risk-neutral density of
the potential performances associated with a senior note issued by
a speculative-grade entity will surely exhibit a strong bimodality,
which invalidates the above arguments in favour of a simplified
information set and imposes the table of probabilistic scenarios as the
only viable disclosure tool for a meaningful synthetic representation
of the possible final payoff of the investment.
A suitable approach to assessing whether credit risk is effectively
marginal is to look at the curve of the credit spreads of the issuer
until the maturity T of the bond, since they represent the reward
over the risk-free rate demanded by the market to bear the credit
risk of the issuer and, therefore, are an immediate indicator of their
creditworthiness.
Credit spreads are available from several market indicators, such
as quotes and volatilities of credit default swaps and of asset swaps
referring to the same issuer as the bond and for a period equal to
the time horizon of the product, as well as implicit spreads (or dis-
count margins) on securities issued in the same period by the same
subject or by comparable entities and ratings.19 In order to exclude
sudden and spurious changes from the credit risk assessment, these
indicators should not refer to a specific point in time; rather, their
track record should be studied on a recent time interval long enough
to capture the trend of the issuer's credit standing and should take
into account both the idiosyncratic and the systematic components
of its variability.
Once credit spreads are known, their annual average calculated
until the maturity of the analysed security is a good indicator of
whether or not the underlying credit risk is material. The maturity of

the bond is a key ingredient of this reasoning, since, generally, cumulated
default probability increases over time and, consequently, the
probability of getting negative returns also increases over time (at
least for senior unsecured bonds with bullet redemption). In other
words, the same annual average credit spread can be considered to
be negligible over the short term but it can translate into a prominent
exposure over medium-to-long time horizons. In general terms, it is
the pair given by the annual average credit spread and the maturity
which affects the density function of the final payoffs; hence, even
for unsophisticated bonds there is a sort of threshold of these vari-
ables above which the said density ceases to be regular and exhibits
a marked bimodality, meaning that credit risk is no longer negligi-
ble and an enhanced disclosure through the table of probabilistic
scenarios is required. Clearly, this threshold is to be interpreted as
more or less binding depending on the volatility of credit spreads; in
particular, the higher the volatility, the less significant, in statistical
terms, the information on the annual average credit spread, which
consequently calls for a more detailed investigation on the credit
risk of the issuer.
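Whether a given annual average spread is material therefore depends on the maturity as well. The book does not reduce this to a formula, but the standard "credit triangle" approximation gives the flavour (the 40% recovery rate below is an assumption):

```python
# Hedged sketch: under the standard credit-triangle approximation (not a
# formula from the book), a flat annual spread s and loss given default
# 1 - R imply a hazard rate s / (1 - R), so the cumulative default
# probability grows with the maturity T.
import math

def cumulative_pd(annual_spread, maturity, recovery=0.40):
    hazard = annual_spread / (1.0 - recovery)   # constant default intensity
    return 1.0 - math.exp(-hazard * maturity)

# the same 40bp average spread over two very different horizons
pd_short = cumulative_pd(0.004, 2)     # ~1.3%
pd_long = cumulative_pd(0.004, 15)     # ~9.5%
```

Under this approximation the same 40bp spread implies a cumulative default probability of about 1.3% over two years but almost 10% over fifteen, which is the sense in which the pair (spread, maturity), rather than the spread alone, drives the shape of the density.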

2.5 CLOSING REMARKS


The first pillar of the risk-based approach aims at providing a valid
methodology for quantifying and representing the riskiness of any
non-equity product at two different and equally significant points in
time: the issue date and the final date. In fact, starting from the well-
known theory of the contingent claim evaluation under the risk-
neutral measure, this pillar performs the price discovery not only
at the initial time when the product is offered, but also at maturity.
The initial and final date of the period spanned by the third pillar
of the risk-based approach, namely the recommended investment
time horizon, become therefore essential temporal references for a
proper assessment of the performance risk of the product.
With regard to the issue date, the key information is provided by
the financial investment table. As seen in Section 2.2, the unbundling
of the issue price reported in this table reveals to the investors the fair
value of the product and the total costs they should incur over the
life of the investment. If necessary, as in the case of non-elementary
products pursuing some target return, the financial investment table
also gives the details of the risky and the risk-free components of the

fair value. As explained in Section 2.3.1, this additional information
results from an original application of the portfolio replication prin-
ciple and has the advantage of clarifying to the investor how much
of the product is similar to a risk-free security and how valuable the
inherent risks of the financial investment are instead.
Moving to the end date of the recommended time horizon, the
other fundamental indicator provided by the first pillar is the table
of probabilistic scenarios. Risk is uncertainty about a future event.
Consequently, probability is the only quantitative tool able to cap-
ture and to gauge the risk associated with the performances of a
non-equity product at its expiry date. By working on this corner-
stone, Section 2.3.2 introduces the table of probabilistic scenarios,
which takes the risk-neutral density of the final values of the prod-
uct and extracts from it synthetic and essential information about
the performance risk of the investment. All this is made possible
by choosing suitable partition techniques to reduce the granular-
ity of the entire risk-neutral density of the product at maturity.
When the product has to be sold autonomously (ie, not as a sub-
stitute for a pre-existing investment), the partition technique used
to build the probability table requires, as described in Section 2.3.3,
the superimposition with the density of the risk-neutral numéraire
(namely a simple but meaningful risk-free cash account process).
The result is a table of few rows and columns that discloses the
probabilities of performing better than, worse than or similarly to
the numéraire chosen and of losing a more or less relevant part of the
price paid at the issue date. On the other hand, for non-equity struc-
tures involved in some exchange hypothesis (eg, exchange public
offerings, liabilities restructuring, etc), the trajectory-by-trajectory
technique illustrated in Section 2.3.4 makes a pointwise comparison
of the two alternative opportunities and leads to displaying the prob-
abilities and the extent to which the new product will beat the old
one and vice versa.
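The superimposition technique summarised above can be caricatured in a few lines; the distributions, the 95% band defining "in line with the numéraire" and all numbers below are illustrative assumptions, not the book's calibrated partition rules:

```python
# Stylised sketch of the superimposition partition (illustrative numbers and
# an assumed 95% band; not the book's calibrated rules): final values of the
# product are set against the distribution of a risk-free numeraire to
# estimate the four scenario probabilities.
import random

random.seed(1)
ISSUE_PRICE = 100.0
numeraire = [100.0 * (1.028 + random.gauss(0, 0.004)) ** 5 for _ in range(20_000)]
product = [100.0 * (1.030 + random.gauss(0, 0.015)) ** 5 for _ in range(20_000)]

s = sorted(numeraire)                  # band of "normal" numeraire outcomes
lo, hi = s[int(0.025 * len(s))], s[int(0.975 * len(s))]

n = len(product)
table = {
    "loss (below the issue price)":      sum(v < ISSUE_PRICE for v in product) / n,
    "positive but below the numeraire":  sum(ISSUE_PRICE <= v < lo for v in product) / n,
    "in line with the numeraire":        sum(lo <= v <= hi for v in product) / n,
    "above the numeraire":               sum(v > hi for v in product) / n,
}
```

The four probabilities sum to one by construction; with the assumed distributions most of the mass ends up in or above the numéraire band.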
As explained in Section 2.4, in the case of elementary return-target
products featuring unsophisticated financial structures and negligi-
ble credit risk exposures, the assessment of the performance risk
associated with the payoffs achievable at maturity is not particu-
larly difficult for the average investor. This fact, combined with the
simple and regular shape assumed by the risk-neutral density of
these products, indicates that the disclosure of the performance risk

can be obtained by replacing the probability table with deterministic
return indicators, like the internal rate of return.
After having evaluated the impact of the risks of non-equity prod-
ucts at the beginning and at the end of the recommended investment
time horizon, the next chapter will address, as the second pillar of the
risk-based approach (namely, the degree of risk), the issue of robust
measurement and representation of the riskiness of these products
throughout the entire period identified by the recommended time
horizon. For this reason the focus will be on the variability of the
daily potential performance of the product, and this will be analysed
by using a well-known and simple metric of risk: volatility.

1 CPPI and OBPI stand for Constant Proportion Portfolio Insurance and Option-Based Portfolio
Insurance, respectively; they are management techniques specifically aimed at protecting a
certain percentage of the value of the financial investment by combining low-risk assets (which
play the main insurance role) with risky assets (which are used to pursue extra-returns).
2 It is worth noting that at time t = 0 Equation 2.2 assumes an investment of unit value in the
risk-free asset.
3 Similar services affect the risk-return profile of the investment and its costs regime and, as
explained in Chapter 4, allow for the identification of a minimum time horizon besides the
one which is implicit in the product.
4 Clearly, if the non-equity products provide amortising solutions of capital redemption
Equation 2.4 must be properly rearranged.
5 For details about this technique see Section 2.3.4, which explains the core of the methodology
to carry out a proper probabilistic comparison between two non-equity products involved in
an exchange.
6 The concept of the risk-free asset is close to that of the risk-free security of Definition 2.1,
Section 2.3.1. However, the former does not inherit the schedule of the cashflows of the non-
equity product. This is due to its role of numéraire which, in the perspective of an objective
probabilistic comparison, requires it to be the same for any non-equity return-target product.
7 This event is of great importance in determining the minimum recommended investment
time horizon. See Chapter 4.
8 Other characterisations that are in line with the principles described may be explored.

9 It is worth noting that if one knows that S_T cannot take values lower (higher) than a given
finite value, then the lower (upper) extremum of the integral appearing in the right-hand side
of the first (last) equation of Expression 2.19 has to be determined by replacing −∞ (+∞) with
such a finite value.
10 Clearly, the goodness of this approximation tends to diminish the longer the recommended
time horizon of the product. In fact, when the four mean values reported in the probability
table are referred to quite long maturities, their discounting through a deterministic interest
rates curve tends to lose too much information about the variability associated with the interest
rates curve itself. In similar cases, a more reliable estimate of the fair value of the product is
obtained by discounting back at the issue date any possible final value of the investment over
the corresponding path of the risk-free rate.
11 The same kind of disclosure is also useful in the case where the new non-equity product is
liquid, provided that the investor is interested in an exchange-and-hold strategy.
12 If the two products have different implicit time horizons T1 and T2 with T1 < T2 , the longer
time horizon prevails and the final values at time T1 of the shorter product are compounded
until time T2 at the risk-free rates.

13 Clearly, the goodness of this approximation tends to diminish the longer the final maturity of
the two products involved. In fact, when the two mean values reported in the probability table
are referred to quite long time horizons, their discounting through a deterministic interest rates
curve tends to lose too much information about the variability associated with the interest
rates curve itself. In similar cases, a more reliable estimate of the initial theoretical value of
the synthetic swap associated with a specific exchange hypothesis is obtained by discounting
back at the exchange date the final value of any possible difference (ie, D_T^i, i = 1, 2, . . . , m)
over the corresponding path of the risk-free rate.
14 Other characterisations that are in line with the principles described may be explored.

15 It is worth noting that if it is known that D_T cannot take values lower (higher) than a given
finite value, then the lower (upper) extremum of the integral appearing in the right-hand side
of the first (last) equation of Expression 2.24 has to be determined by replacing −∞ (+∞) with
such a finite value.
16 Clearly, in these cases the descriptions of the events considered in the table of probabilistic
scenarios (see Table 2.11) and in the table of the conditional values on the tails (see Table 2.14)
need to be properly revised. For an example see Section 5.6.
17 The other element which can affect the shape of the risk-neutral density is the presence of
predetermined amortising plans, which typically increase the dispersion of the final payoffs distribution.
18 Typically, the internal rate of return is slightly above this mean value exactly because this
implicit assumption does not correspond perfectly to reality.
19 It is worth noting that ratings alone can be weak information to rely on. The reason is that
ratings are often based on very long historical time series, a feature that causes an inherent
inertia of the ratings. As a consequence, the deriving credit risk measurements can be poorly
related with the actual financial conditions of the issuer at a time close to the issue date.

The Second Pillar: Degree of Risk

The second pillar of the risk-based approach is the degree of risk
that expresses a synthetic indication of the overall riskiness of the
product over the entire recommended investment time horizon.1
The degree of risk completes the information on the risk pro-
file of non-equity products provided by the first pillar (the latter
being focused only at the beginning and at the expiry of the recom-
mended time horizon) and its determination requires an analysis
of the variability of the same trajectories of the product's potential
performances used to make the initial pricing (inside the financial
investment table) and the pricing at maturity (inside the table of
probabilistic scenarios).
In order to allow an immediate reading and a clear understanding
of this indicator by retail investors, the quantitative analysis behind
its calculation must yield a suitable representation in qualitative
terms.
The solution adopted in this work is to allow the degree of risk
to take values inside a set of n risk classes that are denoted by
clearly comprehensible adjectives and sorted in increasing levels of
riskiness on the basis of the results of this quantitative analysis.
In this context, this chapter describes a methodology to determine
the degree of risk of non-equity products at the issue date and to
constantly monitor the validity of the information conveyed by this
indicator during the recommended investment time horizon. This
methodology is also used to detect the occurrence of migrations,
meaning transitions from one risk class to another, and, hence, to
promptly identify the situations in which it is necessary to update
the information given to investors.
Once a risk metric is chosen, this methodology leads to the defini-
tion of a grid of consecutive intervals of the metric adopted that satis-
fies some optimality requirements on the number of the risk classes
and on the extremes of each interval. Such a grid is valid for any

non-equity product and, therefore, can be used in a straightforward
manner by every issuer.
The degree of risk of the product is initially identified through the
choice of the risk class that the issuer deems to best match the specific
features of the financial engineering (and, if necessary, of the invest-
ment policy) of the product over the recommended time horizon.
During this horizon the issuer uses suitable metrics of the variabil-
ity of potential returns, defined consistently with their proprietary
risk measurement and management models, to monitor any possi-
ble migration of the degree of risk to a different risk class in order
to be able to update the information provided by this indicator, and,
perhaps, also by the other two pillars of the risk-based approach.
Indeed, in general terms, migrations of the degree of risk affect both
the potential returns of the investment and, together with the costs
charged, the recommended time horizon.2
The risk metric used to measure the variability of the possible
returns of the product over its time horizon is the volatility. This
choice reconciles the need for representativeness and simplicity with
the requirement for minimising subjective assumptions and compu-
tational difficulties. In fact, volatility combines ease of calculation,
the ability to express the actual riskiness of a product and a strong
affinity with the other existing metrics, like the drawdown, the max-
imum drawdown, the value-at-risk or the expected shortfall. Ulti-
mately, the theoretical simplicity of the volatility makes it the best
tool to ensure an objective risk measurement and also a fair com-
parison across products when faced with quite unconventional and
complex financial structures.
More precisely, the annualised volatility of the daily returns of the
financial product is adopted. It is important to observe that, by con-
struction, the basis chosen to calculate the returns, together with the
extent of the data used (ie, the number of years over which returns
are observed), affects the possible values of the volatility. For a given
number of years, a longer basis leads, in fact, to a lower number of
return observations and a higher smoothness of the risk metric used
(and vice versa), with a consequent impact on the outcomes of the
procedure to calibrate the grid of volatility intervals and also on the
detection of the migrations between different risk classes.
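A minimal sketch of the metric itself (assuming log returns and 252 trading days per year; the sample series is invented):

```python
# Minimal sketch of the second pillar's risk metric: annualised volatility of
# daily returns (assumptions: log returns, 252 trading days per year).
import math

def annualised_volatility(prices, periods_per_year=252):
    returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
    return math.sqrt(var * periods_per_year)

flat = annualised_volatility([100.0] * 10)                   # 0.0
noisy = annualised_volatility([100, 101, 100, 101, 100, 101])
# the basis matters: the same product observed on a weekly basis would be
# annualised with periods_per_year=52, giving a smoother, lower-frequency metric
```

As the text observes, changing the basis of the returns changes both the number of observations and the smoothness of the resulting metric, which is why the calibration of the grid depends on it.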
It follows that, if issuers have internal risk measurement and
management models based on the annualised volatility of returns

observed at a lower frequency (eg, weekly or monthly) or on metrics
other than pure volatility, the validity of the calibration procedure
described in this chapter is no longer guaranteed. Hence, it can
become necessary to perform new specific calibrations of the optimal
grid of volatility intervals and to implement new criteria to identify
migrations in accordance with the internal models adopted and also
with the requirements qualifying the methodology presented below.
This methodology relies on the general argument that the width
of volatility intervals has to be carefully calibrated (also with respect
to the total number of intervals) in order to meet some optimality
requirements that arise from the need to preserve the relationship
existing between risk, losses and volatility, and to ensure at the same
time the representativeness of each class in terms of robustness and
homogeneity of the grid. Otherwise there could be some regulatory
incentive for people who engineer the product or manage the assets
in their portfolio to prefer one class over another.
Since more risk means more volatility and hence also the possi-
bility of higher losses, the absolute width of the volatility intervals
must be increasing.
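As an illustration only (the bounds and class names below are invented, not the calibrated grid derived later in the chapter), a grid with increasing absolute widths and the associated class lookup might look like:

```python
# Purely illustrative grid (invented bounds and adjectives, not the book's
# calibrated intervals): volatility intervals of increasing absolute width
# mapped onto qualitative risk classes.
import bisect

UPPER_BOUNDS = [0.005, 0.015, 0.035, 0.075, 0.15, 0.30]   # annualised volatility
CLASSES = ["very low", "low", "medium-low", "medium",
           "medium-high", "high", "very high"]             # one more than bounds

def degree_of_risk(volatility):
    return CLASSES[bisect.bisect_right(UPPER_BOUNDS, volatility)]

# the widths of consecutive intervals must be increasing
widths = [UPPER_BOUNDS[0]] + [b - a for a, b in zip(UPPER_BOUNDS, UPPER_BOUNDS[1:])]
```

Here `degree_of_risk(0.02)` falls in the third class, and the increasing sequence of widths encodes the requirement just stated.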
In order to safeguard the representativeness of each risk class, this
requirement cannot be pursued according to naïve solutions (ie, those
which are not backed by robust quantitative approaches or which
are not consistent with the fundamental relation between risk, losses
and volatility) which polarise the riskiness of the products, with
some classes too wide and others too narrow, structurally leading to
a different exposure to events of migration.
Volatility intervals must allow effective discrimination between
products with heterogeneous risk profiles and they also have to
ensure the stability of the degree of risk in the face of changes in the
value of the product caused by normal movements in the reference
markets or by the specific structuring solutions or by the ordinary
activity of the asset manager, if any. This means that, depending on
the product considered, the width of any interval must be able to
accommodate the working of the quantitative algorithms behind its
financial engineering or the short-term changes in the investment
policy aimed at achieving the target declared to investors, avoiding
spurious signals of migration but also not missing the true ones.
Excessively narrow intervals would be oversensitive to normal
market movements or, in the case of products actively managed

over time, like mutual funds or unit-linked policies, to reallocations
in the portfolio of invested assets due to the decisions of an asset
manager who is operating in line with their mandate. On the other
hand, overly large intervals could determine the concentration of
products quite different in terms of riskiness on a limited number of
qualitative classes.
Besides the requirement of an increasing absolute width, the opti-
mal grid must exclude the possibility of artificial migrations in the
degree of risk (robustness requirement) and it must also ensure that
products belonging to different volatility intervals have basically
the same exposure and a coherent reaction to structural and sys-
temic market shocks and, consequently, also to the risk of migrating
into another risk class (homogeneity requirement).
The transition to classes more or less risky than the original one has
two main causes. The first derives from the deliberate intention of
modifying the target risk of the product or, for other types of financial
structures, the investment policy actually implemented with respect
to the one initially declared. The second cause is the impact on the
product's volatility of significant and sudden changes in market
conditions, which corresponds to what can be technically consid-
ered as the migration risk. It follows that a calibration that does not
care about this aspect could favour opportunistic behaviour, and,
specifically, the selection of the volatility intervals less exposed to the
migration risk, which give the appearance of a more stable financial
investment.

3.1 METHODOLOGY TO CALIBRATE AN OPTIMAL GRID


In order to identify the optimal grid of volatility intervals, in this
chapter a calibration methodology is defined and implemented that
makes use of suitable simulative and predictive models, which, con-
sistent with the formalisation of the first and third pillars of the risk-
based approach (see Chapters 2 and 4), rely on a forward-looking
logic.
Given the full space of the volatilities available on the market, the
optimality requirements presented in the previous section identify,
in quantitative terms, a complex problem of stochastic non-linear
programming defined according to specific admissibility conditions.
These conditions ensure that the risk budget associated with any
volatility interval is actually sustainable on the market (so-called

market feasibility). It means that, except in a few extreme cases,
the dynamics of hypothetical products managed by different asset
managers, each one compliant with a predetermined risk budget (so-
called automatic asset managers), are in line with the expectations
that can reasonably be attained on the market.
The methodology used for the second pillar first introduces a
model to describe the behaviour of an automatic asset manager
who takes their investment decisions each day in order to respect
a specific risk budget agreed with their board in terms of a given
volatility interval. Hence, any product is assumed to be managed
by an asset manager; this is a simplifying assumption that does
not affect the validity of the optimal grid resulting from the cal-
ibration methodology for the entire universe of non-equity prod-
ucts, including those which are not managed by a person who
can change the elementary assets constituting the product over
time.
A further modelling assumption is used to simulate, under the
risk-neutral measure, the evolution of the volatility realised by
a hypothetical product managed by an automatic asset manager
operating on the above-mentioned risk budget.
The adoption of a forward-looking simulative framework instead
of considering data empirically observed on a sample of products
with a given risk budget comes from the fact that, regardless of the
sample size, what happened in the past is not necessarily the result
of asset allocation decisions compliant with suitable procedures of
risk management. In other words, the data generation process of
historical data could include, at least in part, observations produced
either on the basis of unsuitable risk measurement and management
procedures or in violation of effective procedures. On the other hand,
the forward-looking simulative approach allows the limitation of the
noise factors of the analysis to only exogenous market phenomena,
which can be properly handled by the models, and the exclusion
of the endogenous distortions that could come from elusive or just
inefficient management practices.
Market feasibility is, to a first approximation, an indicator
that the choices made by the automatic asset manager (optimised by
construction with respect to a given risk budget) lead the product
to realise volatilities which do not differ significantly from normal
market expectations.

A QUANTITATIVE FRAMEWORK TO ASSESS THE RISK–REWARD PROFILE OF NON-EQUITY PRODUCTS

In general terms, the market uses the information on the realised
volatilities until a given day to make a forecast of the expected
volatility for the next day. From this perspective, a precise risk budget is
market feasible when the outcome of the management activity in
terms of volatility is consistent with the market forecast.
This consistency is achieved if the realised volatility exhibits
only relatively few deviations (so-called management failures) with
respect to market expectations on this variable, and these devia-
tions are attributable to the occurrence of significant and unexpected
shocks on the reference markets.
If, however, the number of management failures is too high, it
means that, in order to meet their risk budget, the automatic asset
manager takes asset allocation decisions which are typically incom-
patible with the normal evolution of the markets and the expec-
tations that are formed in these markets, thus indicating that the
volatility interval associated with that budget is not feasible and
that its extremes should be revised.
It is important to note that the calibration of the optimal grid
requires a comprehensive and contextual assessment of the market
feasibility of all the risk budgets in which to divide the space of the
possible volatilities. It follows that the solution to the problem of
interest can be defined as a Pareto optimum: a revision of any one risk
budget is also reflected in the others, hence influencing their market
feasibility, which, as anticipated, must hold equally for all intervals
of the optimal grid.
Broadly speaking, the calibration procedure can be summarised
in the following steps.

1. A model is defined that describes the behaviour of an automatic
asset manager who makes their investment choices to
meet a risk budget expressed by a specific volatility interval
which they have contracted with their board.
2. A model is assumed to describe the dynamics of a product
managed by the automatic asset manager of step 1, and to sim-
ulate the time evolution of the realised volatility of a hypothet-
ical non-equity product pursuing a target risk which coincides
with the risk budget of the automatic asset manager.
3. To describe the way in which the market develops its expec-
tations about future volatility, a predictive model is defined

THE SECOND PILLAR: DEGREE OF RISK

that, using a fairly limited number of observations of realised
volatility, is able to estimate, under the real-world measure,
an interval of values that this financial variable can
assume with a reasonably high level of confidence.
4. Following the predictive model referred to in step 3, the def-
inition of management failure is introduced and the stochas-
tic non-linear programming problem for the calibration of the
optimal grid of volatility intervals is formalised.
The solution to this problem uniquely identifies the optimal num-
ber of volatility intervals and the optimal extremes for each inter-
val in accordance with the requirements of increasing absolute
width, representativeness of each interval and discriminatory power
between products with heterogeneous risk profiles.
Through a constant corresponding to the relative width of the
intervals, the solution also leads to close relationships between the
intervals (except the first and last) that ensure the fulfilment of the
robustness and homogeneity requirements. In particular, all the optimal
intervals placed in intermediate positions are associated with a
limited and homogeneous incidence of management failures. Consequently,
the optimal grid is market feasible in its entirety, in the
sense that, apart from marginal differences, all intervals show a small
and substantially equal number of outliers; these properties, as stated
above, prevent spurious migrations and exclude ex ante opportunistic
conduct by managers keen to minimise their exposure to
migration risk.
The volatility intervals obtained upon completion of the cali-
bration procedure are therefore optimised for risk-target products,
meaning that, by construction, they present the least possible expo-
sure to the migration risk. Moreover, the same grid of optimal
intervals is also an objective reference for the determination of the
degree of risk in the case of return-target and benchmark financial
structures, favouring the immediate comparability between the risk
profiles of all non-equity investment products.

3.2 THE MODEL FOR THE AUTOMATIC ASSET MANAGER


The behaviour of an automatic asset manager who has a risk
budget expressed in terms of an interval of annualised volatilities
[σ_min,AAM, σ_max,AAM] can be represented through an appropriate
calibration of a stochastic model for the volatility.


To meet their management constraint, the automatic asset manager
acts with the aim of avoiding coming too close to the extremes
of the interval. On the one hand they favour volatility values close
to the average of the interval [σ_min,AAM, σ_max,AAM]; on
the other hand, if market conditions should excessively depress or
increase the volatility of their portfolio, the automatic asset manager
takes action to promptly return it to the aforementioned mean value.
The behaviour of this automatic asset manager can be formalised
as follows.

Proposition 3.1. The automatic asset manager with a risk budget
[σ_min,AAM, σ_max,AAM] is described by an appropriate parametric
calibration of the following stochastic differential equation under the
risk-neutral probability measure Q

    dσ_t² = λ[θ − σ_t²] dt + v σ_t dW_{2,t}   (3.1)

which governs the stochastic process of the variance, {σ_t²}_{t≥0}, of the
product managed by the automatic asset manager and where

• {W_{2,t}}_{t≥0} is a standard Brownian motion under Q;

• λ, θ and v are the parameters which identify, respectively, the
speed, the long-run mean and the volatility of volatility (vol-of-vol)
of the stochastic process {σ_t²}_{t≥0} and which must be
properly calibrated;

• the initial condition σ₀² has a Gamma probability density
function with parameters 2λθ/v² and 2λ/v², ie

    σ₀² ∼ Γ(2λθ/v², 2λ/v²)

To calibrate the parameters λ and θ in the right-hand side of
Equation 3.1 the following proposition is applied.

Proposition 3.2. In the model of Proposition 3.1 describing the
behaviour of the automatic asset manager, the parameters λ and θ
of Equation 3.1 satisfy the following equalities

    θ = σ²_mean,AAM   (3.2)

    λ = 2   (3.3)

where σ_mean,AAM = ½(σ_min,AAM + σ_max,AAM).


Intuitively, the parametric choice expressed by Equation 3.2
ensures the mean reversion of the variance to the square of the average
value of the volatility interval, while Equation 3.3 precludes
systematic preferences of the automatic asset manager for volatility
values higher or lower than the average of the interval. These parametric
choices affect the stationary (and asymptotic) distribution of
the stochastic process of the variance, which is a Gamma distribution
with parameters (2λθ/v², 2λ/v²) (Revuz and Yor 1999), making it
substantially symmetrical about its long-run mean. In particular, the latter
property requires the use of a value of λ high enough to compensate for
the positive skewness typical of square-root processes, like that of
Equation 3.1.
The last aspect that characterises, by assumption, the behaviour
of the automatic asset manager is the ability to comply at all times,
except in exceptional (and, in any case, negligible) circumstances,
with their risk budget; this ability is reflected in the choice of the
vol-of-vol parameter, v. In this respect, the following proposition holds.
Proposition 3.3. In the model of Proposition 3.1, set

    θ = σ²_mean,AAM,   λ = 2

The parameter v is uniquely determined to satisfy the following
equation

    F(σ²_min,AAM; 2λθ/v², 2λ/v²) + 1 − F(σ²_max,AAM; 2λθ/v², 2λ/v²) = 1%   (3.4)

where F(x; a, b) is the cumulative probability distribution evaluated
at x of a random variable that has a probability law of Gamma type
with parameters a and b, and it has the following functional form

    F(x; a, b) = γ(a, bx)/Γ(a)

where γ(a, bx) is the lower incomplete gamma function.
The choice of defining the parameter v according to Equation 3.4
is used to obtain a stationary distribution of the variance that is
truncated, with a level of approximation of 99%, at the extremes of the
volatility interval that represents the management constraint for the
automatic asset manager.3
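The calibration of v implied by Equation 3.4 can be sketched numerically. This is a hedged illustration, not the book's own implementation: it assumes the reconstruction above (θ = σ²_mean,AAM, λ = 2, and a Gamma stationary law with shape 2λθ/v² and rate 2λ/v²), and the bracketing interval for the root search is an assumption.

```python
# Sketch: solve Equation 3.4 for the vol-of-vol v, given a risk budget [5%, 20%].
from scipy.optimize import brentq
from scipy.stats import gamma

sigma_min, sigma_max = 0.05, 0.20              # risk budget [5%, 20%]
sigma_mean = 0.5 * (sigma_min + sigma_max)
lam, theta = 2.0, sigma_mean**2                # Equations 3.3 and 3.2

def mass_outside(v):
    # Stationary law of the variance: Gamma(shape 2*lam*theta/v^2, rate 2*lam/v^2)
    a, b = 2 * lam * theta / v**2, 2 * lam / v**2
    return (gamma.cdf(sigma_min**2, a, scale=1 / b)
            + 1.0 - gamma.cdf(sigma_max**2, a, scale=1 / b))

# Equation 3.4: choose v so that 1% of the stationary mass lies outside the budget
v = brentq(lambda x: mass_outside(x) - 0.01, 1e-3, 2.0)
print(v)
```

For a narrow bracket the residual mass is monotone enough in v for a simple root search; a production calibration would of course validate the bracket for each budget.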
The above criteria for choosing the parameters λ, θ and v ensure
that, as stated above, the stationary distribution of the volatility is


Figure 3.1 Stationary density of the volatility: behaviour of an
automatic asset manager with a risk budget equal to [5%, 20%]

[Histogram of the simulated stationary distribution; x-axis: σ (%), from 0 to 25; y-axis: frequency]

substantially symmetrical around σ_mean,AAM and exhibits a gradual
decay of the probability mass as it gets closer to the tails, according
to a classical bell shape that assigns a lower probability to the values
which are closest to the extremes of the interval.
The model presented in this section applies to any volatility inter-
val and, therefore, it works regardless of the realism of a given risk
budget as well as of its market feasibility, the latter being an aspect
that, as explained later, requires the analysis of the management fail-
ures of each interval together with those of the other intervals of any
grid in which it is contained.
Figure 3.1 illustrates the stationary probability distribution of the
volatility simulated using the stochastic differential equation 3.1
and the above parametric choices; it represents the behaviour of
an automatic asset manager operating on the volatility interval
[5%, 20%].
As shown by Figure 3.1, the model for the automatic asset man-
ager assumes that the whole predetermined risk budget is used, so
as to exclude from the analysis the effect of specific management
decisions (eg, to keep the volatility in a proper subset of the interval
[σ_min,AAM, σ_max,AAM]). This is equivalent to the assumption of
full compliance of the theoretical behaviour of this operator with
the mandate to operate effectively throughout the risk budget at
their disposal. This element is particularly important because, as
discussed in Section 3.5, the width of the intervals is critical for the
calibration of the optimal grid.
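The stationary behaviour just described can be checked by simulating Equation 3.1 directly. The sketch below is an assumption-laden illustration: it uses a full-truncation Euler scheme (the book does not prescribe a discretisation), recomputes v from Equation 3.4 as reconstructed above, and then verifies that roughly 99% of the long-run volatility mass lies inside the [5%, 20%] budget, as in Figure 3.1.

```python
# Simulate the variance process of Equation 3.1 and check the 99% containment
# implied by Equation 3.4 for the budget [5%, 20%]. Euler scheme is an assumption.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import gamma

sigma_min, sigma_max = 0.05, 0.20
lam, theta = 2.0, (0.5 * (sigma_min + sigma_max))**2

def mass_outside(v):
    a, b = 2 * lam * theta / v**2, 2 * lam / v**2
    return (gamma.cdf(sigma_min**2, a, scale=1 / b)
            + 1.0 - gamma.cdf(sigma_max**2, a, scale=1 / b))

v = brentq(lambda x: mass_outside(x) - 0.01, 1e-3, 2.0)   # Equation 3.4

rng = np.random.default_rng(0)
dt, n_steps, burn = 1 / 252, 200_000, 20_000
x = np.empty(n_steps)           # variance path
x[0] = theta
for k in range(n_steps - 1):
    x[k + 1] = (x[k] + lam * (theta - x[k]) * dt
                + v * np.sqrt(max(x[k], 0.0) * dt) * rng.standard_normal())
sigma = np.sqrt(np.maximum(x[burn:], 0.0))
inside = ((sigma >= sigma_min) & (sigma <= sigma_max)).mean()
print(round(inside, 3))
```

With these parameters the Feller condition (2λθ > v²) holds comfortably, so the truncation at zero is essentially never active.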


3.3 THE MODEL TO SIMULATE THE VOLATILITY


Given the model described in Section 3.2, a further modelling
assumption makes it possible to describe the evolution of a hypothetical
product managed by an automatic asset manager with a risk budget
corresponding to the volatility interval [σ_min,AAM, σ_max,AAM].
For this purpose it is sufficient to supplement Equation 3.1 with
a stochastic differential equation that governs the stochastic pro-
cess for the value of the product managed by the automatic asset
manager. This is achieved by the following proposition.
Proposition 3.4. The time evolution of a product managed by an
automatic asset manager with a risk budget [σ_min,AAM, σ_max,AAM] is
governed by the following pair of stochastic differential equations
under the risk-neutral measure Q

    dS_t = rS_t dt + σ_t S_t dW_{1,t}   (3.5)

    dσ_t² = λ[θ − σ_t²] dt + v σ_t dW_{2,t}   (3.1)

where

• Equation 3.5 describes the stochastic process of the product
value, {S_t}_{t≥0};

• r is the risk-free rate;4

• σ_t² is the variance of the process {S_t}_{t≥0} and its stochastic
process is governed by the stochastic differential equation 3.1,
whose parameters λ, θ and v are calibrated as described in
Section 3.2;

• {W_{1,t}}_{t≥0} and {W_{2,t}}_{t≥0} are two standard Brownian motions
under Q and are linked by a correlation coefficient ρ (ie,
dW_{1,t} dW_{2,t} = ρ dt).5

Remark 3.5. The pair composed of the stochastic differential equations
3.5 and 3.1 is known as the Heston model (Heston 1993).

Through the model of Proposition 3.4 it is possible to simulate
the time evolution of the annualised volatility of the daily returns of the
product managed by an automatic asset manager with a risk budget
[σ_min,AAM, σ_max,AAM].
Indeed, once this interval is known, the model identified by the
stochastic differential equations 3.5 and 3.1, where the parameters
are chosen in accordance with Equations 3.2–3.4, allows simulation,


on a daily basis and over a period of T years, of m ∈ ℕ trajectories
of the process {S_t}_{t≥0}.
Each trajectory is composed of N + 1 observations (with N = 252T)
relative to the time instants t_k defined as

    t_k = kΔt   (3.6)

where Δt = 1/252 and k = 0, ..., N. Correspondingly there exist
m ∈ ℕ trajectories of daily returns, denoted by {R_{t_k}^{(i)}}_{k=1,2,...,N; i=1,2,...,m},
each composed of N observations and such that

    R_{t_k}^{(i)} = ln(S_{t_k}^{(i)} / S_{t_k−Δt}^{(i)})   (3.7)

Starting from the m trajectories of returns obtained in this way and
setting the width δ of the time window of observed returns used to
calculate a value of volatility, the corresponding m trajectories of
annualised volatility are uniquely determined. In fact, the following
proposition holds.
Proposition 3.6. Given the m ∈ ℕ trajectories of daily returns R_{t_k}^{(i)},
associated with the ith trajectory of a product with a risk budget
[σ_min,AAM, σ_max,AAM] and determined according to Equation 3.7, and
given the width δ of the time window for observing the returns
needed to calculate a value of volatility, m ∈ ℕ trajectories of annualised
volatility are uniquely determined. Each trajectory is composed
of H = N − δ + 1 = 252T − δ + 1 observations of volatility
calculated using the following formula

    σ_k^{(i,[σ_min,AAM, σ_max,AAM])} = √[(252/δ) Σ_{j=k−δ+1}^{k} (R_{t_j}^{(i)} − R̄_{t_k}^{(i)})²],
        k = δ, ..., N; i = 1, 2, ..., m   (3.8)

where

    R̄_{t_k}^{(i)} = (1/δ) Σ_{j=k−δ+1}^{k} R_{t_j}^{(i)}   (3.9)

The width δ of the time window for observing the returns significantly
affects the results of the calibration procedure and also,
consequently, the setting of appropriate rules for the identification
of migrations between classes of risk.
In fact, for the same N, and hence T, a higher (lower) value of δ
determines, through the equality H = N − δ + 1 = 252T − δ + 1, a


Figure 3.2 Simulated trajectories of the volatility of a product with a risk
budget [5%, 20%]

[Plot of simulated volatility paths; x-axis: time, from 1.0 to 2.0; y-axis: σ (%), from 0 to 25]

shorter (longer) trajectory of annualised volatility, and
also affects the value of each individual observation.
In this work, the time window for observing returns is annual
(ie, δ = 252 days), which is consistent with the standard period
of validity of the precontractual disclosure documentation of non-equity
products and represents a reasonable compromise between
the benefit, in terms of greater wealth of information, deriving from
the use of a sufficiently high number of observations and the lesser
significance of the data further back in time.6
Consequently, the number H of observations of annualised volatility
for each trajectory is equal to 252(T − 1) + 1. In order to obtain the
minimum number of observations of volatility at which there are two
values calculated using completely separate observation windows,
T is set equal to two years, corresponding to 253 volatilities for each
trajectory.
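The simulation of Propositions 3.4 and 3.6 can be sketched as follows. This is a hedged example: the log-Euler scheme and the values of r, ρ and v are illustrative assumptions (v is of the order implied by Equation 3.4 for this budget, not a book-supplied figure).

```python
# m Heston trajectories over T = 2 years, daily log-returns (Equation 3.7)
# and rolling annualised volatility with window delta = 252 (Equations 3.8-3.9).
import numpy as np

rng = np.random.default_rng(1)
m, T = 100, 2
dt, N = 1 / 252, 252 * 2                       # N = 252T daily steps
r, rho = 0.02, -0.3                            # assumed rate and correlation
lam, theta, v = 2.0, 0.125**2, 0.12            # budget [5%, 20%]; v assumed

S = np.full(m, 1.0)
var = np.full(m, theta)
R = np.empty((m, N))                           # daily log-returns
for k in range(N):
    z1 = rng.standard_normal(m)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(m)
    sig = np.sqrt(np.maximum(var, 0.0))
    S_new = S * np.exp((r - 0.5 * sig**2) * dt + sig * np.sqrt(dt) * z1)   # Eq 3.5
    var = var + lam * (theta - var) * dt + v * sig * np.sqrt(dt) * z2      # Eq 3.1
    R[:, k] = np.log(S_new / S)                # Equation 3.7
    S = S_new

delta = 252                                    # annual observation window
H = N - delta + 1                              # 252(T - 1) + 1 = 253 volatilities
vol = np.empty((m, H))
for j in range(H):
    w = R[:, j:j + delta]
    dev = w - w.mean(axis=1, keepdims=True)    # Equation 3.9
    vol[:, j] = np.sqrt(252 / delta * (dev**2).sum(axis=1))   # Equation 3.8
print(vol.shape)
```

Each row of `vol` is one of the m trajectories of 253 annualised volatilities, ie, the objects plotted in Figure 3.2.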
Figure 3.2 shows, for example, the evolution of m = 100 trajecto-
ries of annualised volatility realised by a product with a risk budget
equal to [5%, 20%].

3.4 THE PREDICTIVE MODEL FOR THE VOLATILITY


The model in Section 3.2 describes the behaviour of an automatic
asset manager with a risk budget corresponding to a given volatility
interval [σ_min,AAM, σ_max,AAM]; moreover, the model described in


Section 3.3 assumes that there is a hypothetical product consistent
with that risk budget and therefore allows simulation of the returns
and the realised volatility over time.
The study of the market feasibility of such a product is carried out
by examining the incidence of its management failures, ie, the cases
where the realised volatility obtained via simulation is different from
the expectations drawn from the market on the future behaviour of
this financial variable.
Indeed, while it is natural to find management failures associated
with significant and sudden shocks of market conditions (as the
same expectations of the market are made without knowledge of
these events), it is equally natural to assume that shocks of this type
are infrequent and thus, also, that the management failures made by
the automatic asset manager should be relatively few.
Instead, when the incidence of failures is high, the failures cannot
be attributed entirely to the effect of significant and sudden changes
in market conditions. A similar situation indicates that the choices
of asset allocation made by the automatic asset manager to meet
their risk budget are too often incompatible with the normal trends
observed in the markets and with the expectations that the markets
themselves develop on the basis of these trends, pointing out that
the risk budget is not market feasible.
The study of the market feasibility is conducted through a pre-
dictive model that, using the information contained in a relatively
small number of realised values of annualised volatilities, returns
a prediction interval of the future volatility with a reasonably high
confidence level and under the real-world measure.
Reasoning in a bitemporal logic, on day k the automatic asset
manager takes their portfolio rebalancing decisions and selects the
assets to invest in or to divest (even if only in part).
The volatility realised on day k + 1 will either fall within the
prediction interval established by the market at day k or fall outside
this interval, the latter case resulting in a management failure.
In the model used to simulate the volatility (see Section 3.3), the
influence of the specific market conditions is reflected in the value
taken by the standard Brownian motion {W_{1,t}}_{t≥0} appearing in the
right-hand side of Equation 3.5, which is distributed according to a
normal law with zero mean and variance t. According to this model,
the main shocks correspond to the realisation of values placed on the

tails of the normal distribution and therefore are rare but of relevant
size.
As mentioned above, in formulating its forecasts on the future
volatility the market relies on a sigma algebra that incorporates the
information recorded up to day k. By construction, these forecasts
cannot incorporate what happens in the period between day k and the
next day. This lack of information becomes more relevant if the
events occurring in this period are capable of significantly influencing
the volatility that will be realised on day k + 1, thus creating
the conditions for the occurrence of a management failure.
However, given the low probability associated with shocks of
large magnitude, if the number of management failures is too high,
then it is (at least in part) abnormal, as it indicates that meeting
their risk budget requires the automatic asset manager to
put in place a management strategy which is anomalous compared
with the reasonable expectations of the market.
The predictive model used in this work is the diffusion limit of the
M-Garch(1,1) of Geweke, Pantula and Milhøj.7 There are two reasons
for the choice of this model.
First, the definition of prediction intervals starting from stochas-
tic differential equations can produce robust estimates of the future
volatility by using a few observations of the past values of this
financial variable.
Since solutions based on stochastic difference equations, such as
those that characterise traditional Garch models, operate in discrete
time, they provide reliable forecasts of the volatility only conditional
on having a sufficiently high number of observations. With only a few
observations, however, they would pose problems of statistical
significance or of computational difficulty.
In contrast, the forecasts made by continuous-time models have
the advantage of performing well, even when working on poorer
sigma algebras, because they extrapolate the information on the
recent behaviour of the variable of interest so as to allow a prompt
revision of the width and the levels of the prediction intervals (so-
called adaptivity), and hence to exclude effects known as echoes of
the markets, defined as the persistence of wrong predictions due to
the lack of an embedded updating of the bounds of the prediction
intervals.8


The second reason is related to the choice of a specific Garch
model, the M-Garch(1,1). This choice follows from the fact that, as
outlined in Section 3.4.2, the distributive properties of the diffusion
process to which the M-Garch(1,1) weakly converges are known and,
therefore, it is possible to define in closed form confidence intervals
for the prediction of the volatility at a certain future date.

3.4.1 The diffusion limit of the M-Garch(1,1)


The M-Garch(1,1) model is defined as follows.
Definition 3.7. Let (Ω, ℱ, P) be a probability space and let {ℱ_k}_{k∈ℕ}
be a filtration on this space generated by the sequence of independent
and identically distributed (iid) random variables {ln|Z_k|}_{k∈ℕ},
where k ∈ ℕ is the discrete time indicator and {Z_k}_{k∈ℕ} is a sequence
of iid random variables with a standard normal distribution on ℝ.
Also let {ln σ²_k}_{k∈ℕ} be a discrete stochastic process defined on the
above probability space and such that ln σ²_k : ℕ × Ω → (ℝ, B(ℝ)).
Then, a multiplicative Garch model of order (1,1), also called M-Garch(1,1),
is defined by the following stochastic difference equation
with respect to the filtration {ℱ_k}_{k∈ℕ}

    ln σ²_{k+1} = ω₀^{(k)} + ω₁^{(k)} ln σ²_k + ω₁^{(k)} ln Z²_k   (3.10)

or equivalently

    ln σ²_{k+1} = ω₀^{(k)} + ω₁^{(k)} ln σ²_k + 2ω₁^{(k)} ln|Z_k|   (3.11)

where ω₀^{(k)} and ω₁^{(k)} (with ω₁^{(k)} > 0) are two parameters whose
superscript indicates that they refer to the discrete process {ln σ²_k}_{k∈ℕ},
and the initial condition of this process, ie, ln σ₀², is known and is equal
to l₀.
For ease of reference, Equation 3.11 is rewritten in differential
terms

    ln σ²_{k+1} − ln σ²_k = ω₀^{(k)} + (ω₁^{(k)} − 1) ln σ²_k + 2ω₁^{(k)} ln|Z_k|   (3.12)

Given the initial condition (equal to the constant l₀) and the fact
that {Z_k}_{k∈ℕ} is a sequence of iid normal random variables with
zero mean and unit variance, the process {ln σ²_k}_{k∈ℕ} is a discrete
Markov chain (Karatzas and Shreve 2005) with respect to the filtration
{ℱ_k}_{k∈ℕ}. The pair (ℝ, B(ℝ)) defines the measurable space of
{ln σ²_k}_{k∈ℕ}, where B(ℝ) is the Borel sigma algebra on ℝ.
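The recursion of Definition 3.7 is easy to simulate. The sketch below uses the symbols ω₀, ω₁ as reconstructed above, with illustrative values chosen (an assumption) so that the stationary mean of ln σ²_k sits near ln(0.125²), ie, a typical volatility around 12.5%.

```python
# Minimal simulation of the M-Garch(1,1) recursion of Equation 3.11:
# ln s2_{k+1} = w0 + w1 * ln s2_k + 2 * w1 * ln|Z_k|.
import numpy as np

rng = np.random.default_rng(2)
w0, w1 = 1.0, 0.95                   # illustrative; w1 in (0,1) keeps ln(sigma^2) stationary
lnvar = np.empty(10_000)
lnvar[0] = np.log(0.125**2)          # initial condition l0
z = rng.standard_normal(lnvar.size - 1)
for k in range(lnvar.size - 1):
    lnvar[k + 1] = w0 + w1 * lnvar[k] + 2 * w1 * np.log(np.abs(z[k]))
sigma_typ = np.exp(0.5 * lnvar.mean())   # volatility level implied by the mean log-variance
print(sigma_typ)
```

Since E(ln|Z_k|) = −(γ + ln 2)/2 ≈ −0.635 for standard normal Z_k, the stationary mean of ln σ²_k is (ω₀ + 2ω₁E(ln|Z_k|))/(1 − ω₁), which is what the values above were tuned against.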


Figure 3.3 Rescaling of the discrete process {ln σ²_k}_{k∈ℕ}

[Diagram: each unitary interval [k, k + 1] is subdivided into steps of width h,
mapping the observations ln σ²_k to the rescaled observations ln σ²_{kh}]

Each discrete Markov process defined in this way is identified
by the initial distribution p₀(·) and by the transition probability
p_{1,k}(·, ·),9 both defined on (ℝ, B(ℝ)). In particular, for all Γ ∈ B(ℝ),

1. P(ln σ₀² ∈ Γ) = p₀(Γ);
2. P(ln σ²_{k+1} ∈ Γ | ℱ_{k−1}) = P(ln σ²_{k+1} ∈ Γ | ln σ²_k) = p_{1,k}(ln σ²_k, Γ).10

In the specific case considered here, since the initial condition of
the Markov chain is known, the initial probability distribution p₀(·)
is such that

    p₀(ln σ₀²) = { 1 if ln σ₀² = l₀; 0 otherwise }   (3.13)

At this point the discrete Markov process {ln σ²_k}_{k∈ℕ} is rescaled
by defining for all h > 0 a new discrete Markov process {ln σ²_{kh}}_{kh≥0}
with respect to the filtration {ℱ_{kh}}_{kh≥0} generated by the sequence of
iid random variables {(ln|Z|)_{kh}}_{kh≥0}, where kh is the new discrete
time indicator. In other words, the k time intervals are divided into
1/h parts of width h, as shown in Figure 3.3.
In particular, the new discrete Markov process {ln σ²_{kh}}_{kh≥0} is
described by the following stochastic difference equation

    ln σ²_{(k+1)h} − ln σ²_{kh} = ω_{0h} + (ω_{1h} − 1)h ln σ²_{kh} + 2ω_{1h} (ln|Z|)_{kh}   (3.14)

The pair (ℝ, B(ℝ)) defines the measurable space of the process
{ln σ²_{kh}}_{kh≥0}, where B(ℝ) is the Borel sigma algebra on ℝ. Each discrete
Markov process defined in this way is identified by the initial


distribution p₀(·) and by the transition probability p_{h,kh}(·, ·),11 both
defined on (ℝ, B(ℝ)). In particular, for all Γ ∈ B(ℝ),

1. P(ln σ²_{0h} ∈ Γ) = p₀(Γ);
2. P(ln σ²_{(k+1)h} ∈ Γ | ℱ_{(k−1)h}) = P(ln σ²_{(k+1)h} ∈ Γ | ln σ²_{kh}) = p_{h,kh}(ln σ²_{kh}, Γ).12

In the specific case considered here, since the initial condition of
the Markov chain is known, the initial probability distribution p₀(·)
is such that

    p₀(ln σ²_{0h}) = { 1 if ln σ²_{0h} = l₀; 0 otherwise }   (3.15)

Moreover, the following lemma holds.

Lemma 3.8. Let k ∈ ℕ be the discrete time indicator and let
{ln|Z_k|}_{k∈ℕ} be a sequence of iid continuous random variables
which constitute the innovation term of the discrete stochastic
process defined by Equation 3.11 or, equivalently, by Equation 3.12.
Each unitary time interval, ie, [k, k + 1], is rescaled into 1/h
sub-intervals of width h, h > 0, and {(ln|Z|)_{kh}}_{kh≥0} denotes the
corresponding sequence of iid continuous random variables.
Then, if {(ln|Z|)_{kh}}_{kh≥0} is defined as

    (ln|Z|)_{kh} = √h ln|Z_k| + (h − √h) E(ln|Z_k|)   (3.16)

the following relations hold

    E((ln|Z|)_{kh}) = h E(ln|Z_k|)   (3.17)

    var((ln|Z|)_{kh}) = h var(ln|Z_k|)   (3.18)

    E([(ln|Z|)_{kh} − E((ln|Z|)_{kh})]³) = h^{3/2} E([ln|Z_k| − E(ln|Z_k|)]³)   (3.19)

Proof From Equation 3.16 it follows that

    E((ln|Z|)_{kh}) = E(√h ln|Z_k| + (h − √h) E(ln|Z_k|))

which, by applying some well-known properties of the expected
value and simplifying, becomes

    E((ln|Z|)_{kh}) = h E(ln|Z_k|)   (3.17)

From Equation 3.16 it also follows that

    var((ln|Z|)_{kh}) = var(√h ln|Z_k| + (h − √h) E(ln|Z_k|))


which, by applying some well-known properties of the variance and
simplifying, becomes

    var((ln|Z|)_{kh}) = h var(ln|Z_k|)   (3.18)

Again from Equation 3.16 it also follows that

    E([(ln|Z|)_{kh} − E((ln|Z|)_{kh})]³)
        = E([√h ln|Z_k| + (h − √h) E(ln|Z_k|) − E((ln|Z|)_{kh})]³)

which, using Equation 3.17, is equal to

    E([√h ln|Z_k| + (h − √h) E(ln|Z_k|) − h E(ln|Z_k|)]³)

By simplifying and taking the constant √h out of the expected value,
this becomes

    E([(ln|Z|)_{kh} − E((ln|Z|)_{kh})]³) = h^{3/2} E([ln|Z_k| − E(ln|Z_k|)]³)

which is Equation 3.19.
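The three relations of Lemma 3.8 can be verified numerically. In this sketch the affine map of Equation 3.16 is applied to a sample of ln|Z_k| draws; the variance and third-moment relations then hold exactly on the sample (they are properties of the affine map itself), while the mean relation holds up to Monte Carlo error.

```python
# Numerical check of Equations 3.16-3.19; E(ln|Z|) = -(gamma_E + ln 2)/2 for
# standard normal Z (from the log-chi-square distribution).
import numpy as np

rng = np.random.default_rng(3)
h = 0.1
E_lnZ = -(np.euler_gamma + np.log(2)) / 2               # exact E(ln|Z_k|)
lnZ = np.log(np.abs(rng.standard_normal(1_000_000)))
lnZ_h = np.sqrt(h) * lnZ + (h - np.sqrt(h)) * E_lnZ     # Equation 3.16

m3 = lambda x: np.mean((x - x.mean())**3)
print(lnZ_h.mean(), h * E_lnZ)        # Equation 3.17, in expectation
print(lnZ_h.var(), h * lnZ.var())     # Equation 3.18, exact under the affine map
print(m3(lnZ_h), h**1.5 * m3(lnZ))    # Equation 3.19, likewise
```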

The following are preliminary results and concepts which are useful
for stating the theorem of weak convergence of the M-Garch(1,1)
to the corresponding diffusion process.

Definition 3.9. The Skorokhod space is the space of functions from
[0, +∞[ to ℝ which are continuous to the right and have finite limits
to the left, ie,

    D = D([0, +∞[, ℝ)
      := {f : [0, +∞[ → ℝ : for all t ≥ 0, f(t⁺) = f(t) and f(t⁻) exists}   (3.20)

Theorem 3.10. For any h > 0 let (Ω, ℱ, P^h) be a probability space
where the probability measure P^h satisfies the following three
relations

    P^h(ln σ_0^{2,h} ∈ Γ) = p₀(Γ) for all Γ ∈ B(ℝ)   (3.21)

    P^h(ln σ_t^{2,h} = ln σ²_{kh}, kh ≤ t < (k + 1)h) = 1 a.s. under P^h for all k ∈ ℕ   (3.22)

    P^h(ln σ²_{(k+1)h} ∈ Γ | ℱ_{(k−1)h}) = P^h(ln σ²_{(k+1)h} ∈ Γ | ln σ²_{kh})
        = p_{h,kh}(ln σ²_{kh}, Γ)   (3.23)


Then, P^h uniquely defines, with respect to the filtration {ℱ^h_t}_{t≥0}, the
jump-continuous Markov process {ln σ_t^{2,h}}_{t≥0} associated with the
discrete Markov process {ln σ²_{kh}}_{kh≥0}, where

    ln σ_t^{2,h} : [0, +∞[ × (Ω, ℱ) → (D, B(D))

Theorem 3.10 allows the definition of the jump-continuous Markov
process {ln σ_t^{2,h}}_{t≥0} starting from the rescaled discrete Markov
process {ln σ²_{kh}}_{kh≥0} through the concepts of jump time and holding
time. Intuitively, since the future is independent of the past, given
the present, the period for which the process {ln σ_t^{2,h}}_{t≥0} will remain
in a particular state of the world must be independent of the amount
of time that the process has already spent in that state. In addition,
the discrete Markov process {ln σ²_{kh}}_{kh≥0} corresponds, in substance,
to observing the jump-continuous Markov process {ln σ_t^{2,h}}_{t≥0} only
when it jumps (ie, at the so-called jump times, denoted by J_{kh}), and
not for the whole period of maintenance of its value (ie, the so-called
holding times, denoted by T_{kh}).
The three relations 3.21, 3.22 and 3.23, satisfied by the probability
measure P^h, ensure the points above and, therefore, also that
the definition of the process {ln σ_t^{2,h}}_{t≥0} starting from the process
{ln σ²_{kh}}_{kh≥0} for kh ≤ t < (k + 1)h (where t is the continuous-time
indicator) is well posed.
In fact, these relations uniquely identify:

• the jump time J_{kh} as

    J_{kh} = kh = [t/h] h for all k ∈ ℕ and all kh ≥ 0   (3.24)

where [t/h] indicates the integer part of t/h;

• the width of the holding time T_{kh}, during which ln σ_t^{2,h} = ln σ²_{kh},
as

    T_{kh} = (k + 1)h − kh for all k ∈ ℕ and all kh ≥ 0   (3.25)

From the foregoing the deterministic functions ω_{0h} and ω_{1h} remain
unchanged.
Figure 3.4 provides a qualitative representation of the relation
between the process {ln σ²_{kh}}_{kh≥0} (denoted by the points) and the
process {ln σ_t^{2,h}}_{t≥0} (denoted by the horizontal segments).
By construction, {ln σ_t^{2,h}}_{t≥0} takes values on D and P^h defines
the probability space of this jump-continuous Markov process. It is


Figure 3.4 Relation between the processes {ln σ²_{kh}}_{kh≥0} and {ln σ_t^{2,h}}_{t≥0}

[Diagram: a step function over the time instants 0, h, 2h, ..., 6h]

clear that {ln σ_t^{2,h}}_{t≥0} is characterised by an initial distribution equal
to p₀(·) and by a transition probability p_{hs,h}(·, ·),13 both defined on
(ℝ, B(ℝ)). In particular, for all Γ ∈ B(ℝ),

1. P^h(ln σ_0^{2,h} ∈ Γ) = p₀(Γ);
2. P^h(ln σ_t^{2,h} ∈ Γ | ℱ^h_{s−h}) = P^h(ln σ_t^{2,h} ∈ Γ | ln σ_s^{2,h}) = p_{hs,h}(·, Γ) for
all t = s + h.14

In the specific case considered here, since the initial condition of
the Markov chain is known, the initial probability distribution p₀(·)
is such that

    p₀(ln σ_0^{2,h}) = { 1 if ln σ_0^{2,h} = l₀; 0 otherwise }   (3.26)

According to Equation 3.22, for t ∈ [kh, (k + 1)h) the process
{ln σ_t^{2,h}}_{t≥0} satisfies the following stochastic difference equation

    ln σ_{t+h}^{2,h} − ln σ_t^{2,h} = ω_{0h} + (ω_{1h} − 1)h ln σ_t^{2,h} + 2ω_{1h} (ln|Z|)_{ht}   (3.27)

Additionally, Lemma 3.8 can easily be extended to the process
{ln σ_t^{2,h}}_{t≥0}.

Lemma 3.11. Let {(ln|Z|)_{ht}}_{t≥0} be a sequence of iid random variables
which constitute the innovation term of the discrete stochastic
process of Equation 3.27. Then, if {(ln|Z|)_{ht}}_{t≥0} is defined as

    (ln|Z|)_{ht} = √h ln|Z_k| + (h − √h) E(ln|Z_k|)   (3.28)


the following relations hold
$$E((\ln|Z|)^h_t) = h\,E(\ln|Z_k|) \qquad (3.29)$$
$$\operatorname{var}((\ln|Z|)^h_t) = h\operatorname{var}(\ln|Z_k|) \qquad (3.30)$$
$$E([(\ln|Z|)^h_t - E((\ln|Z|)^h_t)]^3) = h^{3/2}E([\ln|Z_k| - E(\ln|Z_k|)]^3) \qquad (3.31)$$

Proof The proof is similar to that of Lemma 3.8.
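The scaling relations of Lemma 3.11 are easy to check numerically. A minimal sketch in Python (the step width $h = 0.1$ and the sample size are illustrative assumptions) draws samples of $\ln|Z_k|$ for $Z_k \sim N(0,1)$, builds the innovation of Equation 3.28 with the exact mean $E(\ln|Z_k|) = -(\gamma + \ln 2)/2$, and compares the sample moments with the right-hand sides of Equations 3.29–3.31:

```python
import numpy as np

rng = np.random.default_rng(1)
h = 0.1                                       # arbitrary step width (assumption)
E = -(np.euler_gamma + np.log(2)) / 2         # exact E(ln|Z_k|) for Z_k ~ N(0,1)
lnZ = np.log(np.abs(rng.standard_normal(1_000_000)))   # draws of ln|Z_k|

# innovation of Equation 3.28
innov = np.sqrt(h) * lnZ + (h - np.sqrt(h)) * E

# sample counterparts of the moments in Equations 3.29-3.31
m1 = innov.mean()                   # should be close to h * E(ln|Z_k|)
m2 = innov.var()                    # equals h * var(ln|Z_k|) of the sample exactly
m3 = np.mean((innov - m1) ** 3)     # equals h^{3/2} * third central sample moment
```

Note that the variance and third-moment relations hold as exact identities of the sample, because the innovation is an affine transformation of $\ln|Z_k|$; only the mean relation carries Monte Carlo error.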

At this point it is possible to state the theorem of weak convergence of the jump-continuous process $\{\ln\sigma^{2,h}_t\}_{t\geq0}$ to a diffusion process $\{\ln\sigma^2_t\}_{t\geq0}$ characterised by a unique probability distribution.

Theorem 3.12. The jump-continuous Markov process $\{\ln\sigma^{2,h}_t\}_{t\geq0}$ which satisfies Equation 3.27 weakly converges for $h \to 0$ to the diffusion process $\{\ln\sigma^2_t\}_{t\geq0}$, which has a unique distribution $P^{\ln\sigma^2_t}$ and is characterised by the following stochastic differential equation
$$d\ln\sigma^2_t = (\lambda_0 + 2\lambda_1 E(\ln|Z_t|) + (\lambda_1 - 1)\ln\sigma^2_t)\,dt + 2|\lambda_1|\sqrt{\operatorname{var}(\ln|Z_t|)}\,dW_t \qquad (3.32)$$
where $\{W_t\}_{t\geq0}$ is a standard Brownian motion under $P^{\ln\sigma^2_t}$.¹⁵

Theorem 3.12 is a specific application of the more general theorem of weak convergence of discrete Markov processes to diffusion processes (Stroock and Varadhan 1979), and, like its general form, it relies on the validity of four conditions.

Before examining the details of these conditions in the case of interest, it is useful to recall that, in intuitive terms, weak convergence means that, for $h \to 0$, the sequence of probability measures $\{P^h\}_{h>0}$ converges to the probability measure $P^{\ln\sigma^2_t}$ of the diffusion process $\{\ln\sigma^2_t\}_{t\geq0}$ on the measurable space $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$.¹⁶

In other words, for every $T$, $0 \leq T < \infty$, the probability law that generates the entire sample trajectories of $\{\ln\sigma^{2,h}_t\}_{t\geq0}$ for $0 \leq t \leq T$ converges to the probability law that generates the sample trajectories of $\ln\sigma^2_t$ for $0 \leq t \leq T$.¹⁷

The four conditions are now examined in the specific case of the weak convergence of the process $\{\ln\sigma^{2,h}_t\}_{t\geq0}$ to the process $\{\ln\sigma^2_t\}_{t\geq0}$, the latter described by Equation 3.32.¹⁸

Condition 1 requires that the first non-central conditional moment of the process $\{\ln\sigma^{2,h}_t\}_{t\geq0}$ converges for $h \to 0$ to a function which

will become the first non-central conditional moment of the process $\{\ln\sigma^2_t\}_{t\geq0}$. A similar requirement applies to the second non-central conditional moment. Moreover, to ensure the weak convergence it is necessary, in general terms, that the process $\{\ln\sigma^{2,h}_t\}_{t\geq0}$ has a non-central conditional moment of an order greater than 2 which converges to 0 as $h \to 0$. As shown below, in the case considered here, this is the third non-central conditional moment of the process $\{\ln\sigma^{2,h}_t\}_{t\geq0}$.
Condition 1. There exists a sequence of parameters $\{\lambda_0^h, \lambda_1^h\}$ such that the following limits for $h \to 0$ hold in the sense of uniform convergence¹⁹
$$\lim_{h\to0}\frac{1}{h}E[(\ln\sigma^{2,h}_{t+h} - \ln\sigma^{2,h}_t)\mid\mathcal{F}^h_t] = \lambda_0 + 2\lambda_1 E(\ln|Z_t|) + (\lambda_1 - 1)\ln\sigma^2_t \qquad (3.33)$$
$$\lim_{h\to0}\frac{1}{h}E[(\ln\sigma^{2,h}_{t+h} - \ln\sigma^{2,h}_t)^2\mid\mathcal{F}^h_t] = 4\lambda_1^2\operatorname{var}(\ln|Z_t|) \qquad (3.34)$$
$$\lim_{h\to0}\frac{1}{h}E[(\ln\sigma^{2,h}_{t+h} - \ln\sigma^{2,h}_t)^3\mid\mathcal{F}^h_t] = 0 \qquad (3.35)$$

Proof For simplicity define
$$A^h_t := \lambda_0^h + (\lambda_1^h - h)\ln\sigma^{2,h}_t \qquad (3.36)$$
$$B^h_t := 2\lambda_1^h h^{-1}(\ln|Z|)^h_t \qquad (3.37)$$
so that Equation 3.27 can be expressed as
$$\ln\sigma^{2,h}_{t+h} - \ln\sigma^{2,h}_t = A^h_t + B^h_t \qquad (3.38)$$
At this point the first three non-central conditional moments of the process $\{\ln\sigma^{2,h}_t\}_{t\geq0}$ are determined.

For the first non-central conditional moment, by using Equation 3.38, it holds that
$$E[(\ln\sigma^{2,h}_{t+h} - \ln\sigma^{2,h}_t)\mid\mathcal{F}^h_t] = E[(A^h_t + B^h_t)\mid\mathcal{F}^h_t]$$
which, since $A^h_t$ is $\mathcal{F}^h_t$-measurable and $B^h_t$ is independent of $\mathcal{F}^h_t$, becomes
$$E[(\ln\sigma^{2,h}_{t+h} - \ln\sigma^{2,h}_t)\mid\mathcal{F}^h_t] = A^h_t + E[B^h_t] \qquad (3.39)$$
Similarly, for the second and third non-central conditional moments, by again using Equation 3.38, it holds that
$$E[(\ln\sigma^{2,h}_{t+h} - \ln\sigma^{2,h}_t)^2\mid\mathcal{F}^h_t] = (A^h_t)^2 + 2A^h_t E[B^h_t] + E[(B^h_t)^2] \qquad (3.40)$$


and
$$E[(\ln\sigma^{2,h}_{t+h} - \ln\sigma^{2,h}_t)^3\mid\mathcal{F}^h_t] = (A^h_t)^3 + 3(A^h_t)^2 E[B^h_t] + 3A^h_t E[(B^h_t)^2] + E[(B^h_t)^3] \qquad (3.41)$$
By using Equation 3.39, after a little algebra, Equations 3.40 and 3.41 can respectively be rewritten as
$$E[(\ln\sigma^{2,h}_{t+h} - \ln\sigma^{2,h}_t)^2\mid\mathcal{F}^h_t] = (E[(\ln\sigma^{2,h}_{t+h} - \ln\sigma^{2,h}_t)\mid\mathcal{F}^h_t])^2 + \operatorname{var}(B^h_t) \qquad (3.42)$$
and
$$E[(\ln\sigma^{2,h}_{t+h} - \ln\sigma^{2,h}_t)^3\mid\mathcal{F}^h_t] = (E[(\ln\sigma^{2,h}_{t+h} - \ln\sigma^{2,h}_t)\mid\mathcal{F}^h_t])^3 + 3A^h_t\operatorname{var}(B^h_t) + E[(B^h_t)^3] - (E[B^h_t])^3 \qquad (3.43)$$
At this point, given Equation 3.39, proving Equation 3.33 is equivalent to proving that
$$\lim_{h\to0}\frac{1}{h}(A^h_t + E[B^h_t]) \stackrel{?}{=} \lambda_0 + 2\lambda_1 E(\ln|Z_t|) + (\lambda_1 - 1)\ln\sigma^2_t \qquad (3.44)$$

Substituting for $A^h_t$ and $B^h_t$ their expressions as given respectively by Equations 3.36 and 3.37, and using Equation 3.29 of Lemma 3.11, Equation 3.44 becomes
$$\lim_{h\to0}\frac{1}{h}[\lambda_0^h + (\lambda_1^h - h)\ln\sigma^{2,h}_t + 2\lambda_1^h E(\ln|Z_t|)] \stackrel{?}{=} \lambda_0 + 2\lambda_1 E(\ln|Z_t|) + (\lambda_1 - 1)\ln\sigma^2_t \qquad (3.45)$$
which, by the requirement of uniform convergence, yields
$$\lim_{h\to0}\frac{1}{h}[(\lambda_1^h - h)\ln\sigma^{2,h}_t] \stackrel{?}{=} (\lambda_1 - 1)\ln\sigma^2_t \qquad (3.46)$$
and
$$\lim_{h\to0}\frac{1}{h}[\lambda_0^h + 2\lambda_1^h E(\ln|Z_t|)] \stackrel{?}{=} \lambda_0 + 2\lambda_1 E(\ln|Z_t|) \qquad (3.47)$$
Similarly, given Equation 3.42, proving Equation 3.34 is equivalent to proving that
$$\lim_{h\to0}\frac{1}{h}\big[(E[(\ln\sigma^{2,h}_{t+h} - \ln\sigma^{2,h}_t)\mid\mathcal{F}^h_t])^2 + \operatorname{var}(B^h_t)\big] \stackrel{?}{=} 4\lambda_1^2\operatorname{var}(\ln|Z_t|) \qquad (3.48)$$


The left-hand side of Equation 3.48 can be re-expressed as
$$\lim_{h\to0} h\left(\frac{E[(\ln\sigma^{2,h}_{t+h} - \ln\sigma^{2,h}_t)\mid\mathcal{F}^h_t]}{h}\right)^2 + \lim_{h\to0}\frac{1}{h}\operatorname{var}(B^h_t)$$
which, if Equation 3.33 is verified, becomes
$$\lim_{h\to0}\frac{1}{h}\operatorname{var}(B^h_t)$$
Substituting for $B^h_t$ its expression as resulting from Equation 3.37 and by Equation 3.30 yields
$$\lim_{h\to0}\frac{4(\lambda_1^h)^2}{h^2}\operatorname{var}(\ln|Z_t|)$$
so that Equation 3.48 can be re-expressed as
$$\lim_{h\to0}\frac{4(\lambda_1^h)^2}{h^2}\operatorname{var}(\ln|Z_t|) \stackrel{?}{=} 4\lambda_1^2\operatorname{var}(\ln|Z_t|) \qquad (3.49)$$
Similarly, given Equation 3.43, proving Equation 3.35 is equivalent to proving that
$$\lim_{h\to0}\frac{1}{h}\big[(E[(\ln\sigma^{2,h}_{t+h} - \ln\sigma^{2,h}_t)\mid\mathcal{F}^h_t])^3 + 3A^h_t\operatorname{var}(B^h_t) + E[(B^h_t)^3] - (E[B^h_t])^3\big] \stackrel{?}{=} 0 \qquad (3.50)$$
The left-hand side of Equation 3.50 can be re-expressed as
$$\lim_{h\to0} h^2\left(\frac{E[(\ln\sigma^{2,h}_{t+h} - \ln\sigma^{2,h}_t)\mid\mathcal{F}^h_t]}{h}\right)^3 + \lim_{h\to0}\frac{1}{h}3A^h_t\operatorname{var}(B^h_t) + \lim_{h\to0}\frac{1}{h}[E((B^h_t)^3) - (E[B^h_t])^3]$$
which, if Equation 3.33 is verified, becomes
$$\lim_{h\to0}\frac{1}{h}3A^h_t\operatorname{var}(B^h_t) + \lim_{h\to0}\frac{1}{h}[E((B^h_t)^3) - (E[B^h_t])^3]$$
Substituting for $A^h_t$ and $B^h_t$ their expressions as resulting respectively from Equations 3.36 and 3.37 and using Equations 3.30 and 3.31 yields
$$\lim_{h\to0}3(\lambda_0^h + (\lambda_1^h - h)\ln\sigma^{2,h}_t)\frac{4(\lambda_1^h)^2}{h^2}\operatorname{var}(\ln|Z_t|) + \lim_{h\to0}\sqrt{h}\,\frac{8(\lambda_1^h)^3}{h^3}[E((\ln|Z_t|)^3) - (E(\ln|Z_t|))^3] \qquad (3.51)$$
h0 h


so that Equation 3.50 can be re-expressed as
$$\lim_{h\to0}3(\lambda_0^h + (\lambda_1^h - h)\ln\sigma^{2,h}_t)\frac{4(\lambda_1^h)^2}{h^2}\operatorname{var}(\ln|Z_t|) + \lim_{h\to0}\sqrt{h}\,\frac{8(\lambda_1^h)^3}{h^3}[E((\ln|Z_t|)^3) - (E(\ln|Z_t|))^3] \stackrel{?}{=} 0 \qquad (3.52)$$
In summary, Condition 1 is met if there is a sequence of parameters $\{\lambda_0^h, \lambda_1^h\}$ such that for $h \to 0$ the limits in Equations 3.46, 3.47, 3.49 and 3.52 are verified.

Setting $\lambda_1^h := h\lambda_1$ clearly satisfies Equations 3.46 and 3.49. Moreover, by setting $\lambda_0^h := h\lambda_0$, Equation 3.47 is also satisfied. Finally, substituting the values of $\lambda_0^h$ and $\lambda_1^h$, as defined above, into the left-hand side of Equation 3.52 gives
$$\lim_{h\to0}3(h\lambda_0 + (h\lambda_1 - h)\ln\sigma^{2,h}_t)\frac{4h^2\lambda_1^2}{h^2}\operatorname{var}(\ln|Z_t|) + \lim_{h\to0}\sqrt{h}\,\frac{8h^3\lambda_1^3}{h^3}[E((\ln|Z_t|)^3) - (E(\ln|Z_t|))^3]$$
and, simplifying,
$$\lim_{h\to0}3(h\lambda_0 + (h\lambda_1 - h)\ln\sigma^{2,h}_t)4\lambda_1^2\operatorname{var}(\ln|Z_t|) + \lim_{h\to0}\sqrt{h}\,8\lambda_1^3[E((\ln|Z_t|)^3) - (E(\ln|Z_t|))^3] = 0$$
and, thus, Equation 3.52 is also verified.


Condition 2. This requires the existence of the square root of the second non-central conditional moment of the diffusion process $\{\ln\sigma^2_t\}_{t\geq0}$. This condition is automatically satisfied, as the square root of interest is equal to $2|\lambda_1|\sqrt{\operatorname{var}(\ln|Z_t|)}$.

Condition 3. This requires that for $h \to 0$ the initial probability of the jump-continuous process $\{\ln\sigma^{2,h}_t\}_{t\geq0}$ converges in distribution to that of the process $\{\ln\sigma^2_t\}_{t\geq0}$. This condition is verified by construction, since for $t = 0$ it is
$$\ln\sigma^{2,h}_0 = \ln\sigma^2_0 = l_0$$

Condition 4. This requires that there exist an initial probability and first and second non-central conditional moments that uniquely identify the diffusion process $\{\ln\sigma^2_t\}_{t\geq0}$. This condition, too, is automatically verified as a result of the validity of Conditions 1 and 3.

3.4.2 Distributive properties and volatility prediction intervals

The stochastic differential equation 3.32 describes an arithmetic Ornstein–Uhlenbeck diffusion process whose distributive properties are known.

Proposition 3.13. For any given constant initial condition $\ln\sigma^2_s$ referring to time $s$, $s < t$, the diffusion process described by Equation 3.32 has a conditional probability distribution which follows a normal law whose mean and standard deviation are known, ie
$$\ln\sigma^2_t \mid \mathcal{F}_s \sim N\Bigg(\exp((\lambda_1-1)(t-s))\bigg[\ln\sigma^2_s + \frac{\lambda_0 + 2\lambda_1 E(\ln|Z_t|)}{\lambda_1 - 1}\bigg] - \frac{\lambda_0 + 2\lambda_1 E(\ln|Z_t|)}{\lambda_1 - 1};\ \sqrt{\frac{(2|\lambda_1|\sqrt{\operatorname{var}(\ln|Z_t|)})^2(\exp(2(\lambda_1-1)(t-s)) - 1)}{2(\lambda_1-1)}}\Bigg) \qquad (3.53)$$

Proof Equation 3.32 can be rewritten as
$$d\ln\sigma^2_t = (\lambda_0 + 2\lambda_1 E(\ln|Z_t|) - (1-\lambda_1)\ln\sigma^2_t)\,dt + 2|\lambda_1|\sqrt{\operatorname{var}(\ln|Z_t|)}\,dW_t \qquad (3.54)$$


and the following auxiliary process is defined
$$Y_t = f(t, \ln\sigma^2_t) = \exp((1-\lambda_1)t)\ln\sigma^2_t \qquad (3.55)$$
Applying Itô's Lemma yields
$$dY_t = (1-\lambda_1)\exp((1-\lambda_1)t)\ln\sigma^2_t\,dt + \exp((1-\lambda_1)t)\,d\ln\sigma^2_t$$
Substituting Equation 3.54 and simplifying leads to
$$dY_t = \exp((1-\lambda_1)t)(\lambda_0 + 2\lambda_1 E(\ln|Z_t|))\,dt + \exp((1-\lambda_1)t)2|\lambda_1|\sqrt{\operatorname{var}(\ln|Z_t|)}\,dW_t \qquad (3.56)$$
Given the initial condition $Y_s = \exp((1-\lambda_1)s)\ln\sigma^2_s$, the stochastic differential equation 3.56 identifies the following stochastic integral
$$Y_t = \exp((1-\lambda_1)s)\ln\sigma^2_s + \int_s^t \exp((1-\lambda_1)u)(\lambda_0 + 2\lambda_1 E(\ln|Z_u|))\,du + \int_s^t \exp((1-\lambda_1)u)2|\lambda_1|\sqrt{\operatorname{var}(\ln|Z_u|)}\,dW_u$$
Since $E(\ln|Z_u|)$ and $\operatorname{var}(\ln|Z_u|)$ are constant with respect to $u$, they can be taken out of the integrals
$$Y_t = \exp((1-\lambda_1)s)\ln\sigma^2_s + (\lambda_0 + 2\lambda_1 E(\ln|Z_t|))\int_s^t \exp((1-\lambda_1)u)\,du + 2|\lambda_1|\sqrt{\operatorname{var}(\ln|Z_t|)}\int_s^t \exp((1-\lambda_1)u)\,dW_u$$
Multiplying both terms by $\exp(-(1-\lambda_1)t)$ and recalling Equation 3.55 gives
$$\ln\sigma^2_t = \exp((\lambda_1-1)(t-s))\ln\sigma^2_s + (\lambda_0 + 2\lambda_1 E(\ln|Z_t|))\frac{\exp((\lambda_1-1)(t-s)) - 1}{\lambda_1 - 1} + 2|\lambda_1|\sqrt{\operatorname{var}(\ln|Z_t|)}\int_s^t \exp((\lambda_1-1)(t-u))\,dW_u \qquad (3.57)$$
From Equation 3.57 it follows that the conditional probability distribution of $\ln\sigma^2_t$ is normal with mean
$$E(\ln\sigma^2_t \mid \mathcal{F}_s) = \exp((\lambda_1-1)(t-s))\bigg[\ln\sigma^2_s + \frac{\lambda_0 + 2\lambda_1 E(\ln|Z_t|)}{\lambda_1 - 1}\bigg] - \frac{\lambda_0 + 2\lambda_1 E(\ln|Z_t|)}{\lambda_1 - 1} \qquad (3.58)$$


and variance
$$\operatorname{var}(\ln\sigma^2_t \mid \mathcal{F}_s) = 4\lambda_1^2\operatorname{var}(\ln|Z_t|)\frac{\exp(2(\lambda_1-1)(t-s)) - 1}{2(\lambda_1-1)} \qquad (3.59)$$
to which the following standard deviation corresponds
$$SD(\ln\sigma^2_t \mid \mathcal{F}_s) = \sqrt{\frac{(2|\lambda_1|\sqrt{\operatorname{var}(\ln|Z_t|)})^2(\exp(2(\lambda_1-1)(t-s)) - 1)}{2(\lambda_1-1)}} \qquad (3.60)$$
and, thus, Equation 3.53 is verified.
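Theorem 3.12 and Proposition 3.13 can be illustrated jointly by simulation: iterating the stochastic difference equation 3.27 with $\lambda_0^h = h\lambda_0$ and $\lambda_1^h = h\lambda_1$ for a small $h$, the terminal values of $\ln\sigma^{2,h}_t$ should reproduce the normal law of Equation 3.53. A minimal sketch (the parameter values, the initial condition, the step width and the number of paths are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
l0, l1 = 0.2, 0.9                    # illustrative lambda_0, lambda_1 (assumption)
E, V = -(np.euler_gamma + np.log(2)) / 2, np.pi ** 2 / 8  # E(ln|Z|), var(ln|Z|)
h, T, N = 1 / 250, 1.0, 100_000
x = np.full(N, -2.0)                 # ln sigma_0^2 = l_0 (here -2, an assumption)

for _ in range(round(T / h)):        # stochastic difference equation (3.27)
    lnZ = np.log(np.abs(rng.standard_normal(N)))
    innov = np.sqrt(h) * lnZ + (h - np.sqrt(h)) * E       # Equation 3.28
    x = x + h * l0 + (h * l1 - h) * x + 2 * l1 * innov    # lambda^h = h * lambda

a = l1 - 1                           # closed-form moments from (3.58) and (3.60)
m = np.exp(a * T) * (-2.0 + (l0 + 2 * l1 * E) / a) - (l0 + 2 * l1 * E) / a
s = np.sqrt(4 * l1 ** 2 * V * (np.exp(2 * a * T) - 1) / (2 * a))
```

After the loop, the sample mean and standard deviation of `x` should lie close to `m` and `s`, up to Monte Carlo error and the $O(h)$ discretisation bias.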

The knowledge of the distributive properties of the diffusion process described by Equation 3.32 allows, once a confidence level $\alpha$ has been set, the construction of the one-day prediction interval for the volatility, as shown in the following proposition.

Proposition 3.14. For a given confidence level $\alpha$, the one-day prediction interval for the volatility (ie, setting $s = t-1$) associated with the diffusion process of Equation 3.32 has the following lower and upper bounds
$$\sigma^G_{t,\min} = \exp\Bigg(\frac{1}{2}\Bigg[-z_{(1+\alpha)/2}\sqrt{\frac{(2|\lambda_1|\sqrt{\operatorname{var}(\ln|Z_t|)})^2(\exp(2(\lambda_1-1)) - 1)}{2(\lambda_1-1)}} + \exp(\lambda_1-1)\bigg(\ln\sigma^2_{t-1} + \frac{\lambda_0 + 2\lambda_1 E(\ln|Z_t|)}{\lambda_1-1}\bigg) - \frac{\lambda_0 + 2\lambda_1 E(\ln|Z_t|)}{\lambda_1-1}\Bigg]\Bigg) \qquad (3.61)$$
and
$$\sigma^G_{t,\max} = \exp\Bigg(\frac{1}{2}\Bigg[z_{(1+\alpha)/2}\sqrt{\frac{(2|\lambda_1|\sqrt{\operatorname{var}(\ln|Z_t|)})^2(\exp(2(\lambda_1-1)) - 1)}{2(\lambda_1-1)}} + \exp(\lambda_1-1)\bigg(\ln\sigma^2_{t-1} + \frac{\lambda_0 + 2\lambda_1 E(\ln|Z_t|)}{\lambda_1-1}\bigg) - \frac{\lambda_0 + 2\lambda_1 E(\ln|Z_t|)}{\lambda_1-1}\Bigg]\Bigg) \qquad (3.62)$$


where the subscript G denotes that these are the bounds of the prediction interval calculated by using the diffusion limit of the M-Garch(1,1).

Proof The proof is straightforward from Proposition 3.13.

The values of $E(\ln|Z_t|)$ and $\operatorname{var}(\ln|Z_t|)$ are deterministic functions of the Euler–Mascheroni constant, also called Euler's Gamma, whose value is the result of the following limit (Abramowitz and Stegun 1964)
$$\gamma = \lim_{n\to\infty}\Bigg(\sum_{k=1}^{n}\frac{1}{k} - \ln n\Bigg)$$
and it is approximately 0.57721.

In particular,
$$E(\ln|Z_t|) \approx -0.6352$$
and
$$\operatorname{var}(\ln|Z_t|) \approx 1.2337$$
Hence, the following two constants can be defined
$$c = 2E(\ln|Z_t|) = E(\ln Z_t^2) \approx -1.2704 \qquad (3.63)$$
$$d = 2\sqrt{\operatorname{var}(\ln|Z_t|)} = \sqrt{\operatorname{var}(\ln Z_t^2)} = \frac{\pi}{\sqrt{2}} \approx 2.2214 \qquad (3.64)$$
which allow the next corollary to Proposition 3.14 to be stated.
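The constants above follow from the standard closed forms for the log-chi-squared moments of a standard normal variate, $E(\ln|Z_t|) = -(\gamma + \ln 2)/2$ and $\operatorname{var}(\ln|Z_t|) = \pi^2/8$; a quick numerical check:

```python
import math

gamma = 0.5772156649                       # Euler-Mascheroni constant
# closed forms for Z ~ N(0,1) (log-chi-squared moments)
E_lnZ = -(gamma + math.log(2)) / 2         # E(ln|Z|)   ~ -0.6352
var_lnZ = math.pi ** 2 / 8                 # var(ln|Z|) ~  1.2337

c = 2 * E_lnZ                              # = E(ln Z^2)   ~ -1.2704
d = 2 * math.sqrt(var_lnZ)                 # = pi/sqrt(2)  ~  2.2214
```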
Corollary 3.15. For a given confidence level $\alpha$, the one-day prediction interval for the volatility (ie, setting $s = t-1$) associated with the diffusion process of Equation 3.32 has the following lower and upper bounds
$$\sigma^G_{t,\min} = \exp\Bigg(\frac{1}{2}\Bigg[-z_{(1+\alpha)/2}\,d|\lambda_1|\sqrt{\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)}} + \exp(\lambda_1-1)\bigg(\ln\sigma^2_{t-1} + \frac{\lambda_0 + c\lambda_1}{\lambda_1-1}\bigg) - \frac{\lambda_0 + c\lambda_1}{\lambda_1-1}\Bigg]\Bigg) \qquad (3.65)$$
and
$$\sigma^G_{t,\max} = \exp\Bigg(\frac{1}{2}\Bigg[z_{(1+\alpha)/2}\,d|\lambda_1|\sqrt{\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)}} + \exp(\lambda_1-1)\bigg(\ln\sigma^2_{t-1} + \frac{\lambda_0 + c\lambda_1}{\lambda_1-1}\bigg) - \frac{\lambda_0 + c\lambda_1}{\lambda_1-1}\Bigg]\Bigg) \qquad (3.66)$$
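A direct transcription of Equations 3.65 and 3.66; the input values below (the volatility at day $t-1$, $\lambda_0$, $\lambda_1$ and the 99% quantile $z_{0.995} \approx 2.5758$) are illustrative assumptions:

```python
import math

c, d = -1.2704, 2.2214             # the constants of Equations 3.63 and 3.64

def one_day_bounds(sigma_prev, lam0, lam1, z):
    """Lower/upper bounds (3.65)-(3.66); z is the quantile z_{(1+alpha)/2}."""
    root = d * abs(lam1) * math.sqrt((math.exp(2 * (lam1 - 1)) - 1)
                                     / (2 * (lam1 - 1)))
    mean = (math.exp(lam1 - 1) * (math.log(sigma_prev ** 2)
                                  + (lam0 + c * lam1) / (lam1 - 1))
            - (lam0 + c * lam1) / (lam1 - 1))
    # the factor 1/2 converts the log-variance interval into a volatility one
    return math.exp(0.5 * (-z * root + mean)), math.exp(0.5 * (z * root + mean))

lo, hi = one_day_bounds(0.10, 0.2, 0.9, 2.5758)   # illustrative inputs
```

With $z = 0$ the two bounds collapse onto the (exponentiated half of the) conditional mean of $\ln\sigma^2_t$, as expected from Proposition 3.13.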

THE SECOND PILLAR: DEGREE OF RISK

Once the functional form of the one-day prediction interval for the volatility is known (ie, for the volatility that will take place on day $t$), in order to determine the actual numerical value of the bounds of this interval it is necessary to know the value of the volatility at day $s = t-1$ and the estimates of the parameters $\lambda_0$ and $\lambda_1$.

3.4.3 Estimation of the parameters

To estimate the parameters of Equation 3.32 through the data observed in discrete time, first the following theorem must be stated, which establishes a univocal relationship between the parameters of the discrete process $\{\ln\sigma^2_k\}_{k\in\mathbb{N}}$ and those of the corresponding diffusion limit $\{\ln\sigma^2_t\}_{t\geq0}$. This relation uses the first two central conditional moments of the two processes by exploiting the result of weak convergence and, in particular, the validity of Condition 1 of Theorem 3.12. In the case of the diffusion limit of the M-Garch(1,1), this condition, in fact, allows what happens with regard to the conditional moments (central or not) of order greater than 2 to be ignored.

Theorem 3.16. Let $\{\ln\sigma^2_k\}_{k\in\mathbb{N}}$ be the discrete stochastic process described by
$$\ln\sigma^2_{k+1} = \lambda_0^{(k)} + \lambda_1^{(k)}\ln\sigma^2_k + \lambda_1^{(k)}\ln Z_k^2 \qquad (3.10)$$
or, equivalently, by
$$\ln\sigma^2_{k+1} = \lambda_0^{(k)} + \lambda_1^{(k)}\ln\sigma^2_k + 2\lambda_1^{(k)}\ln|Z_k| \qquad (3.11)$$
Also let $\{\ln\sigma^2_t\}_{t\geq0}$ be the diffusion process described by
$$d\ln\sigma^2_t = (\lambda_0 + 2\lambda_1 E(\ln|Z_t|) + (\lambda_1-1)\ln\sigma^2_t)\,dt + 2|\lambda_1|\sqrt{\operatorname{var}(\ln|Z_t|)}\,dW_t \qquad (3.32)$$
Then, the parameters of these two processes are linked by the following relations
$$\lambda_1^{(k)} = |\lambda_1|\sqrt{\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)}} \qquad (3.67)$$

A QUANTITATIVE FRAMEWORK TO ASSESS THE RISKREWARD PROFILE OF NON-EQUITY PRODUCTS


$$\lambda_0^{(k)} = -2|\lambda_1|\sqrt{\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)}}\,E(\ln|Z_k|) - |\lambda_1|\sqrt{\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)}}\,\ln\sigma^2_k + \exp(\lambda_1-1)\ln\sigma^2_k + \frac{(\lambda_0 + 2\lambda_1 E(\ln|Z_k|))(\exp(\lambda_1-1) - 1)}{\lambda_1 - 1} \qquad (3.68)$$
Proof The proof is based on the requirement for equality between the first two central conditional moments of the processes $\{\ln\sigma^2_k\}_{k\in\mathbb{N}}$ and $\{\ln\sigma^2_t\}_{t\geq0}$.

For the process $\{\ln\sigma^2_k\}_{k\in\mathbb{N}}$ the conditional moments are, respectively,
$$E(\ln\sigma^2_{k+1} \mid \mathcal{F}_{k-1}) = \lambda_0^{(k)} + \lambda_1^{(k)}\ln\sigma^2_k + 2\lambda_1^{(k)}E(\ln|Z_k|) \qquad (3.69)$$
and
$$\operatorname{var}(\ln\sigma^2_{k+1} \mid \mathcal{F}_{k-1}) = 4(\lambda_1^{(k)})^2\operatorname{var}(\ln|Z_k|) \qquad (3.70)$$
For the process $\{\ln\sigma^2_t\}_{t\geq0}$, by Proposition 3.13, they are
$$E(\ln\sigma^2_{t+1} \mid \mathcal{F}_t) = \exp(\lambda_1-1)\bigg[\ln\sigma^2_t + \frac{\lambda_0 + 2\lambda_1 E(\ln|Z_{t+1}|)}{\lambda_1-1}\bigg] - \frac{\lambda_0 + 2\lambda_1 E(\ln|Z_{t+1}|)}{\lambda_1-1} \qquad (3.71)$$
and
$$\operatorname{var}(\ln\sigma^2_{t+1} \mid \mathcal{F}_t) = 4\lambda_1^2\operatorname{var}(\ln|Z_{t+1}|)\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)} \qquad (3.72)$$
From the above it is now possible to solve the following system
$$\begin{cases} 4(\lambda_1^{(k)})^2\operatorname{var}(\ln|Z_k|) = 4\lambda_1^2\operatorname{var}(\ln|Z_{t+1}|)\dfrac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)} \\[2mm] \lambda_0^{(k)} + \lambda_1^{(k)}\ln\sigma^2_k + 2\lambda_1^{(k)}E(\ln|Z_k|) = \exp(\lambda_1-1)\ln\sigma^2_t + \dfrac{(\lambda_0 + 2\lambda_1 E(\ln|Z_{t+1}|))(\exp(\lambda_1-1) - 1)}{\lambda_1-1} \end{cases}$$
Since by definition $\ln|Z_k|$ and $\ln|Z_{k+1}|$ are iid, then, setting $t = k$, it follows that

$\ln|Z_k|$ and $\ln|Z_{t+1}|$ have the same distribution and, thus, $E(\ln|Z_k|) = E(\ln|Z_{t+1}|)$ and $\operatorname{var}(\ln|Z_k|) = \operatorname{var}(\ln|Z_{t+1}|)$;
$\ln\sigma^2_k = \ln\sigma^2_t$;

so that, simplifying, the system becomes
$$\begin{cases} (\lambda_1^{(k)})^2 = \lambda_1^2\dfrac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)} \\[2mm] \lambda_0^{(k)} + \lambda_1^{(k)}\ln\sigma^2_k + 2\lambda_1^{(k)}E(\ln|Z_k|) = \exp(\lambda_1-1)\ln\sigma^2_k + \dfrac{(\lambda_0 + 2\lambda_1 E(\ln|Z_k|))(\exp(\lambda_1-1) - 1)}{\lambda_1-1} \end{cases}$$
and hence
$$\lambda_1^{(k)} = |\lambda_1|\sqrt{\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)}} \qquad (3.73)$$
It follows that
$$\lambda_0^{(k)} + |\lambda_1|\sqrt{\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)}}\,\ln\sigma^2_k + 2|\lambda_1|\sqrt{\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)}}\,E(\ln|Z_k|) = \exp(\lambda_1-1)\ln\sigma^2_k + \frac{(\lambda_0 + 2\lambda_1 E(\ln|Z_k|))(\exp(\lambda_1-1) - 1)}{\lambda_1-1}$$
which, made explicit for $\lambda_0^{(k)}$, yields Equation 3.68.
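The mapping of Theorem 3.16 can be verified numerically: with $\lambda_1^{(k)}$ and $\lambda_0^{(k)}$ computed from Equations 3.67 and 3.68, the discrete conditional moments 3.69–3.70 must coincide with the continuous ones 3.71–3.72. A sketch with illustrative parameter values (the chosen $\lambda_0$, $\lambda_1$ and current log-variance are assumptions):

```python
import math

l0, l1 = 0.2, 0.9                            # illustrative diffusion parameters
E_ln = -(0.5772156649 + math.log(2)) / 2     # E(ln|Z|)
V = math.pi ** 2 / 8                         # var(ln|Z|)
x = -2.0                                     # current value of ln sigma_k^2

r = math.sqrt((math.exp(2 * (l1 - 1)) - 1) / (2 * (l1 - 1)))
l1_k = abs(l1) * r                                            # Equation 3.67
l0_k = (-2 * abs(l1) * r * E_ln - abs(l1) * r * x
        + math.exp(l1 - 1) * x
        + (l0 + 2 * l1 * E_ln) * (math.exp(l1 - 1) - 1) / (l1 - 1))  # Eq. 3.68

# discrete moments (3.69)-(3.70) against continuous moments (3.71)-(3.72)
m_disc = l0_k + l1_k * x + 2 * l1_k * E_ln
m_cont = (math.exp(l1 - 1) * (x + (l0 + 2 * l1 * E_ln) / (l1 - 1))
          - (l0 + 2 * l1 * E_ln) / (l1 - 1))
v_disc = 4 * l1_k ** 2 * V
v_cont = 4 * l1 ** 2 * V * (math.exp(2 * (l1 - 1)) - 1) / (2 * (l1 - 1))
```

Both pairs agree up to floating-point rounding, since the mapping was constructed exactly to match them.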

Using Equations 3.67 and 3.68 it is possible to estimate the parameters $\lambda_0$ and $\lambda_1$ of the diffusion process $\{\ln\sigma^2_t\}_{t\geq0}$ by maximising with respect to them the likelihood function calculated by using the data observed in discrete time.

In particular, assuming that the number of volatility observations used to estimate the parameters is equal to $K$ (see Remark 3.18), the likelihood function is determined by the next theorem.


Theorem 3.17. Let $K$ be the number of observations of realised volatility for the estimate of the parameters $\lambda_0$ and $\lambda_1$ appearing in the formulas of the bounds of the prediction interval given by Equations 3.65 and 3.66. Then, the corresponding likelihood function is
$$L(Y; \lambda_0, \lambda_1) = \prod_{k=1}^{K-1}\frac{1}{\sqrt{2\pi}}\,\frac{1}{|\lambda_1|}\sqrt{\frac{2(\lambda_1-1)}{\exp(2(\lambda_1-1)) - 1}}\ \exp\Bigg[\frac{1}{2|\lambda_1|}\sqrt{\frac{2(\lambda_1-1)}{\exp(2(\lambda_1-1)) - 1}}\bigg(Y_{k+1} - (\lambda_0 + c\lambda_1)\frac{\exp(\lambda_1-1) - 1}{\lambda_1-1} + c|\lambda_1|\sqrt{\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)}} - (\exp(\lambda_1-1) - 1)\ln\sigma^2_k\bigg)\Bigg]\ \exp\Bigg[-\frac{1}{2}\exp\Bigg(\frac{1}{|\lambda_1|}\sqrt{\frac{2(\lambda_1-1)}{\exp(2(\lambda_1-1)) - 1}}\bigg(Y_{k+1} - (\lambda_0 + c\lambda_1)\frac{\exp(\lambda_1-1) - 1}{\lambda_1-1} + c|\lambda_1|\sqrt{\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)}} - (\exp(\lambda_1-1) - 1)\ln\sigma^2_k\bigg)\Bigg)\Bigg] \qquad (3.74)$$
where
$$Y_{k+1} = \ln\sigma^2_{k+1} - \ln\sigma^2_k \qquad (3.75)$$

Proof Using Equations 3.67 and 3.68, Equation 3.11 can be expressed in terms of the parameters $\lambda_0$ and $\lambda_1$ of the corresponding diffusion limit, ie
$$\ln\sigma^2_{k+1} = (\lambda_0 + 2\lambda_1 E(\ln|Z_k|))\frac{\exp(\lambda_1-1) - 1}{\lambda_1-1} - 2|\lambda_1|\sqrt{\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)}}\,E(\ln|Z_k|) - |\lambda_1|\sqrt{\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)}}\,\ln\sigma^2_k + \exp(\lambda_1-1)\ln\sigma^2_k + |\lambda_1|\sqrt{\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)}}\,\ln\sigma^2_k + 2|\lambda_1|\sqrt{\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)}}\,\ln|Z_k|$$
which simplifies to
$$\ln\sigma^2_{k+1} = (\lambda_0 + 2\lambda_1 E(\ln|Z_k|))\frac{\exp(\lambda_1-1) - 1}{\lambda_1-1} - 2|\lambda_1|\sqrt{\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)}}\,E(\ln|Z_k|) + \exp(\lambda_1-1)\ln\sigma^2_k + 2|\lambda_1|\sqrt{\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)}}\,\ln|Z_k|$$
Expressing this in differential form yields
$$\ln\sigma^2_{k+1} - \ln\sigma^2_k = (\lambda_0 + 2\lambda_1 E(\ln|Z_k|))\frac{\exp(\lambda_1-1) - 1}{\lambda_1-1} - 2|\lambda_1|\sqrt{\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)}}\,E(\ln|Z_k|) + (\exp(\lambda_1-1) - 1)\ln\sigma^2_k + 2|\lambda_1|\sqrt{\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)}}\,\ln|Z_k| \qquad (3.76)$$
By
$$Y_{k+1} = \ln\sigma^2_{k+1} - \ln\sigma^2_k \qquad (3.75)$$
and recalling Equation 3.63, Equation 3.76 can be written as
$$Y_{k+1} = (\lambda_0 + c\lambda_1)\frac{\exp(\lambda_1-1) - 1}{\lambda_1-1} - c|\lambda_1|\sqrt{\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)}} + (\exp(\lambda_1-1) - 1)\ln\sigma^2_k + 2|\lambda_1|\sqrt{\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)}}\,\ln|Z_k| \qquad (3.77)$$
Since the random variables of the sequence $\{\ln|Z_k|\}_{k\in\mathbb{N}}$ are iid, and setting, for notational simplicity,
$$\varepsilon := g(|Z_k|) = \ln|Z_k| \qquad (3.78)$$
where $g(x) = \ln x$, Equation 3.77 becomes
$$Y_{k+1} = (\lambda_0 + c\lambda_1)\frac{\exp(\lambda_1-1) - 1}{\lambda_1-1} - c|\lambda_1|\sqrt{\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)}} + (\exp(\lambda_1-1) - 1)\ln\sigma^2_k + 2|\lambda_1|\sqrt{\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)}}\,\varepsilon \qquad (3.79)$$
By applying a well-known theorem on the functions of a random variable, the density function of $\varepsilon$ is determined to be
$$f_\varepsilon(\varepsilon) = \sqrt{\frac{2}{\pi}}\exp\bigg(\varepsilon - \frac{e^{2\varepsilon}}{2}\bigg) \qquad (3.80)$$
By applying the same theorem again, the density function of $Y_{k+1}$ is also determined
$$f_{Y_{k+1}}(Y_{k+1}) = \frac{1}{\sqrt{2\pi}}\,\frac{1}{|\lambda_1|}\sqrt{\frac{2(\lambda_1-1)}{\exp(2(\lambda_1-1)) - 1}}\ \exp\Bigg[\frac{1}{2|\lambda_1|}\sqrt{\frac{2(\lambda_1-1)}{\exp(2(\lambda_1-1)) - 1}}\bigg(Y_{k+1} - (\lambda_0 + c\lambda_1)\frac{\exp(\lambda_1-1) - 1}{\lambda_1-1} + c|\lambda_1|\sqrt{\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)}} - (\exp(\lambda_1-1) - 1)\ln\sigma^2_k\bigg)\Bigg]\ \exp\Bigg[-\frac{1}{2}\exp\Bigg(\frac{1}{|\lambda_1|}\sqrt{\frac{2(\lambda_1-1)}{\exp(2(\lambda_1-1)) - 1}}\bigg(Y_{k+1} - (\lambda_0 + c\lambda_1)\frac{\exp(\lambda_1-1) - 1}{\lambda_1-1} + c|\lambda_1|\sqrt{\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)}} - (\exp(\lambda_1-1) - 1)\ln\sigma^2_k\bigg)\Bigg)\Bigg] \qquad (3.81)$$


From Equation 3.81 it immediately follows that, by using $K$ observations of realised volatility to estimate the parameters $\lambda_0$ and $\lambda_1$, the corresponding likelihood function is exactly Equation 3.74.

Remark 3.18. Since the likelihood function is the product of $K-1$ terms and since by Equation 3.75 each term includes two consecutive volatility observations, the total number of observations for the estimate of $\lambda_0$ and $\lambda_1$ is $K$.

The estimation of the parameters $\lambda_0$ and $\lambda_1$ requires the likelihood function $L(Y; \lambda_0, \lambda_1)$ of Equation 3.74 to be maximised with respect to them.

Since it is a rather complex function to handle, a preliminary study helps to verify whether this optimisation problem can be reduced to one that is equivalent but more tractable in both analytical and numerical terms.
To this end, the following proposition is useful.


Proposition 3.19. The likelihood function $L(Y; \lambda_0, \lambda_1)$ of Equation 3.74 can, equivalently, be rewritten as
$$L(Y; \lambda_0, \lambda_1) = \bigg(\frac{1}{\sqrt{2\pi}}\,\frac{1}{f(\lambda_1)}\bigg)^{K-1}\exp(-\tfrac{1}{2}\Psi(\lambda_0, \lambda_1)) \qquad (3.82)$$
where
$$\Psi(\lambda_0, \lambda_1) = \sum_{k=1}^{K-1}\bigg[\exp\bigg(\frac{\chi_{k+1}(\lambda_0, \lambda_1)}{f(\lambda_1)}\bigg) - \frac{\chi_{k+1}(\lambda_0, \lambda_1)}{f(\lambda_1)}\bigg] \qquad (3.83)$$
$$\chi_{k+1}(\lambda_0, \lambda_1) = Y_{k+1} - (\lambda_0 + c\lambda_1)\frac{\exp(\lambda_1-1) - 1}{\lambda_1-1} + c|\lambda_1|\sqrt{\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)}} - (\exp(\lambda_1-1) - 1)\ln\sigma^2_k \qquad (3.84)$$
$$f(\lambda_1) = |\lambda_1|\sqrt{\frac{\exp(2(\lambda_1-1)) - 1}{2(\lambda_1-1)}} \qquad (3.85)$$
Proof Substituting the left-hand side of Equation 3.84 into the right-hand side of Equation 3.74 gives
$$L(Y; \lambda_0, \lambda_1) = \prod_{k=1}^{K-1}\frac{1}{\sqrt{2\pi}}\,\frac{1}{|\lambda_1|}\sqrt{\frac{2(\lambda_1-1)}{\exp(2(\lambda_1-1)) - 1}}\ \exp\bigg[\frac{1}{2|\lambda_1|}\sqrt{\frac{2(\lambda_1-1)}{\exp(2(\lambda_1-1)) - 1}}\,\chi_{k+1}(\lambda_0, \lambda_1)\bigg]\ \exp\bigg[-\frac{1}{2}\exp\bigg(\frac{1}{|\lambda_1|}\sqrt{\frac{2(\lambda_1-1)}{\exp(2(\lambda_1-1)) - 1}}\,\chi_{k+1}(\lambda_0, \lambda_1)\bigg)\bigg] \qquad (3.86)$$
By substituting the left-hand side of Equation 3.85 into the right-hand side of Equation 3.86, after a little algebra it follows that
$$L(Y; \lambda_0, \lambda_1) = \bigg(\frac{1}{\sqrt{2\pi}}\bigg)^{K-1}\bigg(\frac{1}{f(\lambda_1)}\bigg)^{K-1}\exp\Bigg(-\frac{1}{2}\sum_{k=1}^{K-1}\bigg[\exp\bigg(\frac{\chi_{k+1}(\lambda_0, \lambda_1)}{f(\lambda_1)}\bigg) - \frac{\chi_{k+1}(\lambda_0, \lambda_1)}{f(\lambda_1)}\bigg]\Bigg) \qquad (3.87)$$


Finally, substituting the left-hand side of Equation 3.83 into the right-hand side of Equation 3.87 yields Equation 3.82.

The formula 3.82 of the likelihood function can now be used to identify another objective function, $M(Y; \lambda_0, \lambda_1)$, that has the same optimal points as $L(Y; \lambda_0, \lambda_1)$.

In fact, Equation 3.85 implies $f(\lambda_1) > 0$ and, hence, the function $L(Y; \lambda_0, \lambda_1)$ is positive, so that the following proposition holds.

Proposition 3.20. The vector of parameters $(\lambda_0, \lambda_1)$ represents the point of absolute maximum of the function $L(Y; \lambda_0, \lambda_1)$ if and only if it is the point of absolute minimum of the function $M(Y; \lambda_0, \lambda_1)$ defined as
$$M(Y; \lambda_0, \lambda_1) = \frac{1}{2\pi}[L(Y; \lambda_0, \lambda_1)]^{-2/(K-1)} \qquad (3.88)$$
or, equivalently, as
$$M(Y; \lambda_0, \lambda_1) = (f(\lambda_1))^2\exp\bigg(\frac{\Psi(\lambda_0, \lambda_1)}{K-1}\bigg) \qquad (3.89)$$
Proof The proof is straightforward.

In order to find the point of absolute minimum of the function $M(Y; \lambda_0, \lambda_1)$, the following procedure is performed:

(a) for each fixed $\lambda_1$, find the minimum point of the function of one variable
$$\lambda_0 \mapsto M(Y; \lambda_0, \lambda_1)$$
which will be given by a precise function $\psi(\cdot)$ of $\lambda_1$ and, hence, the following weak inequality will be satisfied
$$M(Y; \psi(\lambda_1), \lambda_1) \leq M(Y; \lambda_0, \lambda_1) \quad \text{for all } \lambda_0 \qquad (3.90)$$

(b) consider the function of one variable
$$\lambda_1 \mapsto M(Y; \psi(\lambda_1), \lambda_1)$$
and find numerically the point of absolute minimum $\lambda_1^*$.

Setting $\lambda_0^* = \psi(\lambda_1^*)$, this procedure leads to
$$M(Y; \lambda_0^*, \lambda_1^*) \leq M(Y; \lambda_0, \lambda_1) \quad \text{for all } \lambda_0, \lambda_1 \qquad (3.91)$$

A QUANTITATIVE FRAMEWORK TO ASSESS THE RISKREWARD PROFILE OF NON-EQUITY PRODUCTS

Notation 3.21. To develop the points listed above it is appropriate to introduce the following notation
$$\varphi(\lambda_1) = \frac{\exp(\lambda_1-1) - 1}{\lambda_1-1} \qquad (3.92)$$
$$O_k = \ln(\sigma^2_k) \qquad (3.93)$$
$$A_{k+1}(\lambda_1) = \frac{O_{k+1} - \exp(\lambda_1-1)O_k}{f(\lambda_1)} \qquad (3.94)$$
$$B(\lambda_1) = c - c\lambda_1\frac{\varphi(\lambda_1)}{f(\lambda_1)} \qquad (3.95)$$
$$C(\lambda_1) = \frac{\varphi(\lambda_1)}{f(\lambda_1)} \qquad (3.96)$$
$$\bar{A}(\lambda_1) = \frac{1}{K-1}\sum_{k=1}^{K-1}A_{k+1}(\lambda_1) \qquad (3.97)$$

With regard to step (a) it is now possible to state the next proposition.

Proposition 3.22. For each fixed $\lambda_1$, the value of $\lambda_0$ which minimises the function $M(Y; \lambda_0, \lambda_1)$ is given by the following function $\psi(\cdot)$ of $\lambda_1$
$$\psi(\lambda_1) = \frac{1}{C(\lambda_1)}\Bigg[B(\lambda_1) + \ln\Bigg(\frac{1}{K-1}\sum_{k=1}^{K-1}\exp(A_{k+1}(\lambda_1))\Bigg)\Bigg] \qquad (3.98)$$

Proof Assume that $\lambda_1$ is fixed. Then, by Equation 3.89, minimising $M(Y; \lambda_0, \lambda_1)$ with respect to $\lambda_0$ is equivalent to minimising $\Psi(\lambda_0, \lambda_1)$ with respect to the same variable, since $M(Y; \lambda_0, \lambda_1)$ is a monotonically increasing function of $\Psi(\lambda_0, \lambda_1)$.

Therefore, the following optimisation problem must be solved
$$\min_{\lambda_0}\Psi(\lambda_0, \lambda_1) \qquad (3.99)$$
where $\Psi(\lambda_0, \lambda_1)$ is defined by Equation 3.83.

First, it has to be proved that the problem 3.99 admits at least one solution.


To this end, after a little algebra the following equality is obtained
$$\frac{\chi_{k+1}(\lambda_0, \lambda_1)}{f(\lambda_1)} = A_{k+1}(\lambda_1) + B(\lambda_1) - \lambda_0 C(\lambda_1) \qquad (3.100)$$
which implies that the quantity $\chi_{k+1}(\lambda_0, \lambda_1)/f(\lambda_1)$ is an affine function of $\lambda_0$. Since in the right-hand side of Equation 3.100 $\lambda_0$ has a non-zero coefficient, the following limit holds
$$\lim_{|\lambda_0|\to+\infty}\bigg|\frac{\chi_{k+1}(\lambda_0, \lambda_1)}{f(\lambda_1)}\bigg| = +\infty \qquad (3.101)$$
Also observe that
$$\lim_{|t|\to+\infty}(e^t - t) = +\infty \qquad (3.102)$$
From Equations 3.101 and 3.102 it follows that
$$\lim_{|\lambda_0|\to+\infty}\bigg[\exp\bigg(\frac{\chi_{k+1}(\lambda_0, \lambda_1)}{f(\lambda_1)}\bigg) - \frac{\chi_{k+1}(\lambda_0, \lambda_1)}{f(\lambda_1)}\bigg] = +\infty$$
which, by summing for $k$ from 1 to $K-1$, leads to
$$\lim_{|\lambda_0|\to+\infty}\sum_{k=1}^{K-1}\bigg[\exp\bigg(\frac{\chi_{k+1}(\lambda_0, \lambda_1)}{f(\lambda_1)}\bigg) - \frac{\chi_{k+1}(\lambda_0, \lambda_1)}{f(\lambda_1)}\bigg] = +\infty$$
and therefore the function $\Psi(\lambda_0, \lambda_1)$ is coercive with respect to the variable $\lambda_0$, ie
$$\lim_{|\lambda_0|\to+\infty}\Psi(\lambda_0, \lambda_1) = +\infty \qquad (3.103)$$
As $\Psi(\lambda_0, \lambda_1)$ is continuous in the variable $\lambda_0$, coercivity implies that the function $\lambda_0 \mapsto \Psi(\lambda_0, \lambda_1)$ admits a minimum and, thus, that the problem 3.99 has at least one solution.

At this point the first-order condition
$$\frac{\partial\Psi(\lambda_0, \lambda_1)}{\partial\lambda_0} = 0 \qquad (3.104)$$
is studied.

Calculating the derivative of Equation 3.83 with respect to $\lambda_0$ gives
$$\frac{\partial\Psi(\lambda_0, \lambda_1)}{\partial\lambda_0} = \sum_{k=1}^{K-1}\exp\bigg(\frac{\chi_{k+1}(\lambda_0, \lambda_1)}{f(\lambda_1)}\bigg)\frac{\partial}{\partial\lambda_0}\bigg(\frac{\chi_{k+1}(\lambda_0, \lambda_1)}{f(\lambda_1)}\bigg) - \sum_{k=1}^{K-1}\frac{\partial}{\partial\lambda_0}\bigg(\frac{\chi_{k+1}(\lambda_0, \lambda_1)}{f(\lambda_1)}\bigg) \qquad (3.105)$$


while the derivative of Equation 3.100 with respect to the same variable $\lambda_0$ is
$$\frac{\partial}{\partial\lambda_0}\bigg(\frac{\chi_{k+1}(\lambda_0, \lambda_1)}{f(\lambda_1)}\bigg) = -C(\lambda_1) \qquad (3.106)$$
Substituting the right-hand side of Equation 3.106 into the right-hand side of Equation 3.105 gives
$$\frac{\partial\Psi(\lambda_0, \lambda_1)}{\partial\lambda_0} = -C(\lambda_1)\sum_{k=1}^{K-1}\exp\bigg(\frac{\chi_{k+1}(\lambda_0, \lambda_1)}{f(\lambda_1)}\bigg) + (K-1)C(\lambda_1) \qquad (3.107)$$
Substituting the right-hand side of Equation 3.107 into the left-hand side of Equation 3.104 yields
$$-C(\lambda_1)\sum_{k=1}^{K-1}\exp\bigg(\frac{\chi_{k+1}(\lambda_0, \lambda_1)}{f(\lambda_1)}\bigg) + (K-1)C(\lambda_1) = 0$$
and hence
$$\sum_{k=1}^{K-1}\exp\bigg(\frac{\chi_{k+1}(\lambda_0, \lambda_1)}{f(\lambda_1)}\bigg) = K-1 \qquad (3.108)$$
Substituting the right-hand side of Equation 3.100 into the left-hand side of Equation 3.108 gives
$$\sum_{k=1}^{K-1}\exp(A_{k+1}(\lambda_1) + B(\lambda_1) - \lambda_0 C(\lambda_1)) = K-1$$
Removing from the sum those terms independent of $k$ and rearranging yields
$$\exp(-B(\lambda_1) + \lambda_0 C(\lambda_1)) = \frac{1}{K-1}\sum_{k=1}^{K-1}\exp(A_{k+1}(\lambda_1))$$
Taking the logarithm, the previous equation becomes
$$-B(\lambda_1) + \lambda_0 C(\lambda_1) = \ln\Bigg(\frac{1}{K-1}\sum_{k=1}^{K-1}\exp(A_{k+1}(\lambda_1))\Bigg) \qquad (3.109)$$
It follows that Equation 3.109, and therefore also Equation 3.104, has only one solution. Given the existence of the minimum for the problem 3.99, this solution is the minimum point of this problem. Expressing this solution as $\lambda_0 = \psi(\lambda_1)$ yields Equation 3.98.


Notation 3.23. Hereafter, when required for computational clarity, Equation 3.98 will be expressed using the index $j$ instead of the index $k$.

Recalling now step (b) of the procedure described above to find the point of absolute minimum of $M(Y; \lambda_0, \lambda_1)$, this function is rewritten as a function of the single variable $\lambda_1$ through the following proposition.

Proposition 3.24. Assume that the parameter $\lambda_0$ takes the value which minimises the function $M(Y; \lambda_0, \lambda_1)$. Then, at this value, $M(Y; \lambda_0, \lambda_1)$ has the following functional form
$$M(Y; \psi(\lambda_1), \lambda_1) = \frac{e^1}{K-1}(f(\lambda_1))^2\sum_{j=1}^{K-1}\exp(A_{j+1}(\lambda_1) - \bar{A}(\lambda_1)) \qquad (3.110)$$
where $\bar{A}(\lambda_1)$ is defined by Equation 3.97.

Proof By assumption
$$\lambda_0 \equiv \psi(\lambda_1)$$
which by Equation 3.98 can be written as
$$\lambda_0 \equiv \frac{1}{C(\lambda_1)}\Bigg[B(\lambda_1) + \ln\Bigg(\frac{1}{K-1}\sum_{j=1}^{K-1}\exp(A_{j+1}(\lambda_1))\Bigg)\Bigg] \qquad (3.111)$$
Substituting the right-hand side of Equation 3.111 into the right-hand side of Equation 3.100 and simplifying gives
$$\frac{\chi_{k+1}(\psi(\lambda_1), \lambda_1)}{f(\lambda_1)} = A_{k+1}(\lambda_1) - \ln\Bigg(\frac{1}{K-1}\sum_{j=1}^{K-1}\exp(A_{j+1}(\lambda_1))\Bigg) \qquad (3.112)$$
From Equation 3.112, after a little algebra, it follows that
$$\sum_{k=1}^{K-1}\exp\bigg(\frac{\chi_{k+1}(\psi(\lambda_1), \lambda_1)}{f(\lambda_1)}\bigg) = K-1 \qquad (3.113)$$


Substituting the right-hand sides of Equations 3.112 and 3.113 into the right-hand side of Equation 3.83 yields
$$\Psi(\psi(\lambda_1), \lambda_1) = (K-1) - \sum_{k=1}^{K-1}\Bigg[A_{k+1}(\lambda_1) - \ln\Bigg(\frac{1}{K-1}\sum_{j=1}^{K-1}\exp(A_{j+1}(\lambda_1))\Bigg)\Bigg] = (K-1) - \sum_{k=1}^{K-1}A_{k+1}(\lambda_1) + (K-1)\ln\Bigg(\frac{\sum_{j=1}^{K-1}\exp(A_{j+1}(\lambda_1))}{K-1}\Bigg)$$
Factorising $(K-1)$, the previous equation becomes
$$\Psi(\psi(\lambda_1), \lambda_1) = (K-1)\Bigg[1 - \frac{1}{K-1}\sum_{k=1}^{K-1}A_{k+1}(\lambda_1) + \ln\Bigg(\frac{1}{K-1}\sum_{j=1}^{K-1}\exp(A_{j+1}(\lambda_1))\Bigg)\Bigg] \qquad (3.114)$$
Substituting the right-hand side of Equation 3.97 into the right-hand side of Equation 3.114 and rearranging gives
$$\Psi(\psi(\lambda_1), \lambda_1) = (K-1)\Bigg[1 + \ln\Bigg(\frac{\exp(-\bar{A}(\lambda_1))}{K-1}\sum_{j=1}^{K-1}\exp(A_{j+1}(\lambda_1))\Bigg)\Bigg] \qquad (3.115)$$
which can be re-expressed as
$$\Psi(\psi(\lambda_1), \lambda_1) = (K-1)\Bigg[1 + \ln\Bigg(\frac{1}{K-1}\sum_{j=1}^{K-1}\exp(A_{j+1}(\lambda_1) - \bar{A}(\lambda_1))\Bigg)\Bigg] \qquad (3.116)$$
Finally, from Equation 3.89, Proposition 3.20 and Equation 3.116 it follows that
$$M(Y; \psi(\lambda_1), \lambda_1) = (f(\lambda_1))^2\exp\Bigg(\frac{K-1}{K-1}\Bigg[1 + \ln\Bigg(\frac{1}{K-1}\sum_{j=1}^{K-1}\exp(A_{j+1}(\lambda_1) - \bar{A}(\lambda_1))\Bigg)\Bigg]\Bigg) \qquad (3.117)$$


and hence
$$M(Y; \psi(\lambda_1), \lambda_1) = \frac{e^1}{K-1}(f(\lambda_1))^2\sum_{j=1}^{K-1}\exp(A_{j+1}(\lambda_1) - \bar{A}(\lambda_1)) \qquad (3.110)$$

At this point, given that $e^1/(K-1)$ is a positive constant, it is possible to state the following proposition.
Proposition 3.25. The value of $\lambda_1$ which minimises the function expressed in Equation 3.110 is the same value which minimises the function $Q(Y; \lambda_1)$ defined as
$$Q(Y; \lambda_1) = (f(\lambda_1))^2\sum_{k=1}^{K-1}\exp(A_{k+1}(\lambda_1) - \bar{A}(\lambda_1)) \qquad (3.118)$$

Once the value of $\lambda_1$ that minimises the function 3.118 has been determined numerically, the optimal value of $\lambda_0$ can be obtained by substituting this optimal value of $\lambda_1$ into Equation 3.98. In fact, in this regard the following proposition holds.

Proposition 3.26. Let $\lambda_1^*$ be the value of the parameter $\lambda_1$ that minimises the function $Q(Y; \lambda_1)$. Then, the value of $\lambda_0$ which minimises the function $M(Y; \lambda_0, \lambda_1)$ is given by
$$\lambda_0^* = -c\lambda_1^* + \frac{1}{C(\lambda_1^*)}\Bigg[\bar{A}(\lambda_1^*) + \ln\Bigg(\frac{1}{K-1}\sum_{k=1}^{K-1}\exp(A_{k+1}(\lambda_1^*) - \bar{A}(\lambda_1^*))\Bigg) + c\Bigg] \qquad (3.119)$$

Proof Let $\lambda_1^*$ be the value of the parameter $\lambda_1$ that minimises the function $Q(Y; \lambda_1)$. Substituting this value into Equation 3.98 gives
$$\lambda_0^* = \frac{1}{C(\lambda_1^*)}\Bigg[B(\lambda_1^*) + \ln\Bigg(\frac{1}{K-1}\sum_{k=1}^{K-1}\exp(A_{k+1}(\lambda_1^*))\Bigg)\Bigg] \qquad (3.120)$$
By using Equations 3.95 and 3.96 and simplifying, it follows that
$$\lambda_0^* = -c\lambda_1^* + \frac{1}{C(\lambda_1^*)}\Bigg[\ln\Bigg(\frac{1}{K-1}\sum_{k=1}^{K-1}\exp(A_{k+1}(\lambda_1^*))\Bigg) + c\Bigg] \qquad (3.121)$$


and finally, extracting $\bar{A}(\lambda_1^*)$ from the logarithm, Equation 3.119 follows.
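Propositions 3.25 and 3.26 suggest a simple estimation recipe: minimise $Q(Y; \lambda_1)$ of Equation 3.118 in the single variable $\lambda_1$ and recover $\lambda_0$ from Equation 3.119. The sketch below applies it to data simulated from the exact discrete dynamics of Equation 3.79 (the true parameter values, the grid bounds and the sample size are illustrative assumptions; a coarse grid search stands in for a proper one-dimensional optimiser, and $Q$ is evaluated in log-space to avoid overflow):

```python
import numpy as np

rng = np.random.default_rng(0)
c = -1.2704                                    # c = E(ln Z^2), Equation 3.63

def f(l1):                                     # f(lambda_1), Equation 3.85
    return abs(l1) * np.sqrt((np.exp(2 * (l1 - 1)) - 1) / (2 * (l1 - 1)))

def phi(l1):                                   # varphi(lambda_1), Equation 3.92
    return (np.exp(l1 - 1) - 1) / (l1 - 1)

# simulate K observations O_k = ln sigma_k^2 from the dynamics of Equation 3.79
l0_true, l1_true, K = 0.2, 0.9, 5000
O = np.empty(K)
O[0] = (l0_true + c * l1_true) / (1 - l1_true)   # start at the stationary mean
eps = np.log(np.abs(rng.standard_normal(K - 1)))  # ln|Z_k|
for k in range(K - 1):
    O[k + 1] = O[k] + ((l0_true + c * l1_true) * phi(l1_true)
                       - c * f(l1_true)
                       + (np.exp(l1_true - 1) - 1) * O[k]
                       + 2 * f(l1_true) * eps[k])

def A(l1):                                     # A_{k+1}(lambda_1), Equation 3.94
    return (O[1:] - np.exp(l1 - 1) * O[:-1]) / f(l1)

def logQ(l1):                                  # log of Q(Y; lambda_1), Eq. 3.118
    a = A(l1)
    return (2 * np.log(f(l1)) + (a.max() - a.mean())
            + np.log(np.sum(np.exp(a - a.max()))))

grid = np.linspace(0.05, 0.995, 380)           # grid for lambda_1 (assumption)
l1_hat = grid[np.argmin([logQ(l1) for l1 in grid])]

a = A(l1_hat)                                  # recover lambda_0, Equation 3.119
C = phi(l1_hat) / f(l1_hat)
l0_hat = -c * l1_hat + (a.mean() + np.log(np.mean(np.exp(a - a.mean()))) + c) / C
```

On this synthetic sample the estimates should fall close to the true values; in practice the same routine would be re-run on each rolling window of realised-volatility observations.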

In this work the $K$ observations of annualised volatility that are used to calculate the likelihood function are determined through the simulation model described in Section 3.3. Each of these observations is calculated using the last 252 daily returns of a hypothetical product managed by an automatic asset manager.

The choice of the optimal value of $K$ is closely related to the properties of the predictive model used and the nature of the variable to be predicted.

In fact, an important property of the confidence intervals obtained using the distributive properties of the diffusion process corresponding to the continuous limit of the M-Garch(1,1) is that the parameters of these prediction intervals can be estimated in a robust way using a limited number of observations (so-called "poor sigma-algebras"). This property is particularly valuable in cases where it is necessary to make predictions over consecutive time horizons, because the estimation of the parameters, and thus also the prediction interval, immediately benefits from the inclusion of more recent observations and the abandonment of those that are further back in time (the so-called "rolling window"), hence updating the bounds of the prediction interval consistently and dynamically (so-called "adaptivity").
Figure 3.5 shows a trajectory of annualised volatility associated with the risk budget [5%, 20%] simulated by starting from the stochastic differential equations 3.5 and 3.1, and the corresponding predictive band determined by joining the $H-K$ daily prediction intervals whose parameters were estimated as described in this section.20
Figure 3.5 clearly shows the behaviour of the adaptive prediction
intervals obtained from the diffusion limit of the M-Garch(1,1).
The above-mentioned properties (ie, use of "poor" sigma algebras and "adaptivity") have to be qualified in relation to the nature of the

THE SECOND PILLAR: DEGREE OF RISK

Figure 3.5 Volatility and its predictive band based on the diffusion limit of the M-Garch(1,1). [Figure: the realised volatility path (%) plotted over time, together with the upper and lower bounds of the prediction interval.]

financial variable to be predicted, that is, the volatility. Since the


volatility (or standard deviation) is the square root of the second
central moment of a random variable, its forecast through the dif-
fusion limit of the M-Garch(1,1) requires a number of observations
K higher than the number of observations needed to obtain robust
estimates of a return (ie, of a mean value) (Minenna 2003). How-
ever, reliable predictions with high confidence can be obtained for
values of K much smaller than the width of the time window for
observing the returns to be used for the calculation of a realisation
of annualised volatility.
To find the value of $K$ that provides the best balance between the level of confidence associated with the parameter estimation and the use of a relatively limited data set (hence allowing benefit to be gained from the above-mentioned properties of diffusive Garch models), specific numerical tests were performed.

In particular, with $\tau = 252$ it is reasonable to set a maximum admissible value of $K$ equal to half of $\tau$ (ie, 126) and a minimum admissible value of $K$ equal to 21, the latter determined in consideration of the above-described characteristics of the variable to be predicted.

By varying K between 21 and 126 in steps of length 21 (ie, each


step is around one month in length), the numerical tests have
shown that for standard confidence levels (above 90%), the value
of K that is most compatible with any given confidence level is 63,
corresponding to three months of observations of volatility.21
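The numerical tests described above can be sketched as a simple empirical-coverage measurement over a rolling window. The band-building routine `band_fn` below is a hypothetical stand-in for the M-Garch(1,1) prediction interval of Section 3.4:

```python
import numpy as np

def empirical_coverage(vol, band_fn, K):
    """Sketch of the numerical test used to select K: slide a window of the
    last K volatility observations along the path, build the one-day-ahead
    prediction interval from each window via band_fn (a stand-in for the
    M-Garch(1,1) band of Section 3.4), and measure how often the next
    observation actually falls inside it."""
    hits, trials = 0, 0
    for k in range(K, len(vol)):
        lo, hi = band_fn(vol[k - K:k])   # interval built from the K previous days
        trials += 1
        hits += lo <= vol[k] <= hi       # a miss would be a management failure
    return hits / trials
```

Repeating this for $K = 21, 42, \dots, 126$ and comparing the measured coverage with the target confidence level reproduces the logic of the test.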

3.5 MANAGEMENT FAILURES AND THE OPTIMAL GRID


The calibration of the optimal set of volatility intervals takes place through the definition and resolution of a complex stochastic non-linear programming problem (Øksendal 2010) that depends on the $n$-dimensional vector of the management failures associated with an equal number of volatility intervals which varies with $n$.

The complexity of the problem arises from the fact that, as discussed in Section 3.1, the optimal grid must ensure that all volatility intervals simultaneously satisfy, and to substantially the same extent, the requirement of market feasibility.
In other words, in the optimal grid the number of intervals and
their extremes are such that
1. for all intervals an excessive (abnormal) number of manage-
ment failures can be ruled out,
2. except for negligible differences, the number of management
failures is the same (homogeneous) for all intervals.
These two conditions characterise the problem of interest as a
vectorial problem of Pareto optimum: the close interdependence
between the intervals of any grid (which are contiguous) indicates
that it is not possible to determine the optimal grid by examining
each interval separately.
The two requirements listed above must be interpreted bearing
this point in mind.
The optimal grid must represent a set of risk budgets such that
the impact of the management failures is under control for all
intervals and basically in the same way for all, in the sense that
abrupt and significant changes in market conditions determine the
same number of outliers for each interval.
It is worth noting that both requirements, besides being reason-
able, are also desirable.
For example, imagine a grid of only two intervals, with an inci-
dence of management failures equal to 90% in the first interval and

4% in the second one. This situation indicates that the widths of the
two intervals are not properly calibrated.
The numbers in this example signal that (for reasons which will
become clear later) the width of the first interval is excessive, while
the second interval is too narrow. Recalling from Section 3.2 that the
model for the automatic asset manager assumes the use of the whole
predetermined risk budget, it is clear that the player who operates on
the first interval has at their disposal a boundless scope in deciding
inter-temporal asset allocation, while for the player who operates
on the second interval the space of possible management choices
may be too limited. In the real world, the effect of this imbalance
between the two intervals is likely to be a systematic selection of the
first interval and the consequent disappearance of the second one.
The example above shows the fundamental aspect of the concept
of market feasibility. Even if, for ease of illustration, so far the mar-
ket feasibility has been often presented as a property of the single
volatility interval, in reality it is a requirement that becomes mean-
ingful (and, therefore, must be verified) for each interval only in
comparison with the whole set of volatility intervals in a grid.
An incidence of management failures equal to 4% may seem low
enough to be compatible with the market feasibility of the second
interval, but it cannot be assessed independently. And, in fact, a
comparison with that of the first interval shows that, overall, the grid
does not realise a market-feasible partition of the space of possible
volatilities.
This aids a better understanding of why in the optimal grid the
management failures must be more or less the same for all intervals.
In fact, if the intervals of a grid had a different number of man-
agement failures, the grid would be not optimal, even if each single
interval was market feasible.
The modelling assumptions on the evolution of the volatility of a
hypothetical product managed by an automatic asset manager imply
that, in the simulation of the volatility realised by products with
different risk budgets, the market shocks have identical arrival times
and are filtered according to these different risk budgets.
Consequently, different incidences of management failures across
these products (even though all failures are apparently contained)
show a different sensitivity by the corresponding risk budgets to
changes in market conditions: some will register an overreaction

leading to management failures even in the face of mild shocks;


some, on the other hand, will be too stable, yet others will properly
discriminate between shocks which have little significance and those
which, because of their size, actually cause an unexpected behaviour
of the volatility.
In addition, as is well known, in conjunction with the most sig-
nificant shocks the correlations between various financial variables
typically increase; in the case of interest it is therefore reasonable
to expect that if a grid of volatility intervals is suitably calibrated,
the management failures will occur for all intervals due to the same
shocks.
From the above, it is clear that it does not make sense to determine
a priori the maximum acceptable number of management failures for
a single interval without including others in the analysis. It is exactly
when switching to the study of the problem in a vector dimension
that it becomes clear that the market feasibility for the entire set of
intervals can be achieved only if this maximum is the same for all
intervals.

3.5.1 Definition of management failures and introduction to the calibration problem
A management failure occurs when a hypothetical product managed
by an automatic asset manager who operates to meet a given risk
budget (ie, a specific volatility interval) produces a volatility that is
not in line with the expectations of the market.
Assuming that the market formulates its expectations on future
volatility by using the predictive model of Section 3.4 and based on
the diffusion limit of the M-Garch(1,1), at every day k, information
contained in a limited number of observations of realised volatility is
used to determine a prediction interval of the value that this financial
variable can assume with a reasonable confidence at the next day,
k + 1.
If the volatility realised at k + 1 does not fall within this prediction
interval, then a management failure will occur.
Formally, the following definitions are given.
Definition 3.27. Given the space of possible volatilities $[0,+\infty[$, the set of partitions of size $n$, $n \geq 2$, denoted by $\Sigma^{(n)}_{[0,+\infty[}$, is defined as

$$\Sigma^{(n)}_{[0,+\infty[} = \{(\sigma_1,\dots,\sigma_{n-1}) \in \mathbb{R}^{n-1} : 0 < \sigma_1 < \dots < \sigma_{n-1} < +\infty\} \quad (3.122)$$

where each element $(\sigma_1,\dots,\sigma_{n-1}) \in \Sigma^{(n)}_{[0,+\infty[}$ partitions $[0,+\infty[$ into $n$ risk budgets, that is, into $n$ consecutive volatility intervals, ie

$$[0,\sigma_1],\ [\sigma_1,\sigma_2],\ \dots,\ [\sigma_{n-1},+\infty[ \quad (3.123)$$
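As a minimal sketch, a partition element of Definition 3.27 can be mapped to the intervals of Equation 3.123 as follows (the function name is hypothetical):

```python
def partition(cuts, upper=float("inf")):
    """Sketch of Definition 3.27 / Equation 3.123: turn strictly increasing
    cut points (sigma_1, ..., sigma_{n-1}) into the n consecutive volatility
    intervals that partition [0, upper[."""
    assert all(a < b for a, b in zip(cuts, cuts[1:])), "cuts must increase"
    edges = [0.0, *cuts, upper]
    return list(zip(edges[:-1], edges[1:]))
```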


Definition 3.28. Given an element $(\sigma_1,\dots,\sigma_{n-1}) \in \Sigma^{(n)}_{[0,+\infty[}$, a management failure, denoted by $\mathrm{mf}^{(i,j)}_{k+1}$, is said to occur at day $k+1$ with respect to the $j$th interval ($j = 1, 2, \dots, n$) and over the $i$th trajectory ($i = 1, 2, \dots, m$), if one of the following two inequalities holds

$$\sigma^{(i,j)}_{k+1} < G^{(i,j)}_{k+1,\min} \quad (3.124)$$
$$\sigma^{(i,j)}_{k+1} > G^{(i,j)}_{k+1,\max} \quad (3.125)$$

where
• $\sigma^{(i,j)}_{k+1}$ is the volatility realised at day $k+1$;
• $G^{(i,j)}_{k+1,\min}$ and $G^{(i,j)}_{k+1,\max}$ are the bounds of the prediction interval for the volatility at $k+1$.

The value of the volatility realised at k + 1 is determined by apply-


ing Equation 3.8 to the latest daily returns of the product observed
along the ith trajectory (i = 1, 2, . . . , m), while the bounds of the
prediction interval for the volatility at k + 1 are determined using
Equations 3.65 and 3.66, in which the parameters $\theta_0$ and $\theta_1$ are estimated by maximising the likelihood function in 3.74 built with the last $K$ observations of realised volatility along the $i$th trajectory ($i = 1, 2, \dots, m$), ie

$$\{\sigma^{(i,j)}_{k+1-K},\ \sigma^{(i,j)}_{k+1-(K-1)},\ \dots,\ \sigma^{(i,j)}_{k}\}$$

Given that any trajectory of realised volatility is composed of $H = N - \tau + 1$ observations and, as mentioned above, any forecast of the volatility at $k+1$ requires as input the last $K$ values of volatility, the total number of predictions that can be made (and, therefore, also the maximum possible number of management failures along a given path) is equal to $M = H - K = N - \tau + 1 - K$.
More generally, the following propositions hold.
Proposition 3.29. Given an element $(\sigma_1,\dots,\sigma_{n-1}) \in \Sigma^{(n)}_{[0,+\infty[}$, the maximum number of management failures which could occur in correspondence of each volatility interval is $mM$ or, equivalently, $m(H-K)$ or, equivalently, $m(N-\tau+1-K)$.

Proposition 3.30. Given an element $(\sigma_1,\dots,\sigma_{n-1}) \in \Sigma^{(n)}_{[0,+\infty[}$, the total number of management failures occurring in correspondence of each volatility interval is denoted by $\mathrm{mf}^{(j)}$, $j = 1,\dots,n$, and is equal to

$$\mathrm{mf}^{(j)} = \sum_{i=1}^{m}\ \sum_{k+1=\tau+K}^{N} \mathbf{1}_{\{\sigma^{(i,j)}_{k+1} < G^{(i,j)}_{k+1,\min}\}\,\cup\,\{\sigma^{(i,j)}_{k+1} > G^{(i,j)}_{k+1,\max}\}} \quad (3.126)$$
Proposition 3.31. Given an element $(\sigma_1,\dots,\sigma_{n-1}) \in \Sigma^{(n)}_{[0,+\infty[}$, the percentage of management failures occurring in correspondence of each volatility interval is denoted by $\overline{\mathrm{mf}}^{(j)}$, $j = 1,\dots,n$, and is equal to

$$\overline{\mathrm{mf}}^{(j)} = \frac{\mathrm{mf}^{(j)}}{mM} \quad (3.127)$$
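Equations 3.126 and 3.127 translate directly into a counting routine. The array layout below (one row per trajectory, one column per forecast day) is an assumption of this sketch:

```python
import numpy as np

def failure_rate(sigma, g_min, g_max):
    """Sketch of Equations 3.126-3.127: count management failures for one
    volatility interval. sigma, g_min, g_max are (m, M)-shaped arrays holding,
    for each of m trajectories, the M realised volatilities and the matching
    prediction-interval bounds (the array layout is an assumption)."""
    failures = (sigma < g_min) | (sigma > g_max)   # indicators of Eq. 3.124/3.125
    mf_total = int(failures.sum())                 # mf^(j), Eq. 3.126
    m, M = sigma.shape
    return mf_total, mf_total / (m * M)            # percentage, Eq. 3.127
```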
The calibration of the intervals into which to divide a given space
of possible volatilities requires the solution of a stochastic non-linear
programming problem that depends both on the number of intervals
and on the extremes that identify them. In fact, the market feasibility
must be sought simultaneously for an entire n-tuple of risk budgets
in the perspective of finding that value of n and that vector of volatil-
ities which meet the requirements set forth in points (1) and (2) of
Section 3.5, namely the exclusion of an abnormal number of man-
agement failures for each interval and the substantial equality in the
number of failures for all intervals.
At first glance, it seems that the two conditions require the solu-
tion of two different problems: the first is an optimisation problem
that requires minimising a suitable monotonically increasing func-
tion of the management failures of all intervals, while the second
is a problem of constrained nature which requires finding the grid
that, except for some marginal differences, equals the number of
management failures for all volatility intervals within it.
However, the close interconnection and equivalence of the two
problems will be shown. This can be anticipated in the light of the
arguments set out in Section 3.5.
For any $n$, since the intervals dividing the space of the possible volatilities are consecutive, each pair of adjacent intervals has one extreme in common. As the management failures of any interval grow with its relative width (as explained in Section 3.5.2), attention must be paid to the fact that the revision of an interval is reflected, all other things being equal, on its two neighbours.22

In other words, when moving to the contextual analysis of all


intervals, it emerges that the reduction (enlargement) of a specific
interval implies the enlargement (reduction) of other intervals result-
ing in converse changes in the incidence of management failures
associated with them.
This aspect leads to the exclusion of trivial configurations (eg,
excessive reduction of an interval) as they may be incompatible with
the overall market feasibility of the grid. At the same time it ensures
that, in accordance with the constraint of increasing absolute widths,
the risk budgets of automatic asset managers active in any of the
intervals are properly balanced in the sense that, in the transition
from the theoretical model of Section 3.2 to the actual reality of the
markets, it is possible to review the composition of the portfolio
under management remaining compliant with the stated budgets.
Focusing on the first requirement for the optimal grid, the need for market feasibility to be satisfied simultaneously by all the volatility intervals suggests that the objective function of the optimisation problem be defined in a prudential way. In this approach the solution will search through all possible sets of partitions, as $n$ varies and, for any $n$, as $(\sigma_1,\dots,\sigma_{n-1})$ varies in $\Sigma^{(n)}_{[0,+\infty[}$, for those that minimise the percentage of management failures that occur in the interval that has the highest incidence of failure.
The choice of an objective function that minimises the manage-
ment failures associated with the worst volatility interval in
terms of market feasibility is explained by the fact that the minimi-
sation of the number of outliers of this interval indirectly imposes an
upper limit on the number of management failures that can occur in
all other intervals. Thus, if the solution of the optimisation problem
is compatible with the market feasibility of the worst interval, this
solution will automatically be compatible with the market feasibility
of all other volatility intervals.
Formally, the problem described above can be defined as follows.

Definition 3.32. Let

$$\Sigma_{[0,+\infty[} = \bigcup_{n \geq 2} \Sigma^{(n)}_{[0,+\infty[} \quad (3.128)$$

be the set of all possible partitions of the volatility space $[0,+\infty[$ as $n$


varies. Then, a grid of volatility intervals is said to be not abnormal

if it solves the following optimisation problem

$$\min_{(\sigma_1,\dots,\sigma_{n-1}) \in \Sigma_{[0,+\infty[}}\ \max_{j=1,\dots,n}\ \big(\overline{\mathrm{mf}}^{(j)}\big) \quad (3.129)$$
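For a fixed $n$, the minimax criterion of Equation 3.129 can be sketched as a brute-force search over a finite menu of candidate partition points. Here `rate_fn`, which returns the per-interval failure percentages for a candidate grid (in practice obtained via simulation), is a hypothetical input:

```python
from itertools import combinations

def best_grid(candidate_points, n, rate_fn):
    """Brute-force sketch of the minimax problem in Equation 3.129 for a
    fixed number n of intervals: over all grids built from a finite menu of
    candidate partition points, keep the one whose worst interval has the
    lowest management-failure percentage. rate_fn(grid) is assumed to
    return the list of per-interval failure percentages."""
    best, best_val = None, float("inf")
    for grid in combinations(sorted(candidate_points), n - 1):
        worst = max(rate_fn(grid))          # max_j of mf-bar^(j)
        if worst < best_val:
            best, best_val = grid, worst    # keep the grid with the best worst case
    return best, best_val
```

An exhaustive search is only feasible for small menus; it is used here to make the structure of the vector (Pareto-type) problem concrete, not as the book's solution method.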

The problem set by Equation 3.129 shows that the number n of


the intervals is one of the variables to be optimised. While the reader
is referred to subsequent sections for more details on this subject,
here it is sufficient to point out that, in intuitive terms, the opti-
mal n will have to contribute to the minimisation of the objective
function defined above and, at the same time, comply with certain
requirements arising from the ultimate purpose of the calibration
problem, ie, the mapping of a given number of volatility intervals
into a corresponding qualitative scale aimed at providing an objec-
tive and easily understandable representation of the level of risk of
a non-equity product.
It follows that the presence of an exogenous constraint in terms of
minimum and maximum values admissible for n is intrinsic to the
nature of the problem. In fact, beyond certain thresholds, the smaller
(greater) amount of informative detail becomes insufficient (exces-
sive) and hinders potential investors from an effective comparison
and distinction between the degrees of risk of different products.
The second condition required for the optimal grid is formalised by the following simple definition.
Definition 3.33. A grid of volatility intervals belonging to $\Sigma_{[0,+\infty[}$ is said to be homogeneous if its intervals meet the following constraint

$$\overline{\mathrm{mf}}^{(j_1)} \simeq \overline{\mathrm{mf}}^{(j_2)} \quad \text{for all } j_1 \neq j_2 \quad (3.130)$$
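As a minimal sketch, the homogeneity constraint of Equation 3.130 amounts to checking that the per-interval failure percentages coincide up to a tolerance; the tolerance value is an assumption, since the text only requires "negligible differences":

```python
def is_homogeneous(rates, tol=0.01):
    """Sketch of the homogeneity constraint (Equation 3.130): the
    per-interval failure percentages must coincide up to a tolerance
    tol (the tolerance level itself is an assumption)."""
    return max(rates) - min(rates) <= tol
```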

The various steps for the calibration of the optimal grid are given
in the following sections.
To this end, some basic results are presented in Section 3.5.2. These
results are then used to identify a non-abnormal homogeneous grid
on a reduced space of volatilities which is strictly contained in the
space [0, +[ (Section 3.5.3). Finally, Section 3.5.4 returns to the
entire space of possible volatilities, and shows how, in order for it to
admit a grid with the same properties as that obtained on the reduced
space, the first and the last interval should be determined accord-
ing to criteria independent of any consideration on the incidence
of management failures. Once these two intervals have been suit-
ably determined, the identification of the grid will be closely related

to the optimal number of intervals in which to divide the space of


volatilities. Such a number will have to be chosen by balancing the
need to minimise the management failures for all intermediate inter-
vals and the aim to provide a useful and understandable disclosure
to investors interested in knowing the degree of risk of different
products.

3.5.2 Relation between relative widths and management failures
The number of management failures associated with a given volatil-
ity interval depends on its width.
To understand this basic relation, the relative width of the
volatility intervals, as defined below, has to be considered.
Definition 3.34. The relative width of the volatility interval $[\sigma_A,\sigma_B]$, with $0 < \sigma_A < \sigma_B < +\infty$, is denoted by $\Delta_{[\sigma_A,\sigma_B]}$ and corresponds to the following quantity

$$\Delta_{[\sigma_A,\sigma_B]} = \frac{\sigma_B}{\sigma_A} \quad (3.131)$$
Definition 3.34 allows an important result to be stated about the
relation of proportionality between the volatility realised by auto-
matic asset managers that operate on two volatility intervals with
the same relative width.
Indeed, the following lemma holds.
Lemma 3.35. Let $[\sigma_A,\sigma_B]$, with $0 < \sigma_A < \sigma_B < +\infty$, and $[\sigma_C,\sigma_D]$, with $0 < \sigma_C < \sigma_D < +\infty$, be two volatility intervals with the same relative width, ie

$$\Delta_{[\sigma_A,\sigma_B]} = \Delta_{[\sigma_C,\sigma_D]} \quad (3.132)$$

Then there exists $\lambda > 0$ such that

$$\sigma^{(i,[\sigma_C,\sigma_D])}_{k} \simeq \lambda\,\sigma^{(i,[\sigma_A,\sigma_B])}_{k}, \qquad k = \tau,\dots,N;\ i = 1,2,\dots,m \quad (3.133)$$

where

$$\lambda = \frac{\sigma_C}{\sigma_A} = \frac{\sigma_D}{\sigma_B} \quad (3.134)$$

Proof By assumption, the intervals $[\sigma_A,\sigma_B]$ and $[\sigma_C,\sigma_D]$ have the same relative width; hence, by Equation 3.131, there exists $\lambda > 0$ such that

$$\lambda = \frac{\sigma_C}{\sigma_A} = \frac{\sigma_D}{\sigma_B} \quad (3.134)$$

or, equivalently

$$\sigma_C = \lambda\sigma_A, \qquad \sigma_D = \lambda\sigma_B \quad (3.135)$$

Let $\alpha^{[\sigma_A,\sigma_B]}$, $\beta^{[\sigma_A,\sigma_B]}$ and $v^{[\sigma_A,\sigma_B]}$ be the parameters of the stochastic differential equation 3.1, which refers to the automatic asset manager that operates on the interval $[\sigma_A,\sigma_B]$. The process of the variance $\{(\sigma^{[\sigma_A,\sigma_B]}_t)^2\}_{t\geq0}$ is therefore described by the following stochastic differential equation

$$\mathrm{d}(\sigma^{[\sigma_A,\sigma_B]}_t)^2 = \alpha^{[\sigma_A,\sigma_B]}\big(\beta^{[\sigma_A,\sigma_B]} - (\sigma^{[\sigma_A,\sigma_B]}_t)^2\big)\,\mathrm{d}t + v^{[\sigma_A,\sigma_B]}\,\sigma^{[\sigma_A,\sigma_B]}_t\,\mathrm{d}W_{2,t} \quad (3.136)$$

Similarly, let $\alpha^{[\sigma_C,\sigma_D]}$, $\beta^{[\sigma_C,\sigma_D]}$ and $v^{[\sigma_C,\sigma_D]}$ be the parameters of the same stochastic differential equation referring to the automatic asset manager that operates on the interval $[\sigma_C,\sigma_D]$. The process of the variance $\{(\sigma^{[\sigma_C,\sigma_D]}_t)^2\}_{t\geq0}$ is then described by the following stochastic differential equation

$$\mathrm{d}(\sigma^{[\sigma_C,\sigma_D]}_t)^2 = \alpha^{[\sigma_C,\sigma_D]}\big(\beta^{[\sigma_C,\sigma_D]} - (\sigma^{[\sigma_C,\sigma_D]}_t)^2\big)\,\mathrm{d}t + v^{[\sigma_C,\sigma_D]}\,\sigma^{[\sigma_C,\sigma_D]}_t\,\mathrm{d}W_{2,t} \quad (3.137)$$

First, it is necessary to find the relations between the parameters $\alpha^{[\sigma_A,\sigma_B]}$, $\beta^{[\sigma_A,\sigma_B]}$, $v^{[\sigma_A,\sigma_B]}$ and the parameters $\alpha^{[\sigma_C,\sigma_D]}$, $\beta^{[\sigma_C,\sigma_D]}$, $v^{[\sigma_C,\sigma_D]}$.

From Equation 3.3, it follows that

$$\alpha^{[\sigma_C,\sigma_D]} = \alpha^{[\sigma_A,\sigma_B]} \quad (3.138)$$

and from Equations 3.2 and 3.135 it follows that

$$\beta^{[\sigma_C,\sigma_D]} = \lambda^2\,\beta^{[\sigma_A,\sigma_B]} \quad (3.139)$$

Consider now the stochastic process $\{X_t\}_{t\geq0}$ defined as

$$X_t = \lambda\,\sigma^{[\sigma_A,\sigma_B]}_t \quad (3.140)$$

Since $\lambda$ is constant, from Itô's Lemma and Equation 3.136 it follows that

$$\mathrm{d}X_t^2 = \alpha^{[\sigma_A,\sigma_B]}\big(\lambda^2\beta^{[\sigma_A,\sigma_B]} - \lambda^2(\sigma^{[\sigma_A,\sigma_B]}_t)^2\big)\,\mathrm{d}t + \lambda^2 v^{[\sigma_A,\sigma_B]}\,\sigma^{[\sigma_A,\sigma_B]}_t\,\mathrm{d}W_{2,t} \quad (3.141)$$

Substituting Equations 3.138-3.140 into Equation 3.141 gives

$$\mathrm{d}X_t^2 = \alpha^{[\sigma_C,\sigma_D]}\big[\beta^{[\sigma_C,\sigma_D]} - X_t^2\big]\,\mathrm{d}t + \lambda v^{[\sigma_A,\sigma_B]}\,X_t\,\mathrm{d}W_{2,t} \quad (3.142)$$

It is now useful to recall that the calibration of $v^{[\sigma_A,\sigma_B]}$ provided by Equation 3.4 is such that the probability mass of the stationary distribution of $(\sigma^{[\sigma_A,\sigma_B]}_t)^2$ which lies outside the interval $[\sigma_A^2,\sigma_B^2]$ is equal to 1%.

From Equation 3.140 it follows that the probability mass of the stationary distribution of $X_t^2 = \lambda^2(\sigma^{[\sigma_A,\sigma_B]}_t)^2$ which lies outside the interval $[\lambda^2\sigma_A^2, \lambda^2\sigma_B^2]$ is also equal to 1%.

This equality and the fact that by Equation 3.135 it holds that

$$[\sigma_C^2, \sigma_D^2] = [\lambda^2\sigma_A^2, \lambda^2\sigma_B^2]$$

lead to the following equality

$$F\bigg(\sigma_C^2;\ \beta^{[\sigma_C,\sigma_D]},\ \frac{(\lambda v^{[\sigma_A,\sigma_B]})^2}{2\alpha^{[\sigma_C,\sigma_D]}}\bigg) + \bigg[1 - F\bigg(\sigma_D^2;\ \beta^{[\sigma_C,\sigma_D]},\ \frac{(\lambda v^{[\sigma_A,\sigma_B]})^2}{2\alpha^{[\sigma_C,\sigma_D]}}\bigg)\bigg] = 1\% \quad (3.143)$$

By definition (see Section 3.2) the value of the parameter $v^{[\sigma_C,\sigma_D]}$ appearing in the right-hand side of Equation 3.137 must satisfy Equation 3.4, meaning that the following equality has to be verified

$$F\bigg(\sigma_C^2;\ \beta^{[\sigma_C,\sigma_D]},\ \frac{(v^{[\sigma_C,\sigma_D]})^2}{2\alpha^{[\sigma_C,\sigma_D]}}\bigg) + \bigg[1 - F\bigg(\sigma_D^2;\ \beta^{[\sigma_C,\sigma_D]},\ \frac{(v^{[\sigma_C,\sigma_D]})^2}{2\alpha^{[\sigma_C,\sigma_D]}}\bigg)\bigg] = 1\% \quad (3.144)$$
From Equations 3.143 and 3.144, recalling that the parameter $v^{[\sigma_C,\sigma_D]}$ is unique, it follows that

$$v^{[\sigma_C,\sigma_D]} = \lambda v^{[\sigma_A,\sigma_B]} \quad (3.145)$$

Substituting the above equality into Equation 3.142 gives

$$\mathrm{d}X_t^2 = \alpha^{[\sigma_C,\sigma_D]}\big[\beta^{[\sigma_C,\sigma_D]} - X_t^2\big]\,\mathrm{d}t + v^{[\sigma_C,\sigma_D]}\,X_t\,\mathrm{d}W_{2,t} \quad (3.146)$$

The comparison between Equations 3.137 and 3.146 emphasises that the processes $X_t^2$ and $(\sigma^{[\sigma_C,\sigma_D]}_t)^2$ coincide, and hence, by Equation 3.140, it follows that

$$\sigma^{[\sigma_C,\sigma_D]}_t = \lambda\,\sigma^{[\sigma_A,\sigma_B]}_t \quad (3.147)$$

Recalling the stochastic differential equation 3.5, the time evolution of the value of products managed by the automatic asset managers with the risk budgets identified, respectively, by the intervals $[\sigma_A,\sigma_B]$ and $[\sigma_C,\sigma_D]$ can be described by

$$\mathrm{d}S^{[\sigma_A,\sigma_B]}_t = rS^{[\sigma_A,\sigma_B]}_t\,\mathrm{d}t + \sigma^{[\sigma_A,\sigma_B]}_t S^{[\sigma_A,\sigma_B]}_t\,\mathrm{d}W_{1,t} \quad (3.148)$$
$$\mathrm{d}S^{[\sigma_C,\sigma_D]}_t = rS^{[\sigma_C,\sigma_D]}_t\,\mathrm{d}t + \sigma^{[\sigma_C,\sigma_D]}_t S^{[\sigma_C,\sigma_D]}_t\,\mathrm{d}W_{1,t} \quad (3.149)$$

Substituting Equation 3.147 into Equation 3.149 yields

$$\mathrm{d}S^{[\sigma_C,\sigma_D]}_t = rS^{[\sigma_C,\sigma_D]}_t\,\mathrm{d}t + \lambda\,\sigma^{[\sigma_A,\sigma_B]}_t S^{[\sigma_C,\sigma_D]}_t\,\mathrm{d}W_{1,t} \quad (3.150)$$

With regard to a given trajectory $i$, the approximations through the Euler scheme (Glasserman 2004) of Equations 3.148 and 3.150 with respect to the time instants $t_k$ defined in Equation 3.6 are given, respectively, by

$$S^{(i,[\sigma_A,\sigma_B])}_{t_k} - S^{(i,[\sigma_A,\sigma_B])}_{t_{k-1}} \simeq rS^{(i,[\sigma_A,\sigma_B])}_{t_{k-1}}\Delta t + \sigma^{(i,[\sigma_A,\sigma_B])}_{t_{k-1}}\,S^{(i,[\sigma_A,\sigma_B])}_{t_{k-1}}\sqrt{\Delta t}\,\epsilon_{t_k}$$

and

$$S^{(i,[\sigma_C,\sigma_D])}_{t_k} - S^{(i,[\sigma_C,\sigma_D])}_{t_{k-1}} \simeq rS^{(i,[\sigma_C,\sigma_D])}_{t_{k-1}}\Delta t + \lambda\,\sigma^{(i,[\sigma_A,\sigma_B])}_{t_{k-1}}\,S^{(i,[\sigma_C,\sigma_D])}_{t_{k-1}}\sqrt{\Delta t}\,\epsilon_{t_k}$$

where $\{\epsilon_{t_k}\}_{k=1}^{N}$ is a sequence of iid standard normal random variables.

A little algebra leads to

$$\frac{S^{(i,[\sigma_A,\sigma_B])}_{t_k} - S^{(i,[\sigma_A,\sigma_B])}_{t_{k-1}}}{S^{(i,[\sigma_A,\sigma_B])}_{t_{k-1}}} \simeq r\Delta t + \sigma^{(i,[\sigma_A,\sigma_B])}_{t_{k-1}}\sqrt{\Delta t}\,\epsilon_{t_k} \quad (3.151)$$

$$\frac{S^{(i,[\sigma_C,\sigma_D])}_{t_k} - S^{(i,[\sigma_C,\sigma_D])}_{t_{k-1}}}{S^{(i,[\sigma_C,\sigma_D])}_{t_{k-1}}} \simeq r\Delta t + \lambda\,\sigma^{(i,[\sigma_A,\sigma_B])}_{t_{k-1}}\sqrt{\Delta t}\,\epsilon_{t_k} \quad (3.152)$$

Approximating the logarithmic daily returns of Equation 3.7 with the arithmetic returns and suitably substituting gives

$$R^{(i,[\sigma_A,\sigma_B])}_{t_k} \simeq r\Delta t + \sigma^{(i,[\sigma_A,\sigma_B])}_{t_{k-1}}\sqrt{\Delta t}\,\epsilon_{t_k} \quad (3.153)$$
$$R^{(i,[\sigma_C,\sigma_D])}_{t_k} \simeq r\Delta t + \lambda\,\sigma^{(i,[\sigma_A,\sigma_B])}_{t_{k-1}}\sqrt{\Delta t}\,\epsilon_{t_k} \quad (3.154)$$

In order to calculate the annualised volatility of the daily returns according to Equation 3.8, the additive constants are irrelevant. Thus, without any loss of generality, it can be assumed that

$$R^{(i,[\sigma_A,\sigma_B])}_{t_k} \simeq \sigma^{(i,[\sigma_A,\sigma_B])}_{t_{k-1}}\sqrt{\Delta t}\,\epsilon_{t_k}$$

$$R^{(i,[\sigma_C,\sigma_D])}_{t_k} \simeq \lambda\,\sigma^{(i,[\sigma_A,\sigma_B])}_{t_{k-1}}\sqrt{\Delta t}\,\epsilon_{t_k}$$

and, hence, also

$$R^{(i,[\sigma_C,\sigma_D])}_{t_k} \simeq \lambda\,R^{(i,[\sigma_A,\sigma_B])}_{t_k} \quad (3.155)$$

Recalling Equations 3.8 and 3.9, the thesis is proved, ie

$$\sigma^{(i,[\sigma_C,\sigma_D])}_{k} \simeq \lambda\,\sigma^{(i,[\sigma_A,\sigma_B])}_{k}, \qquad k = \tau,\dots,N;\ i = 1,2,\dots,m \quad (3.133)$$
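The scaling property just proved can be checked numerically in a stripped-down setting. The example below is not the book's simulation model: it uses a constant volatility level and shares the same Gaussian shocks across the two products, purely to show that realised volatilities computed from returns differing by a factor lambda also differ by that factor:

```python
import numpy as np

def realised_vol_scaling(lam=2.0, n_days=252, seed=7):
    """Numerical illustration of Lemma 3.35: if the returns of one product
    are lambda times those of another (same driving noise), the annualised
    realised volatilities also scale by lambda. The constant daily volatility
    and shared shocks are simplifying assumptions for illustration only."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n_days)           # shared iid shocks
    dt = 1.0 / 252.0
    sig = 0.10                                  # volatility of the first product
    r_ab = sig * np.sqrt(dt) * eps              # returns, Eq. 3.153 without drift
    r_cd = lam * sig * np.sqrt(dt) * eps        # returns, Eq. 3.154 without drift
    vol_ab = np.std(r_ab, ddof=1) * np.sqrt(252)
    vol_cd = np.std(r_cd, ddof=1) * np.sqrt(252)
    return vol_cd / vol_ab                      # should be close to lambda
```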

Lemma 3.35 allows the statement of a theorem that establishes a


relation of equality (except for negligible differences) between the
management failures associated with two volatility intervals that
have the same relative width.
This is because the relative width has a similar influence on both
the pattern of the volatility realised by the product via simulation
and the distributive characteristics of the diffusion limit of the M-
Garch(1,1).
Having the same relative width, the difference between the two
volatility intervals, or, equivalently, between the two risk budgets,
becomes just a scale factor which affects the magnitude of both the
realised volatilities and predicted ones but not their behaviour or
the incidence of management failures.
Theorem 3.36. Let $[\sigma_A,\sigma_B]$, with $0 < \sigma_A < \sigma_B < +\infty$, and $[\sigma_C,\sigma_D]$, with $0 < \sigma_C < \sigma_D < +\infty$, be two volatility intervals with the same relative width. Then, the two intervals are related as follows

$$\overline{\mathrm{mf}}^{([\sigma_A,\sigma_B])} \simeq \overline{\mathrm{mf}}^{([\sigma_C,\sigma_D])} \quad (3.156)$$

Proof By assumption it holds that

$$\Delta_{[\sigma_A,\sigma_B]} = \Delta_{[\sigma_C,\sigma_D]} \quad (3.132)$$

and, therefore, the following expression holds

$$\sigma^{(i,[\sigma_C,\sigma_D])}_{k} \simeq \lambda\,\sigma^{(i,[\sigma_A,\sigma_B])}_{k}, \qquad k = \tau,\dots,N;\ i = 1,2,\dots,m \quad (3.133)$$

For simplicity, the following notation is introduced for all $k = \tau,\dots,N$ and all $i = 1,2,\dots,m$

$$\sigma^{(\lambda)}_k = \sigma^{(\cdot,[\sigma_C,\sigma_D])}_k \quad (3.157)$$
$$\sigma_k = \sigma^{(\cdot,[\sigma_A,\sigma_B])}_k \quad (3.158)$$

so Equation 3.133 can be re-expressed as

$$\sigma^{(\lambda)}_k \simeq \lambda\,\sigma_k \quad (3.159)$$

Moreover:

• $O_k$, $A_{k+1}(\theta_1)$, $\bar{A}(\theta_1)$ and $Q(Y;\theta_1)$ are used to denote the quantities expressed respectively by Equations 3.93, 3.94, 3.97 and by Equation 3.118 referring to the risk budget $[\sigma_A,\sigma_B]$;

• Equation 3.134 implies that

$$[\sigma_C,\sigma_D] = \lambda\,[\sigma_A,\sigma_B] \quad (3.160)$$

so that:

  - by analogy to Notation 3.21, the following notation is introduced

$$O^{(\lambda)}_k = \ln\big((\sigma^{(\lambda)}_k)^2\big) \quad (3.161)$$

$$A^{(\lambda)}_{k+1}(\theta_1) = \frac{O^{(\lambda)}_{k+1} - \exp(\theta_1-1)\,O^{(\lambda)}_k}{f(\theta_1)} \quad (3.162)$$

$$\bar{A}^{(\lambda)}(\theta_1) = \frac{1}{K-1}\sum_{k=1}^{K-1} A^{(\lambda)}_{k+1}(\theta_1) \quad (3.163)$$

  - the associated likelihood function is denoted by $L^{(\lambda)}(Y;\theta_0,\theta_1)$;

  - by analogy with Equation 3.88 the following function is defined

$$M^{(\lambda)}(Y;\theta_0,\theta_1) = \tfrac{1}{2}\big[L^{(\lambda)}(Y;\theta_0,\theta_1)\big]^{-2/(K-1)} \quad (3.164)$$

  - by analogy with Equation 3.118 the following function is defined

$$Q^{(\lambda)}(Y;\theta_1) = (f(\theta_1))^2\sum_{k=1}^{K-1}\exp\big(A^{(\lambda)}_{k+1}(\theta_1) - \bar{A}^{(\lambda)}(\theta_1)\big) \quad (3.165)$$

Substituting the right-hand side of Equation 3.159 into the right-hand side of Equation 3.161 and using Equation 3.93 gives

$$O^{(\lambda)}_k \simeq \ln(\lambda^2) + O_k \quad (3.166)$$

Substituting the right-hand side of Equation 3.166 into the right-hand side of Equation 3.162 and using Equation 3.94 gives

$$A^{(\lambda)}_{k+1}(\theta_1) \simeq A_{k+1}(\theta_1) + \frac{1-\exp(\theta_1-1)}{f(\theta_1)}\,\ln(\lambda^2) \quad (3.167)$$

Substituting the right-hand side of Equation 3.167 into the right-hand side of Equation 3.163 and applying Equation 3.97 leads to

$$\bar{A}^{(\lambda)}(\theta_1) \simeq \bar{A}(\theta_1) + \frac{1-\exp(\theta_1-1)}{f(\theta_1)}\,\ln(\lambda^2) \quad (3.168)$$

In addition, from Equations 3.167 and 3.168 it follows that

$$A^{(\lambda)}_{k+1}(\theta_1) - \bar{A}^{(\lambda)}(\theta_1) \simeq A_{k+1}(\theta_1) - \bar{A}(\theta_1) \quad (3.169)$$

At this point observe that, by Equation 3.169, it holds that

$$Q^{(\lambda)}(Y;\theta_1) \simeq Q(Y;\theta_1)$$

which clearly implies that, except for marginal differences, the point of minimum $\theta^{(\lambda)*}_1$ of the function $Q^{(\lambda)}(Y;\theta_1)$ coincides with the point of minimum $\theta^*_1$ of the function $Q(Y;\theta_1)$, ie

$$\theta^{(\lambda)*}_1 \simeq \theta^*_1 \quad (3.170)$$
Similarly to Equation 3.119, the point of minimum $\theta^{(\lambda)*}_0$ of the function $M^{(\lambda)}(Y;\theta_0,\theta_1)$ is defined by

$$\theta^{(\lambda)*}_0 = c_1 + \frac{1}{C(\theta^{(\lambda)*}_1)}\bigg[\bar{A}^{(\lambda)}(\theta^{(\lambda)*}_1) + \ln\bigg(\frac{1}{K-1}\sum_{k=1}^{K-1}\exp\big(A^{(\lambda)}_{k+1}(\theta^{(\lambda)*}_1) - \bar{A}^{(\lambda)}(\theta^{(\lambda)*}_1)\big)\bigg) - c\bigg] \quad (3.171)$$

which, by exploiting Equations 3.168, 3.169 and 3.170, becomes

$$\theta^{(\lambda)*}_0 \simeq c_1 + \frac{1}{C(\theta^*_1)}\bigg[\bar{A}(\theta^*_1) + \ln\bigg(\frac{1}{K-1}\sum_{k=1}^{K-1}\exp\big(A_{k+1}(\theta^*_1) - \bar{A}(\theta^*_1)\big)\bigg) - c\bigg] + \frac{1}{C(\theta^*_1)}\,\frac{1-\exp(\theta^*_1-1)}{f(\theta^*_1)}\,\ln(\lambda^2)$$

where, by Equation 3.119, the first two terms of the right-hand side correspond to $\theta^*_0$, and thus

$$\theta^{(\lambda)*}_0 \simeq \theta^*_0 + \frac{1}{C(\theta^*_1)}\,\frac{1-\exp(\theta^*_1-1)}{f(\theta^*_1)}\,\ln(\lambda^2) \quad (3.172)$$

which, recalling Equations 3.92 and 3.96, becomes

$$\theta^{(\lambda)*}_0 \simeq \theta^*_0 + (1-\theta^*_1)\,\ln(\lambda^2) \quad (3.173)$$
By making the appropriate substitutions in Equation 3.53, it follows that:

• with regard to the risk budget $[\sigma_A,\sigma_B]$

$$\ln\sigma^2_{k+1} \mid \sigma_k \sim N(\mu;\ \mathrm{SD}) \quad (3.174)$$

where

$$\mu = \exp(\theta_1-1)\,\ln\sigma^2_k + (1-\theta_1)(\theta_0 - c_1) \quad (3.175)$$
$$\mathrm{SD} = f(\theta_1)\,d \quad (3.176)$$

• with regard to the risk budget $[\sigma_C,\sigma_D]$

$$\ln(\sigma^{(\lambda)}_{k+1})^2 \mid \sigma^{(\lambda)}_k \sim N(\mu^{(\lambda)};\ \mathrm{SD}^{(\lambda)}) \quad (3.177)$$

where

$$\mu^{(\lambda)} = \exp(\theta^{(\lambda)}_1-1)\,\ln(\sigma^{(\lambda)}_k)^2 + (1-\theta^{(\lambda)}_1)(\theta^{(\lambda)}_0 - c_1) \quad (3.178)$$
$$\mathrm{SD}^{(\lambda)} = f(\theta^{(\lambda)}_1)\,d \quad (3.179)$$

By Equations 3.170, 3.173, 3.92 and 3.159, and suitably arranging and factorising, Equations 3.178 and 3.179 become, respectively

$$\mu^{(\lambda)} \simeq \exp(\theta_1-1)\,\ln\sigma^2_k + (1-\theta_1)(\theta_0 - c_1) + \ln\lambda^2 \quad (3.180)$$
$$\mathrm{SD}^{(\lambda)} \simeq f(\theta_1)\,d \quad (3.181)$$

The comparison between the pair 3.175, 3.176 and the pair 3.180, 3.181 shows that $\ln\sigma^2_{k+1}$ and $\ln(\sigma^{(\lambda)}_{k+1})^2$ have a normal conditional probability distribution with essentially the same standard deviation and the same mean, the latter with the exception of the constant $\ln\lambda^2$.

Moreover, by Equation 3.133, the realised volatilities associated with the interval $[\sigma_C,\sigma_D]$ are approximately $\lambda$ times those associated with the interval $[\sigma_A,\sigma_B]$, so that for all $k+1 = \tau+K,\dots,N$ and all $i = 1,2,\dots,m$ the following relation holds

$$\ln\big(\sigma^{(i,[\sigma_C,\sigma_D])}_{k+1}\big)^2 \simeq \ln\big(\lambda\,\sigma^{(i,[\sigma_A,\sigma_B])}_{k+1}\big)^2$$

that is, by using a simplified notation (without the index of the
trajectory),

    ln(σ^[σ_C,σ_D]_{k+1})² ≃ ln λ² + ln(σ^[σ_A,σ_B]_{k+1})²    (3.182)

From Equation 3.182 it emerges that, except for the constant
ln λ², the realisations of ln(σ^[σ_A,σ_B]_{k+1})² and those of ln(σ^[σ_C,σ_D]_{k+1})² are
essentially the same.

Given that, apart from minimal differences, the difference
between the means of the conditional probability distributions of
ln σ²_{k+1} and ln(σ^(λ)_{k+1})² corresponds to this constant, and that the first
two central moments of these distributions completely define the
prediction interval for the calculation of the management failures,
the thesis follows, ie it holds that

    mf^([σ_A,σ_B]) ≃ mf^([σ_C,σ_D])    (3.156)
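The scaling property used in the proof can be checked numerically. The sketch below is a toy illustration (the Gaussian return series, λ and the sample size are illustrative choices, not taken from the text): it verifies that multiplying a return series by λ multiplies its realised volatility by λ and shifts the log of its squared volatility by the constant ln λ².

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.5                       # scaling factor between the two risk budgets
r = rng.normal(0.0, 0.01, 252)  # one year of toy daily returns

def realised_vol(returns):
    # annualised volatility of daily returns (252 trading days per year)
    return np.sqrt(252 * np.mean((returns - returns.mean()) ** 2))

sigma = realised_vol(r)
sigma_lam = realised_vol(lam * r)

# scaling the returns by lambda scales the realised volatility by lambda...
assert np.isclose(sigma_lam, lam * sigma)
# ...and shifts the log of the squared volatility by the constant ln(lambda^2)
assert np.isclose(np.log(sigma_lam**2), np.log(lam**2) + np.log(sigma**2))
```

This is exactly the mechanism behind Equation 3.182: the whole log-squared-volatility trajectory is translated by ln λ², leaving the shape of its conditional distribution unchanged.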

On the other hand, in the case of two volatility intervals characterised
by different relative widths, the following proposition holds.

Proposition 3.37. Let [σ_A, σ_B], with 0 < σ_A < σ_B < +∞, and
[σ_C, σ_D], with 0 < σ_C < σ_D < +∞, be two volatility intervals such
that

    δ_[σ_A,σ_B] < δ_[σ_C,σ_D]

Then, the following relation holds

    mf^([σ_A,σ_B]) < mf^([σ_C,σ_D])    (3.183)

Proof The critical arguments for the proof are contained in the proof
of Theorem 3.36.

3.5.3 The optimal grid on the reduced space of volatilities [σ₀, σ_n]

The results presented in Section 3.5.2 allow the investigation of
the solution to an optimisation problem and to a problem of a
constrained nature that are similar to those expressed by Definitions 3.32
and 3.33, respectively, but applied to a reduced space of volatilities,
intended as a space which is strictly contained in the full space of the
possible volatilities [0, +∞[ and in which the number n of intervals
is fixed.

First of all, the following definition is given.


Definition 3.38. Given the space of the possible volatilities [σ₀, σ_n],
where 0 < σ₀ < σ_n < +∞, the set of partitions of size n, n ≥ 2,
denoted by Σ^(n)_[σ₀,σ_n], is defined as

    Σ^(n)_[σ₀,σ_n] = {(σ₁, …, σ_{n−1}) ∈ R^{n−1} : σ₀ < σ₁ < ⋯ < σ_{n−1} < σ_n}    (3.184)

where each element (σ₁, …, σ_{n−1}) ∈ Σ^(n)_[σ₀,σ_n] partitions [σ₀, σ_n] into
n risk budgets, that is, into n consecutive volatility intervals, ie

    [σ₀, σ₁], [σ₁, σ₂], …, [σ_{n−1}, σ_n]    (3.185)

By using Definition 3.38, the optimisation problem expressed by
Definition 3.32 is re-stated as follows.

Definition 3.39. Given the space of the possible volatilities [σ₀, σ_n]
and a set Σ^(n)_[σ₀,σ_n] of its partitions of size n, a grid of volatility intervals
associated with this space is said to be "not abnormal" if it solves
the following optimisation problem

    min_{(σ₁,…,σ_{n−1}) ∈ Σ^(n)_[σ₀,σ_n]}  max_{j=1,…,n}  mf^(j)    (3.186)

Also the problem of a constrained nature expressed by
Definition 3.33 can be re-stated on the reduced space as follows.

Definition 3.40. A grid of volatility intervals belonging to Σ^(n)_[σ₀,σ_n] is
said to be "homogeneous" if its intervals meet the following constraint

    mf^(j₁) ≃ mf^(j₂)  for all j₁ ≠ j₂    (3.187)

To solve problems 3.186 and 3.187, an important result on the
relative widths of all the possible intervals that compose each element
of Σ^(n)_[σ₀,σ_n] is first given.

Indeed, the following lemma holds.

Lemma 3.41. Given the space of possible volatilities [σ₀, σ_n], let
(σ₀, σ₁, σ₂, …, σ_{n−1}, σ_n) be a vector of increasing volatilities.
Moreover, after setting

    γ = (σ_n / σ₀)^{1/n}    (3.188)

let

    (σ₀^(γ) = σ₀, σ₁^(γ), σ₂^(γ), …, σ_{n−1}^(γ), σ_n^(γ) = σ_n)    (3.189)

be another vector of increasing volatilities such that

    σ_l^(γ) = γ^l σ₀  for all l = 0, 1, …, n    (3.190)

Then, it can be seen that

    max_{l=0,1,…,n−1} δ_[σ_l^(γ), σ_{l+1}^(γ)] ≤ max_{l=0,1,…,n−1} δ_[σ_l, σ_{l+1}]    (3.191)

and the equality is verified if and only if

    σ_l^(γ) = σ_l  for all l = 0, 1, …, n    (3.192)

Proof (i) From Equations 3.190 and 3.131 it follows that

    δ_[σ_l^(γ), σ_{l+1}^(γ)] = γ  for all l = 0, 1, …, n − 1    (3.193)

which implies

    max_{l=0,1,…,n−1} δ_[σ_l^(γ), σ_{l+1}^(γ)] = γ    (3.194)

On the other hand, for all b = 0, 1, …, n − 1 it holds that

    δ_[σ_b, σ_{b+1}] ≤ max_{l=0,1,…,n−1} δ_[σ_l, σ_{l+1}]    (3.195)

which leads to

    ∏_{b=0}^{n−1} δ_[σ_b, σ_{b+1}] ≤ ( max_{l=0,1,…,n−1} δ_[σ_l, σ_{l+1}] )^n    (3.196)

Observing now that, again by Equation 3.131,

    ∏_{l=0}^{n−1} δ_[σ_l, σ_{l+1}] = ∏_{l=0}^{n−1} σ_{l+1}/σ_l = (σ₁/σ₀)(σ₂/σ₁) ⋯ (σ_n/σ_{n−1})

and, simplifying, it follows that

    ∏_{l=0}^{n−1} δ_[σ_l, σ_{l+1}] = σ_n/σ₀    (3.197)

By exploiting Equation 3.188, Equation 3.197 can be re-expressed as

    ∏_{l=0}^{n−1} δ_[σ_l, σ_{l+1}] = γ^n    (3.198)

Substituting into the left-hand side of Equation 3.196 the right-hand
side of Equation 3.198 gives

    γ^n ≤ ( max_{l=0,1,…,n−1} δ_[σ_l, σ_{l+1}] )^n

which implies

    γ ≤ max_{l=0,1,…,n−1} δ_[σ_l, σ_{l+1}]    (3.199)

and the comparison between Equations 3.194 and 3.199 returns the
first part of the thesis, ie

    max_{l=0,1,…,n−1} δ_[σ_l^(γ), σ_{l+1}^(γ)] ≤ max_{l=0,1,…,n−1} δ_[σ_l, σ_{l+1}]    (3.191)

(ii) If the condition expressed in Equation 3.192 is not verified then,
from Equation 3.193, it appears that not all the intervals have the same
relative width and, thus, there exist l₀ and l₁ such that

    δ_[σ_{l₀}, σ_{l₀+1}] < δ_[σ_{l₁}, σ_{l₁+1}]    (3.200)

By the definition of maximum,

    δ_[σ_{l₁}, σ_{l₁+1}] ≤ max_{l=0,1,…,n−1} δ_[σ_l, σ_{l+1}]    (3.201)

and from Equations 3.200 and 3.201 it follows that

    δ_[σ_{l₀}, σ_{l₀+1}] < max_{l=0,1,…,n−1} δ_[σ_l, σ_{l+1}]    (3.202)

Hence, Equations 3.195 and 3.202 lead to

    ∏_{l=0}^{n−1} δ_[σ_l, σ_{l+1}] < ( max_{l=0,1,…,n−1} δ_[σ_l, σ_{l+1}] )^n    (3.203)

Substituting into the left-hand side of Equation 3.203 the right-hand
side of Equation 3.198 leads to

    γ^n < ( max_{l=0,1,…,n−1} δ_[σ_l, σ_{l+1}] )^n

and therefore

    γ < max_{l=0,1,…,n−1} δ_[σ_l, σ_{l+1}]    (3.204)

From Equations 3.194 and 3.204 it follows that

    max_{l=0,1,…,n−1} δ_[σ_l^(γ), σ_{l+1}^(γ)] < max_{l=0,1,…,n−1} δ_[σ_l, σ_{l+1}]    (3.205)

meaning that, if the condition in Equation 3.192 is not verified, the
inequality 3.191 is strict and, hence, the second part of the thesis is
also proved.23
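The content of the lemma is easy to verify numerically: among all increasing partitions of [σ₀, σ_n] into n intervals, the geometric one attains the smallest possible maximum relative width, namely γ. The sketch below (the values of σ₀, σ_n and n are illustrative choices) draws random partitions and checks the inequality 3.191.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_0, sigma_n, n = 0.25, 25.0, 5

gamma = (sigma_n / sigma_0) ** (1.0 / n)           # Equation 3.188
geometric = sigma_0 * gamma ** np.arange(n + 1)    # sigma_l^(gamma) = gamma**l * sigma_0

def max_relative_width(grid):
    # the relative width of [sigma_l, sigma_{l+1}] is the ratio sigma_{l+1} / sigma_l
    return max(grid[l + 1] / grid[l] for l in range(len(grid) - 1))

# the geometric partition attains gamma exactly...
assert np.isclose(max_relative_width(geometric), gamma)

# ...while any other increasing partition of [sigma_0, sigma_n] does no better
for _ in range(1000):
    interior = np.sort(rng.uniform(sigma_0, sigma_n, n - 1))
    grid = np.concatenate(([sigma_0], interior, [sigma_n]))
    assert max_relative_width(grid) >= gamma - 1e-9
```

The telescoping-product argument of the proof is what makes this work: the product of the ratios is fixed at σ_n/σ₀ = γⁿ, so the largest ratio can only be minimised by making all ratios equal.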

At this point it is possible to state the following theorem, which
identifies the vector of Σ^(n)_[σ₀,σ_n] that solves both the optimisation
problem of Definition 3.39 and the constrained problem given in
Definition 3.40.

Theorem 3.42. The grid of volatility intervals associated with the
vector (σ₁^(γ), σ₂^(γ), …, σ_{n−1}^(γ)) ∈ Σ^(n)_[σ₀,σ_n] is homogeneous and not
abnormal on the reduced space [σ₀, σ_n].

Proof First, it has to be proved that the vector

    (σ₁^(γ), σ₂^(γ), …, σ_{n−1}^(γ)) ∈ Σ^(n)_[σ₀,σ_n]

solves the constrained problem 3.187.

From Equation 3.193 and from Theorem 3.36 it follows that

    mf^(j₁) ≃ mf^(j₂)  for all j₁ ≠ j₂    (3.187)

and, thus, the grid of volatility intervals associated with (σ₁^(γ), σ₂^(γ),
…, σ_{n−1}^(γ)) is homogeneous in the sense of Definition 3.40.

Second, it has to be proved that the same vector also solves the
optimisation problem 3.186.

To this end, let (σ₁, …, σ_{n−1}) be a generic element of Σ^(n)_[σ₀,σ_n]
different from (σ₁^(γ), σ₂^(γ), …, σ_{n−1}^(γ)).

Since

    (σ₀, σ₁, σ₂, …, σ_{n−1}, σ_n) ≠ (σ₀^(γ), σ₁^(γ), σ₂^(γ), …, σ_{n−1}^(γ), σ_n^(γ))

by Lemma 3.41, the following strict inequality holds

    max_{l=0,1,…,n−1} δ_[σ_l^(γ), σ_{l+1}^(γ)] < max_{l=0,1,…,n−1} δ_[σ_l, σ_{l+1}]    (3.205)

Denoting by l̄ the value of l, l = 0, 1, …, n − 1, such that

    δ_[σ_l̄, σ_{l̄+1}] = max_{l=0,1,…,n−1} δ_[σ_l, σ_{l+1}]    (3.206)

the inequality 3.205 can be rewritten as

    max_{l=0,1,…,n−1} δ_[σ_l^(γ), σ_{l+1}^(γ)] < δ_[σ_l̄, σ_{l̄+1}]    (3.207)

Moreover, by Equation 3.193, for all l = 0, 1, …, n − 1 it follows
that

    δ_[σ_l^(γ), σ_{l+1}^(γ)] = max_{l=0,1,…,n−1} δ_[σ_l^(γ), σ_{l+1}^(γ)]    (3.208)

and, consequently, the inequality 3.207 can be rewritten as

    δ_[σ_l^(γ), σ_{l+1}^(γ)] < δ_[σ_l̄, σ_{l̄+1}]    (3.209)

for generic l.

From the above inequality and Equation 3.183, it follows that

    mf^([σ_l^(γ), σ_{l+1}^(γ)]) < mf^([σ_l̄, σ_{l̄+1}])    (3.210)

At this point it must be observed that:

• from Equation 3.187, for all l = 0, 1, …, n − 1, it follows that

    mf^([σ_l^(γ), σ_{l+1}^(γ)]) ≃ max_{l=0,…,n−1} mf^([σ_l^(γ), σ_{l+1}^(γ)])    (3.211)

• from Equations 3.183 and 3.206, for all l = 0, 1, …, n − 1, it
  follows that

    mf^([σ_l, σ_{l+1}]) ≤ mf^([σ_l̄, σ_{l̄+1}])    (3.212)

  that is,

    mf^([σ_l̄, σ_{l̄+1}]) = max_{l=0,1,…,n−1} mf^([σ_l, σ_{l+1}])    (3.213)

Substituting the right-hand side of Equation 3.211 and the right-hand
side of Equation 3.213, respectively, into the left-hand side and
the right-hand side of Equation 3.210 yields

    max_{l=0,…,n−1} mf^([σ_l^(γ), σ_{l+1}^(γ)]) < max_{l=0,…,n−1} mf^([σ_l, σ_{l+1}])    (3.214)

and, therefore, the grid of volatility intervals associated with
(σ₁^(γ), σ₂^(γ), …, σ_{n−1}^(γ)) is not abnormal in the sense of Definition 3.39.

3.5.4 The optimal grid on the full space of volatilities [0, +∞[

On the reduced space of volatilities [σ₀, σ_n], for n fixed and 0 < σ₀ <
σ_n < +∞, the partition created by the vector (σ₁^(γ), σ₂^(γ), …, σ_{n−1}^(γ))
excludes an abnormal number of management failures in each of
the n intervals and, at the same time, ensures that the incidence of
failures is homogeneous with respect to all intervals.

The key property behind this result is that the intervals composing
this partition, ie

    [σ₀^(γ), σ₁^(γ)], [σ₁^(γ), σ₂^(γ)], …, [σ_{n−1}^(γ), σ_n^(γ)]    (3.215)

where

    σ₀^(γ) = σ₀  and  σ_n^(γ) = σ_n

are related by

    σ_l^(γ) = γ^l σ₀  for all l = 0, 1, …, n    (3.190)

and, hence, by Definition 3.34, it is immediate to find that they have
a relative width which is constant and equal to γ.
Starting from the optimal grid given by Equation 3.215, it is
reasonable to try to extend the solution (σ₁^(γ), σ₂^(γ), …, σ_{n−1}^(γ)) found on
the reduced space to cover the entire space of possible volatilities
[0, +∞[, by increasing the number of intervals and keeping their
relative width constant.

From this exploration it emerges that, however much the number
of intervals is increased to the left of σ₀ and to the right of σ_n, it is
not possible to cover the residual space of volatilities
[0, +∞[ \ [σ₀, σ_n] by adding a finite number of intervals.

For example, attempting to partition the interval [σ_n, +∞[ by
progressively inserting intervals of relative width equal to γ,
Equation 3.190 yields that the first value after σ_n^(γ) = σ_n is γ^{n+1} σ₀, the
second is σ_{n+2}^(γ) = γ^{n+2} σ₀, the third is σ_{n+3}^(γ) = γ^{n+3} σ₀ and so on. In terms of
absolute width, the intervals that are added are wider and wider.24

This indicates that, as n increases, the portion of [σ_n, +∞[ that is
not optimally partitioned is reduced. Nevertheless, it is clear that
there is no finite value of n that may exhaust this entire space of
residual volatilities by adding intervals with a relative width equal
to γ and, therefore, with a progressively greater absolute width.

In the same way, it emerges that there is no finite n that can fully
cover the residual space of volatilities that lies to the left of σ₀, ie,
[0, σ₀].
Given the structural characteristics of the space of the volatilities,
in the calibration of the optimal grid it is suitable to preserve
the requirements of non-abnormality and homogeneity of the
management failures in the greatest possible number of intervals.
This means that all the intervals of the optimal grid should have a
constant relative width, except for the first and last intervals, which,
regardless of their upper (ie, σ₁ for the first) and lower (ie, σ_{n−1} for
the last) extremes, necessarily present an infinite relative width.25

As a consequence, the values of σ₁ and σ_{n−1} must be chosen
according to criteria which are exogenous with respect to those
related to the incidence of management failures.

With regard to σ_{n−1}, since it corresponds to the minimum admissible
level of volatility for products with the highest degree of risk,
a value of 25% is chosen, which, using metrics such as value-at-risk
and expected shortfall, corresponds, on average, to an

annual percentage loss26 of between 30% and 50% of the invested capital,
depending on the levels and the volatility of the risk-free rates
and on the confidence level used.

With regard to σ₁, whatever the optimal n, the requirement of an
increasing absolute width of the intervals suggests that this value
should be chosen so as to keep the absolute width of the first interval
limited. At the same time, the value chosen for σ₁ must ensure the
representativeness of the first interval, especially considering that it
is associated with the lowest degree of risk and that it must contain
the volatilities empirically observed in typical low-risk products
such as money market funds. A value of σ₁ equal to 0.25% is in line
with these requirements, because it ensures that the first interval is
fairly narrow but representative of a risk budget realistic for the
safest products, as emerges from market data on money market
funds and similar investments.
Once σ₁ and σ_{n−1} are chosen, and given the requirement of
constant relative width for all the intermediate intervals, to determine
the optimal grid on the space of volatilities [0, +∞[ it is sufficient to
identify the optimal number of intervals, n*.
The ultimate goal of the entire methodology for the calibration
of the grid is to provide an indication of the overall riskiness of a
non-equity product through the mapping of the volatility intervals
into a qualitative scale composed of highly informative adjectives
for the average investor.
In this context, the total number of intervals must first reach a
good compromise between the complexity of the phenomenon to
be represented (and, therefore, the level of detail of the information
offered) and the need for a clear and immediate understanding by
investors who are interested in grasping the differences in the degree
of risk across products.
These considerations suggest it would be wise to put a constraint
on the minimum and maximum number of intervals of the optimal
grid. In particular, it seems reasonable that the number of risk classes
is between five and seven.
A grid with fewer than five classes would convey too little infor-
mation and, by combining in the same class products with hetero-
geneous risk profiles, would prevent the investor from identifying
those investment proposals that are actually in line with their risk
appetite.


Table 3.1 The optimal grid

    Risk class     Volatility interval (%)
                   min      max
    Very low       0        0.25
    Low            0.25     0.6
    Medium-low     0.6      1.6
    Medium         1.6      4
    Medium-high    4        10
    High           10       25
    Very high      25       +∞

Conversely, a grid with more than seven classes would be excessively
detailed with respect to the real needs and the discrimination
abilities of the retail investor, who would find it difficult to fully
capture the different riskiness of the products just by reading the
adjectives used to illustrate the degree of risk.

In line with the results obtained from the analysis on the reduced
volatility space, in the full space [0, +∞[ the choice of the optimal
number of intervals among five, six and seven depends on the
number of management failures that characterise the intermediate
intervals.

Since a smaller relative width leads to a lower incidence of failures
(see Proposition 3.37), the optimal number is n* = 7, corresponding
to a constant relative width γ of about 2.5119.

The optimal grid of volatility intervals is shown in Table 3.1, where
for each interval an adjective that is representative of the associated
degree of risk is also reported.27
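The internal extremes of Table 3.1 can be reproduced directly from this construction: with σ₁ = 0.25% and σ₆ = 25%, the five intermediate intervals of constant relative width give γ = (25/0.25)^(1/5) = 100^(1/5). A minimal check:

```python
# gamma = (25 / 0.25) ** (1 / 5) = 100 ** 0.2 ≈ 2.5119, the constant relative width
gamma = (25.0 / 0.25) ** (1.0 / 5)
bounds = [0.25 * gamma ** l for l in range(6)]

print(round(gamma, 4))                  # → 2.5119
print([round(b, 2) for b in bounds])    # → [0.25, 0.63, 1.58, 3.96, 9.95, 25.0]
```

Rounded to convenient figures, these are exactly the internal extremes 0.25%, 0.6%, 1.6%, 4%, 10% and 25% of the optimal grid.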

3.6 RISK CLASSIFICATION


The optimal grid of volatility intervals is an excellent reference for
the unique and objective determination of the degree of risk of any
non-equity investment product at the issue date and for its monitor-
ing over time aimed at ensuring the continued representativeness of
this indicator.
This grid is the result of the intersection between a simulation
model of the values of annualised volatility (where each value is
determined by using daily returns observed over a one-year period)

147

i i

i i
i i

minenna 2011/9/5 13:53 page 148 #180


i i

A QUANTITATIVE FRAMEWORK TO ASSESS THE RISKREWARD PROFILE OF NON-EQUITY PRODUCTS

and a reliable model that forecasts the future behaviour of this finan-
cial variable by using the distributive properties of the limit diffusion
of the M-Garch(1,1).
To ensure the accuracy and comparability of information to
investors, the methodological assumptions about the depth of the
observation window of the returns and their sampling frequency
should guide the procedures for determining the risk class of newly
issued products and for its possible revisions over time.
In fact, the methodological assumptions contribute, together with
the characteristics of financial engineering, to the definition of crite-
ria of general validity for a proper classification of the initial degree
of risk of any non-equity product.
The classification is immediate in the case of products pursuing a
target risk that inspires their investment policy and management
techniques. To identify the degree of risk it is sufficient to express this
target in terms of a point value (or a range) of annualised volatility
associated with the possible daily returns of the product, and to see
where this target is placed among the seven intervals that comprise
the optimal grid.
It should be stressed that, for this type of product, in the absence
of data on historical performance (and thus also on the volatility
actually realised in the past), solutions based on the simulation of the
volatility of their potential returns would produce values consistent
with the preset target risk, given the invariance properties of the
main continuous-time stochastic models that are used in practice to
implement this kind of simulation.
The simulative solution becomes useful in the particular case in
which the target risk of the product identifies a range of volatility
values that partially overlaps two or more intervals of the optimal
grid.
Above all, however, simulative solutions are of paramount impor-
tance in determining the initial risk class of benchmark and return-
target products.
In benchmark products these solutions require the development
in advance of specific stochastic volatility models calibrated on the
term structure of the volatilities of the benchmark adopted. In addi-
tion, where an active management style is provided (and not the pure
replication of the benchmark), models must be properly integrated
to reflect more or less important deviations from the benchmark that

Figure 3.6 Trajectories of a five-year subordinated fixed-coupon bond
(bond value against time in years)

may come from endogenous sources of risk, related to the specific


management techniques used.
Through these models it is possible to simulate on an annual time
horizon (and, therefore, consistently with the assumptions underly-
ing the calibration of the optimal grid) the evolution of daily returns
of the product and, in this way, determine its degree of risk by look-
ing at the average of the annualised volatilities of each trajectory
obtained via simulation.
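As a sketch of this procedure, the fragment below replaces the product-specific stochastic volatility model with a plain constant-volatility lognormal diffusion (an illustrative stand-in, with all parameter values mine): it simulates one year of daily returns per trajectory, computes each trajectory's annualised volatility and then averages across trajectories.

```python
import numpy as np

rng = np.random.default_rng(7)
m, days = 5000, 252        # number of trajectories, daily steps in one year
vol = 0.12                 # illustrative annualised volatility of the product
dt = 1.0 / days

# daily log-returns of a lognormal diffusion with constant volatility
returns = rng.normal(-0.5 * vol**2 * dt, vol * np.sqrt(dt), size=(m, days))

# annualised volatility of each simulated trajectory, then the average
vols = np.sqrt(days * np.var(returns, axis=1))
print(round(vols.mean(), 3))   # ≈ 0.12: the product falls in the 10-25% interval
```

In a realistic application the constant-volatility diffusion would be replaced by the stochastic volatility model calibrated on the benchmark or on the product's risk factors, but the averaging step is the same.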
A similar procedure leads to identification of the degree of risk of
return-target products.
Also in this case it is sufficient to simulate the annualised volatility
of the product's potential returns, except that the time horizon of
the simulation must coincide with the recommended time horizon,
which, as explained in Section 4.2, is implicit in the product's specific
financial structure. Return-target products are, in fact, the only
products for which the specific choices of financial engineering and the
recommended investment horizon make it necessary to deviate, in
determining the degree of risk, from the rule of a simulative horizon
equal to that used to calibrate the optimal grid.
Clearly, as seen in Section 2.1, the models used for simulation
must incorporate all the risk factors of the investment, including,
for example, the credit risk of the issuer or the subject that provides
the protection or the guarantee, if different.

Figure 3.6 shows the simulated trajectories of the possible daily


returns of a subordinated fixed-coupon bond expiring in five years.
The trajectories hit by a credit event exhibit a downward jump that
not only affects the fair value of the product and the probabilistic sce-
narios (as argued in Chapter 2), but also results in a higher volatility,
hence signalling the remarkable riskiness of this product.

3.7 DETECTING MIGRATIONS


The property of homogeneity of the optimal grid excludes incentives
for operators to select the intervals at lower risk of migration, ie, the
risk of moving to a risk class different from the original one.
Furthermore, the non-abnormality of the volatility intervals ensures that a
migration can only occur due to structural changes in the product's
investment policy or unexpected and substantial changes in market
conditions,28 provided that migrations are defined consistently with
the methodology used to obtain the optimal grid of Figure 3.1.

In addition, when historical data on volatility is available, the
methodological assumptions for detecting migrations prevail over the
financial engineering of the products and are valid for any type of
underlying structure.

It follows that a migration in the degree of risk occurs when the
annualised volatility of daily returns realised over the last year falls,
for more than three consecutive months, in one or more intervals
other than the one associated with the initial risk class. If, during the
three-month period, the volatility lies in more than one other interval,
a criterion of prevalence is applied: the product is assigned to
the risk class in which the volatility has spent most of this time period.
Formally, the following definition applies.
Definition 3.43. For i = 1, 2, …, 7, let σ^(i)_min and σ^(i)_max denote,
respectively, the lower bound and the upper bound of the ith optimal
volatility interval. Then, at day k a migration is said to occur from
the ith risk class to the jth risk class, with j = 1, 2, …, 7 and j ≠ i,
if, for K = 63 and for all l = 0, …, K − 1, one of the following two
inequalities holds

    σ_{k−l} < σ^(i)_min    (3.216)

    σ_{k−l} > σ^(i)_max    (3.217)

and

    (1/K) Σ_{l=0}^{K−1} 1{σ_{k−l} ∈ [σ^(j)_min, σ^(j)_max]}
        = max_{h=1,2,…,7; h≠i} (1/K) Σ_{l=0}^{K−1} 1{σ_{k−l} ∈ [σ^(h)_min, σ^(h)_max]}    (3.218)

where

    σ_{k−l} = √( (252/τ) Σ_{s=k−l−τ+1}^{k−l} (R_s − R̄_{k−l})² )

• R_s is the logarithmic return realised by a non-equity product
  at day s;

    R̄_{k−l} = (1/τ) Σ_{s=k−l−τ+1}^{k−l} R_s

• τ = 252.
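A direct implementation of this rule is sketched below (the function and series names are mine, and the grid bounds are expressed in percentage points). Given a series of annualised volatilities of daily returns and the current risk class, it checks conditions 3.216-3.217 over the last K = 63 observations and then applies the prevalence criterion 3.218.

```python
import numpy as np

# optimal grid of Table 3.1: (lower, upper) bounds in % for classes 1..7
GRID = [(0.0, 0.25), (0.25, 0.6), (0.6, 1.6), (1.6, 4.0),
        (4.0, 10.0), (10.0, 25.0), (25.0, np.inf)]
K = 63  # three months of daily volatility observations

def detect_migration(vols, current_class, k):
    """Return the new risk class (1..7) if a migration occurs at day k, else None.

    vols: array of annualised volatilities of daily returns (in %),
    indexed so that vols[k] is the observation at day k.
    """
    if k + 1 < K:
        return None
    window = vols[k - K + 1 : k + 1]
    lo, hi = GRID[current_class - 1]
    # conditions 3.216-3.217: every observation in the three-month window
    # must lie outside the interval of the current class
    if not np.all((window < lo) | (window > hi)):
        return None
    # prevalence criterion 3.218: pick the class visited most often
    counts = {h: np.sum((window >= GRID[h - 1][0]) & (window <= GRID[h - 1][1]))
              for h in range(1, 8) if h != current_class}
    return max(counts, key=counts.get)

# toy series: volatility jumps from ~8% (class 5) to ~18% (class 6)
vols = np.concatenate([np.full(100, 8.0), np.full(100, 18.0)])
first = next(k for k in range(len(vols)) if detect_migration(vols, 5, k))
print(first, detect_migration(vols, 5, first))   # → 162 6
```

The detection day (162) is exactly 63 observations after the regime change at day 100, as the rule intends: a single excursion outside the original interval is not enough, only a persistent one triggers the migration.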

The rule of three months to detect migrations corresponds to the
length of the vector of data used as input, within the calibration, to
construct one-day prediction intervals for the volatility.
Intuitively, this choice arises from the consideration that the
detection of migrations has characteristics different from, but
complementary to, the identification of management failures.

A management failure is a one-off event: given a hypothetical
product managed by an automatic asset manager, the volatility obtained
via simulation for a given day is compared with the bounds of a
prediction interval that, by its property of adaptivity, reacts dynamically
to information on past volatility. This path dependency with
respect to the trajectory of the volatility implies a relatively small
width of the interval itself.

In order to avoid an excessive sensitivity of the prediction intervals,
the estimation of their parameters must be performed by
using a sufficient number K of observations of past volatility; in
Section 3.4.3 it emerged that the best K is equal to 63.
A migration, instead, represents a permanent change in the riskiness
of a real product, which is therefore not necessarily managed in
accordance with the assumptions provided by the model of the
automatic asset manager (see Section 3.2). In fact, the volatility of
a product actually existing on the market can move more or less
symmetrically with respect to the extremes of any of the intervals
provided by the optimal grid. In addition, being the result of an
optimisation process, these intervals are not dynamically updated

in relation to the pattern of the product's volatility (ie, they are not
path dependent).

Consequently, the violation of any of these intervals can be
considered persistent if it lasts for a sufficiently long period of time. In
particular, three months is an appropriate time reference since,
compared with the prediction intervals obtained from Garch diffusions,
the intervals of the optimal grid have a constant and non-adaptive
width (which increases the chances of exceeding their extremes),
but they are also wider, in order to allow a reasonable leeway in
handling the ordinary activity of an asset manager or ordinary and
temporary movements of the reference markets of the product.
Due to the consistency with the assumptions used in the calibration
of the optimal grid, the concept of migration given by
Definition 3.43 constantly ensures the timely updating of the information
on the degree of risk.

A time rule of less than three months could entail, all other things
being equal, an excessive number of migrations, many of them
spurious because not attributable to stable changes in the risk
profile of the product. The three-month rule thus helps to prevent
fictitious cases of instability of the degree of risk and, therefore, to
exclude an informative set that is barely reliable and of little use to
investors, and that could also create some difficulty for normal
asset-management activity. On the other hand, an observation period for
the volatility longer than three months would result in a tendency
to inertia in the measurement and representation of risk, again
invalidating the significance of the message conveyed to investors.
For similar reasons, in Definition 3.43 each data point is calculated
as the volatility of the daily returns realised over the last year
(ie, τ = 252). The adoption of a larger basis (eg, weekly or even
monthly) for calculating returns would produce a volatility smoothing,
that is, a reduction of its variability and the containment of this
variable within narrower margins. Similarly, a time window for
observing returns longer than one year would also favour a smoothing
of the data on volatility, because information on the recent dynamics
of the product's risk would be averaged (and thus partially
compensated) with information further back in time. The net effect would
be a delay in detecting cases of migration and, therefore, in updating
the degree of risk, with a systematic underestimation or
overestimation of the actual risk of products.
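The smoothing effect can be illustrated with a toy experiment (all numbers are illustrative): five quiet years at 10% annualised volatility followed by one turbulent year at 40%. At the end of the sample, the one-year volatility of daily returns has fully repriced the new regime, while the five-year volatility of weekly returns is still dragged down by the quiet past.

```python
import numpy as np

rng = np.random.default_rng(3)
Y = 250                    # trading days per year (divisible by 5 for weekly blocks)
# five quiet years at 10% annualised volatility, then one turbulent year at 40%
daily_sd = np.concatenate([np.full(5 * Y, 0.10), np.full(Y, 0.40)]) / np.sqrt(Y)
returns = rng.normal(0.0, daily_sd)

# (a) annualised volatility of daily returns over the last year (as in Definition 3.43)
fast = np.sqrt(Y * np.var(returns[-Y:]))

# (b) annualised volatility of weekly (5-day) returns over the last five years
weekly = returns.reshape(-1, 5).sum(axis=1)
slow = np.sqrt((Y / 5) * np.var(weekly[-Y:]))      # last 250 weeks = 5 years

print(round(fast, 2), round(slow, 2))
```

With these parameters the first measure comes out near 0.40 while the second lingers near 0.20: the long weekly window averages the turbulent year with four quiet ones, which is exactly the delay in detecting migrations described above.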

Figure 3.7 shows the migrations of the equity index Standard &
Poor's 500 that occurred over the period January 2001-January 2011,
as determined in line with Definition 3.43.
Figure 3.8 shows the migrations of the same equity index deter-
mined according to a grid different from the one reported in Fig-
ure 3.1 and thus compliant with neither the requirements of non-
abnormality and homogeneity of the management failures nor that
of an increasing absolute width of the volatility intervals (eg, the
fourth and the fifth intervals are equally wide). Each value of annu-
alised volatility is obtained from the weekly returns of the index over
the last five years. Migrations occur when the volatility lies outside
the original interval for four consecutive months.
The areas of different colours in the two figures indicate the inclusion
in a different risk class and, thus, every alternation of colours
corresponds to a migration event.

As expected, the two figures offer quite different representations
of the evolution of the index's risk profile over the analysed time
horizon.
Figure 3.7 emphasises that over the entire period the degree of risk
of the index has been remarkable: the volatility has always been in
the last two risk classes (sixth and seventh), which, as in Table 3.1, are
labelled respectively with the adjectives "high" and "very high". In
particular, there is a clear and direct correspondence between the
clusters of highly variable returns and the times at which the associated
annualised volatility reached its upward peaks. On the contrary,
when daily returns showed a fairly regular and essentially flat
pattern, the volatility values were lower (typically around 10%), even
if always consistent with a high degree of risk.

It is evident that the combination of the optimal grid with a
suitable rule for detecting migrations allows the prompt identification
of the alternation of different volatility regimes and, hence, of the
effective movements across qualitative classes, ensuring the
persisting meaningfulness of the information about the degree of risk.

Specifically, Figure 3.7 reveals that four migrations occurred over
a period of nearly 10 years, that is, on average, around two migrations
every five years, which is also in line with the regulatory practices
adopted in many countries that require an annual update of the
information provided in precontractual disclosure documentation.

Figure 3.7 Migrations of the Standard & Poor's 500 with respect to the
optimal grid (January 2001-January 2011): (a) index value; (b) daily
returns (%); (c) annualised volatility (%), shaded by risk class 5
(vol. 4-10%), risk class 6 (vol. 10-25%) and risk class 7 (vol. >25%).

Source: Datastream.

Figure 3.8 gives a quite different picture of the riskiness of
the considered index. The first thing that grabs attention is the
above-mentioned smoothing effect on the volatility data. The
non-optimality of the grid and of the rules used to calculate each volatility
value and to detect migrations significantly mitigates the values taken
by the volatility, which, indeed, over the period analysed, varied in a
range clearly quite narrow compared with that in Figure 3.7.
This smoothing effect is due both to the excessive number of years
(five) used for observing returns and to the adoption of a weekly
basis to compute the returns. These choices attenuate both the levels
of the volatility and its fluctuations, the latter being much flatter
than those exhibited by part (c) of Figure 3.7.

Moreover, it is worth noting that the volatility smoothing weakens
the strict correspondence between clusters of highly variable returns
and times of greater volatility, as well as between periods of basically
flat returns and times of lower volatility.


Figure 3.8 Migrations of the Standard & Poor's 500 with respect to a
non-optimal grid (January 2001-January 2011): (a) index value; (b) daily
returns (%); (c) annualised volatility (%), shaded by risk class 5
(vol. 10-15%), risk class 6 (vol. 15-25%) and risk class 7 (vol. >25%).

Source: Datastream.

In other words, there is a delay in the ability of the volatility data
to reflect periods of increased (or diminished) variability of returns,
so that in this framework migrations are signalled with a certain lag
with respect to the actual times to which they should refer.

Besides underestimating volatility, this framework also necessarily
produces a lower number of migrations: two transitions instead
of the four obtained with the first methodology. This can give
the appearance of a greater stability in the risk profile, even though
it does not correspond to the current market conditions (or, in the
case of non-equity products, to the current features of the financial
investment).

A last issue is related to the adoption of a grid of volatility intervals
which is completely different from the optimal grid resulting from
the calibration procedure described in this chapter.

A QUANTITATIVE FRAMEWORK TO ASSESS THE RISK–REWARD PROFILE OF NON-EQUITY PRODUCTS

Despite the fact that the two grids have the same number of risk
classes, the intervals mapped into these classes are quite heteroge-
neous. Focusing just on the fifth and sixth classes, the optimal grid
gives
5th class: 4–10%
6th class: 10–25%

while the grid behind Figure 3.8 gives

5th class: 10–15%
6th class: 15–25%
Recalling that the intervals of the above grid are identified by val-
ues of annualised volatilities of weekly returns, while the intervals of
the optimal grid are identified by values of annualised volatilities of
daily returns, it is interesting to observe that the upper extreme of the
fifth class of the optimal grid coincides with the lower extreme of the
same class in the alternative grid. This implies material differences
in the risk attribution and also in detecting migrations. Specifically,
the latter grid requires much higher volatilities than the former to
fit a product in the upper risk classes and it is reasonable to guess
that the first four classes are also identified according to a similar
criterion, which is likely to favour the concentration of the prod-
ucts in the first classes, namely those which represent relatively safe
investment proposals.
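The difference in risk attribution between the two grids can be made concrete with a short sketch. Only the fifth and sixth classes discussed above are encoded (the full grids contain seven intervals, so the restriction and the lookup helper are illustrative only):

```python
import bisect

# Annualised volatility interval edges for the 5th and 6th classes only
optimal_edges = [0.04, 0.10, 0.25]      # optimal grid: 5th class 4-10%, 6th class 10-25%
alternative_edges = [0.10, 0.15, 0.25]  # grid of Figure 3.8: 5th class 10-15%, 6th class 15-25%

def risk_class(vol, edges, first_class=5):
    # class whose interval [edges[i], edges[i+1]) contains vol
    i = bisect.bisect_right(edges, vol) - 1
    return first_class + i

vol = 0.12  # a product with 12% annualised volatility
print(risk_class(vol, optimal_edges))      # class 6 ("high") under the optimal grid
print(risk_class(vol, alternative_edges))  # class 5 ("medium-high") under the alternative grid
```

The same product is thus assigned one class lower by the alternative grid, consistent with its tendency to concentrate products in the safer classes.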

3.8 CLOSING REMARKS


The second pillar of the risk-based approach is the degree of risk,
which provides a synthetic indication of the overall riskiness of a
non-equity product along its recommended investment time hori-
zon. This indicator looks at the variability of the potential results
achievable at any intermediate time by using a meaningful and
straightforward risk indicator: the annualised volatility of the daily
returns. This risk metric is compared with a grid of increasing volatil-
ity intervals in order to determine the degree of risk of the product,
and this information is then conveyed to investors by mapping the
volatility figure into an ordered qualitative scale of risk classes with
a high signalling power.
From a technical point of view, the calibration of the grid
is the core of this pillar. Naive solutions not backed by robust
quantitative methodologies are discarded, and the focus is on
determining a grid whose volatility intervals have an increasing
absolute width to comply with the general principle: more risk,
more losses.
The optimal grid must also meet a market feasibility requirement
aimed at ensuring that the risk budget associated with each of its
intervals is actually sustainable on the market. This means that asset
managers committed to a specific risk budget realise a volatility
which, except in the case of sudden and significant shocks, is in
line with the reasonable expectations of the market about the future
volatility. The search for market feasibility must not be performed
separately for any single volatility interval. Rather, it is a
requirement which involves all the intervals of the grid at the same
time, since the width of any interval is strongly affected by that of
the adjacent intervals.
In order to determine a grid compliant with the market feasibility
requirement in the sense just recalled, the first step is to objectively
qualify when the volatility of a product is not in line with the expec-
tations of the market about the future volatility, since this signals the
occurrence of a management failure.
To this end, after having defined, in Section 3.2, a model for
the behaviour of an automatic asset manager that operates sym-
metrically over a given risk budget, the volatility of the potential
returns of a product managed by this asset manager is simulated
as described in Section 3.3. The volatility realised according to this
simulation procedure is then compared with the bounds of volatility
prediction intervals representative of the market's expectations and
obtained from the diffusion limit of a discrete M-Garch(1,1) model,
the latter being analytically derived in Section 3.4. As explained
in Section 3.5.1, if the annualised volatility of the product's daily
returns lies outside the corresponding prediction interval, then a
management failure occurs.
At this point, the optimal grid is obtained by requiring that the
incidence of management failures, with respect to the different
intervals of the grid itself, is both non-abnormal and homogeneous. In
particular, non-abnormality is verified if a volatility interval admits
a limited number of management failures; while homogeneity is ver-
ified if the incidence of failures is substantially the same across all
the intervals of the grid.
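As an illustration only (the thresholds are hypothetical, not those of the book's calibration procedure), the two properties can be sketched as a simple acceptance test on per-interval failure counts:

```python
def grid_is_acceptable(failures, n_trajectories, max_rate=0.05, tol=0.01):
    """Check non-abnormality and homogeneity of management failures.

    failures: for each volatility interval of the grid, the number of
    simulated trajectories flagged as management failures.
    max_rate and tol are hypothetical thresholds.
    """
    rates = [f / n_trajectories for f in failures]
    non_abnormal = all(r <= max_rate for r in rates)   # limited failures per interval
    homogeneous = max(rates) - min(rates) <= tol       # similar incidence across intervals
    return non_abnormal and homogeneous

print(grid_is_acceptable([30, 28, 33, 31, 29], 1000))  # prints True: both properties hold
print(grid_is_acceptable([30, 90, 33, 31, 29], 1000))  # prints False: second interval abnormal
```

A candidate grid failing either check would be revised, consistent with the joint nature of the market feasibility requirement described above.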


These two properties can always be satisfied if the optimal grid
has to partition only a reduced volatility space (Section 3.5.3).
Moving to the full volatility space, the same properties can be pre-
served for all intervals other than the first and last, whose extremes
are therefore defined according to the exogenous criteria illustrated
in Section 3.5.4. The final step of this calibration methodology iden-
tifies the optimal number of intervals in the grid, consistent with the
target of minimising the incidence of management failures for the
intermediate intervals and with the need of ensuring the represen-
tativeness and the meaningfulness of the information provided by
the degree of risk.
The resulting optimal grid divides the full space of the possible
volatilities into seven increasing intervals which correspond to an
equal number of qualitative risk classes: very low, low, medium-low,
medium, medium-high, high and very high. This allows the
degree of risk to be determined for any non-equity product (see
Section 3.6) and this information to be updated promptly in the face
of a migration to a different risk class that could occur at any date
during the recommended investment time horizon (see Section 3.7).
Hence, exactly like the first pillar, the second pillar of the
risk-based approach described in this chapter is strongly related,
although in a different way, to the holding period to be recom-
mended to the investor. In fact, as pointed out in Chapter 2, the
first pillar takes a snapshot of the performance risk of the non-equity
product at two specific points in time which correspond to the begin-
ning and the end of the recommended investment time horizon,
while the degree of risk reflects the riskiness of the product along the
entire time interval identified by this horizon.
The next chapter will be devoted to the issue of developing a
quantitative methodology to determine the recommended invest-
ment time horizon by taking into account the financial structure of
the non-equity product and the related profiles of risk and costs.

1 The recommended investment time horizon is the third pillar of the risk-based approach and
the methodology for its determination will be presented in Chapter 4.
2 This issue is addressed explicitly for the second pillar since, for the other two pillars of the risk-
based approach, the criteria according to which one can determine whether an information
update is necessary are clear.
3 The value of v which satisfies Equation 3.4 is determined numerically through a root solving
routine available in most statistical software.


4 The risk-free rate is assumed to be constant. It is worth pointing out that, by using stochastic
models of the term structure of interest rates, the optimal grid does not undergo significant
changes.
5 For the sake of simplicity no correlation is assumed, ie, $\rho = 0$. However, specific sensitivity
analysis highlighted that the outcome of the calibration procedure is almost invariant with
respect to the value assigned to this parameter.
6 In particular, the use of overly long datasets means that the latest information on develop-
ments in the riskiness of the products are averaged with those further back in time, resulting
in a smoothing of the volatility value. For more details about this see Section 3.7.
7 It is an additive model in the logarithm of the variance. See Geweke (1986), Pantula (1986)
and Milhøj (1987).
8 A similar approach inspired the model for market abuse detection developed by Minenna
(2003).
9 The subscripts denote that the Markov chain moves from k with a time interval of 1.

10 By Equation 3.12, $\ln\sigma^2_{k+1}$ is $\mathcal{F}_k$-measurable, which requires conditioning with respect to the
sigma-algebra $\mathcal{F}_{k-1}$.
11 The subscripts denote that the Markov chain moves from kh with a time interval of h.

12 By Equation 3.14, $\ln\sigma^2_{(k+1)h}$ is $\mathcal{F}_{kh}$-measurable, which requires conditioning with respect to
the sigma-algebra $\mathcal{F}_{(k-1)h}$.

13 The subscripts denote that the Markov chain moves from $s$ with a time interval of $h$.
14 As $\ln {}^h\sigma^2_t$ is $\mathcal{F}^h_s$-measurable, it is necessary to condition it with respect to the sigma-algebra
$\mathcal{F}^h_{s-h}$.

15 Unlike the model for the simulation of the hypothetical product managed by an automatic
asset manager which is defined under the risk-neutral measure Q, the prediction made using
the diffusion limit of the M-Garch(1,1) is, by construction, a forecast issued under the real-
world measure, ie, the measure under which each simulated trajectory of volatility is consid-
ered when it becomes the trend realised by the past volatility, with respect to which predictions
on the future values of this variable have to be made.
16 In equivalent terms, the probability measure $P^{\ln\sigma^2_t}$ is the only solution to the martingale
problem defined by the coefficients $(\alpha_0 + 2\alpha_1 E(\ln|Z_t|) + (\beta_1 - 1)\ln\sigma^2_t)$ and $4\alpha_1^2\,\mathrm{var}(\ln|Z_t|)$
and by the initial condition $\ln\sigma^2_0 = l_0$. For more details see Stroock and Varadhan (1979).

17 As pointed out by Nelson (1990), the weak convergence implies, for example, that, given any
sequence of times $t_1, t_2, \ldots, t_N > 0$, the joint probability distribution of
$$\{\ln {}^h\sigma^2_{t_1}, \ln {}^h\sigma^2_{t_2}, \ldots, \ln {}^h\sigma^2_{t_N}\}$$
converges for $h \to 0$ to the joint probability distribution of
$$\{\ln \sigma^2_{t_1}, \ln \sigma^2_{t_2}, \ldots, \ln \sigma^2_{t_N}\}$$
More generally, the weak convergence implies that if $f(\cdot)$ is a continuous functional of the
sample trajectory of $\ln {}^h\sigma^2_t$, then $f(\ln {}^h\sigma^2_t)$ converges in distribution to $f(\ln\sigma^2_t)$ for $h \to 0$.
18 A different way to obtain the same weak convergence result is based on the analytical
derivation of the diffusion limit of augmented Garch processes (Duan 1997).
19 As the process $\ln {}^h\sigma^2_{t+h}$ is $\mathcal{F}^h_t$-measurable, the expected values appearing in the right-hand sides
of Equations 3.33–3.35 are conditioned to the sigma-algebra $\mathcal{F}^h_{t-h}$.
20 H is the total number of observations of annualised volatility obtained for each simulated
trajectory and is equal to $252(T-1)+1$, ie, for $T = 2$ years, $H = 253$ (see Section 3.3).
21 It is worth remarking that only by abandoning stochastic volatility (ie, using, in the simula-
tions, a geometric Brownian motion with constant coefficients) are there admissible values
of K at which the selected confidence level is respected. The transition to the stochastic
volatility model referred to in Section 3.3 results in a more or less significant violation of
the chosen confidence level. The numerical tests showed that, across all eligible Ks (namely
21, 42, 63, . . . , 126) such violations are minimal for K = 63.
22 More specifically, the revision of the first or the last interval influences only one other interval
of the grid, while interventions on any of the intermediate intervals necessarily affect two
other intervals.
23 The proof that if Equation 3.192 holds then Equation 3.191 must be an equality is immediate.

24 This is due to the fact that, from Equation 3.188, > 1.

25 For any $n$, the relative width of the first interval is $\sigma_1/0 = +\infty$ and the relative width of the
last interval is $+\infty/\sigma_{n-1} = +\infty$.
26 The one-year time horizon for the calculation of the loss is due to the fact that the intervals of
the optimal grid are expressed in terms of annualised volatility.
27 The numbers in this figure have been rounded.

28 As seen in Section 3.5.4, these requirements only belong to the intermediate intervals and not
the first and seventh volatility intervals, which, compared with the others, are less exposed to
migration risk because the volatility can vary only upwards (in the case of the first interval)
and downwards (in the case of the last interval).


The Third Pillar: Recommended
Investment Time Horizon

The third pillar of the risk-based approach is the recommended
investment time horizon, which represents a recommendation on
the holding period of the non-equity product, formulated in relation
to its specific financial structure in order to ensure the consistency
of this synthetic indicator with the first two pillars of the approach.
As the determination of the recommended time horizon is affected
by the financial structure and particular features of the product in
question, these must be classified into one of three types of product:
return-target, risk-target or benchmark.
Return-target products and products backed by a financial guar-
antee (also when exhibiting risk-target or benchmark structures)
feature a financial engineering that articulates the cashflows and
investment risks in accordance with a precise structure, which
is spread over a specific period of time. It follows that for these
products the recommended investment time horizon is that implicit
in their financial engineering, as it is only over this period that the
peculiar risk–return profile of the non-equity investment is fully
defined. In fact, the engineering of these products (and sometimes
also their asset-management techniques) is aimed at pursuing, for
a given date, a predetermined result or a result which is dynamically
updated over time. Therefore, this implicit time horizon necessarily
becomes the reference maturity used to perform the calculations
required to determine the other two pillars that, together with the
recommended time horizon, effect an integrated representation of
the risks of these products. With regard to the first pillar, the implicit
time horizon represents the bridge between the assessments of the
performance risk of the product made at the issue date, through
the financial investment table, and at maturity, through the table of
probabilistic scenarios. Not surprisingly, by taking the final values
of the investment corresponding to each of the four scenarios
and discounting them for a period equal to this time horizon, the
average of these discounted values is a good proxy for the fair value
of the product. With regard to the second pillar, the same implicit
horizon identifies the period over which the annualised volatility
of the potential daily returns of the product has to be simulated in
order to properly define its degree of risk.
Unlike return-target products, unguaranteed risk-target or bench-
mark products rely on simpler financial engineering solutions. The
mechanisms underlying the product could continue working indefi-
nitely without obeying any predetermined algorithm that rebalances
the portfolio of assets composing the product in order to achieve
some target return at a specific point in time. Consequently, in order
to determine the recommended investment time horizon for these
products, an exogenous criterion must be introduced that, framed
in a suitable quantitative setting, returns information of concrete
interest for the average investor.
The exogenous criterion adopted in the risk-based approach con-
siders the time at which all costs incurred will be reasonably recov-
ered, taking into account the riskiness of the investment. Besides
being intrinsically prudential, the idea of looking at the moment of
the cost recovery is somewhat equivalent to focusing on the time at
which investors will achieve a target return of zero. In this approach,
the exogenous criterion of cost recovery qualifies the recommended
investment horizon of risk-target and benchmark products accord-
ing to a logic which is related to the rule that identifies the time hori-
zon of return-target products. Moreover, from the investors point
of view, an indicator based on the time at which costs break even
is very useful for understanding whether a certain product is or is
not able to amortise the costs incurred in a period of time not longer
than the one matching their liquidity attitudes.
This chapter illustrates in detail the methodology developed to
objectively determine a recommended time horizon within which
cost recovery is achieved in line with an appropriate probabilistic
characterisation. This characterisation allows for consistency with a
few key principles mainly aimed at ensuring a correct ordering of
the products in terms of the corresponding recommended time hori-
zons, with respect to their riskiness, the latter measured by using the
same metric as the second pillar, namely the volatility. It will turn
out that the recommended investment time horizon corresponds
to a well-defined minimum point in the domain of the cumulative
probability distribution of the first times the stochastic process of
the product's value hits a barrier set at the level of cost recovery.
Intuitively, for any given cost regime, the shape of this cumulative
distribution depends on the products volatility, so that, thanks to
proper methodological assumptions and to its properties of min-
imum time, the recommended investment time horizon is a non-
decreasing function of the volatility, and, thus, it enables the correct
ordering of different products.
As mentioned above, in general this methodology does not apply
to return-target products because, in most cases, they are already
characterised by an implicit investment time horizon. In this regard,
however, the last part of the chapter shows how this implicit horizon
represents a mandatory time reference only if the product is illiquid
and, therefore, any possibility of closing the position before matu-
rity is precluded ex ante. Conversely, if the product is assisted by
specific liquidity or liquidability provisions that allow an early exit
from the investment, the information about the time horizon implicit
in the structure can be usefully supplemented by the indication of
the minimum investment time horizon to be determined in a logic
of cost recovery and, therefore, using a methodology close to that
implemented for risk-target and benchmark products.

4.1 THE MINIMUM TIME HORIZON FOR RISK-TARGET AND
BENCHMARK PRODUCTS
Risk-target and benchmark products are by construction time-vary-
ing portfolios of assets whose revision, although unknown ex ante,
is not intended to achieve a target return at a particular time. These
types of non-equity products lack a boundary condition that is effec-
tively binding at a given future date which unambiguously identi-
fies the time horizon to be recommended to the investors. In fact,
products pursuing a risk target or managed with reference to a spe-
cific market segment (identified by a benchmark) usually exhibit lin-
ear payoff structures that can be suitably represented through com-
mon stochastic differential equations, such as the geometric Brown-
ian motion, which may be slightly revised (in the case of bench-
mark products) to reflect features connected to their peculiar
management style.


In line with the above considerations, the general framework in
which the methodology is developed to determine the minimum
recommended time horizon for these products considers a geomet-
ric Brownian motion with constant coefficients. In these controlled
conditions, it is possible to derive closed formulas to approach the
problem of interest and to correctly assess the sensitivities of these
formulas with respect to the variables involved.
Formally, the dynamics of the product's value over time can be
defined as follows.

Definition 4.1. Under the risk-neutral measure $Q$ the dynamics over
time of a non-equity product with issue price equal to the notional
capital (NC) are denoted by the stochastic process $\{S_t\}_{t\geq 0}$ and are
described by the following stochastic differential equation

$$dS_t = (r - rc)S_t\,dt + \sigma S_t\,dW_t \quad (4.1)$$

whose initial condition $S_0$ is defined as

$$S_0 = NC - ic \quad (4.2)$$

and where

$ic > 0$ are the initial costs charged,
$r$ is the risk-free rate,
$rc$ denotes the constant running costs taken on a continuous-time basis,
$\sigma > 0$ is the volatility of the product,
$\{W_t\}_{t\geq 0}$ is a standard Brownian motion under $Q$.

Remark 4.2. The solution of the stochastic differential equation 4.1 is

$$S_t = S_0 \exp((r - rc - \tfrac{1}{2}\sigma^2)t + \sigma W_t) \quad (4.3)$$
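A Monte Carlo sketch of Definition 4.1 (all parameter values hypothetical) can confirm Remark 4.2 numerically: simulating $S_T$ through the exact solution 4.3 and checking that the sample mean matches $S_0 e^{(r-rc)T}$, as implied by the risk-neutral drift net of running costs:

```python
import numpy as np

# Hypothetical parameters: notional capital, initial costs, rates, volatility
NC, ic = 100.0, 2.0
r, rc, sigma = 0.035, 0.005, 0.25
S0 = NC - ic                      # Equation 4.2
T, n = 1.0, 100_000               # horizon (years) and number of trajectories

rng = np.random.default_rng(1)
WT = rng.normal(0.0, np.sqrt(T), size=n)                       # W_T ~ N(0, T)
ST = S0 * np.exp((r - rc - 0.5 * sigma**2) * T + sigma * WT)   # Equation 4.3

# Under Q, E[S_T] = S0 * exp((r - rc) * T): the risk-free drift
# lowered by the running costs
print(ST.mean(), S0 * np.exp((r - rc) * T))
```

Because Equation 4.3 is exact, no time discretisation is needed to sample the terminal value.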
Having defined the model that describes the dynamics of the
product's value, the identification of a probabilistic criterion for
the determination of a recommended investment time horizon
connected with the event of cost recovery requires the fulfilment of
some general intuitive principles of coherence.
1. An increase in the cost level must always be related to a cor-
responding increase in the time horizon, the other variables
being fixed.


2. An increase in the volatility level must always be related to a
corresponding increase in the time horizon, the other variables
being fixed.
3. The cost recovery must be evaluated at any point in time and
not only at a particular date.

Principle 1 is a strict minimum requirement in order to convey
coherent information to the investors.
Principle 2 is somewhat related to the idea that the riskier the
product the longer the minimum time to be spent in the financial
investment in order to achieve a particular expected return (ie, more
volatility, more time). It is worth noting that this principle plays
a key role in developing the full methodology, as it clearly entails
that the recommended time horizons of products exhibiting different
volatilities must be properly ordered.
Principle 3 is a reasonable request due to the general assumption
that these products are perfectly liquid at any point in time, so that,
in theory, their costs may be recovered at any time since inception.1
The dynamics assumed by Definition 4.1 allow the event of cost
recovery and the associated time of occurrence to be approached
according to two alternative probabilistic characterisations. The first
is a strong characterisation which, as explained in Section 4.1.1,
requires the choice of a probability level deemed sufficiently
significant and the determination of the recommended time horizon as
the time at which a corresponding percentage of the trajectories of
the process $\{S_t\}_{t\geq 0}$ are equal to or above a barrier which is
representative of the break-even cost. However, it will be shown that this
characterisation has several serious drawbacks and is not compatible
with the three principles stated above, hence not allowing a
correct sorting of products with different volatilities.
The second characterisation, as explained in Section 4.1.2, leads to
the determination of the recommended time horizon as the first time
within which a certain number of trajectories have reached the barrier
of cost recovery at least once; hence, it is weaker than the first
characterisation, but it is the only one able to produce time horizons
consistent with the principles above. In fact, it will be shown that, by
studying the cumulative probability distribution of the first-passage
times of the process $\{S_t\}_{t\geq 0}$ for the barrier, fundamental results are


obtained that allow the time horizon and the associated level of
significance (expressed in terms of cumulated probability) to be
ascertained as minimum values that depend on the cost regime and on
the volatility of any product and that satisfy the principle of more
volatility, more time.
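The first-passage quantities underlying this weak characterisation can be explored by simulation. The sketch below (daily grid, all parameters hypothetical) builds the empirical cumulative distribution of the first times the process $\{S_t\}$ of Definition 4.1 touches the cost-recovery barrier $NC$:

```python
import numpy as np

NC, ic = 100.0, 2.0             # hypothetical notional capital and initial costs
r, rc, sigma = 0.035, 0.0, 0.15
T, steps, n = 5.0, 1260, 5_000  # 5 years on a daily grid, 5,000 trajectories
dt = T / steps
rng = np.random.default_rng(2)

S0 = NC - ic
z = rng.normal(size=(n, steps))
# log-price paths from the exact solution of Equation 4.1 sampled on the grid
logS = np.log(S0) + np.cumsum((r - rc - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1)

crossed = logS >= np.log(NC)
ever = crossed.any(axis=1)
first_idx = np.where(ever, crossed.argmax(axis=1), steps)
tau = (first_idx + 1) * dt      # first-passage times; > T when the barrier is never hit

def fp_cdf(t):
    # empirical Q(tau <= t): fraction of trajectories that recovered costs by time t
    return float(np.mean(tau <= t))

print(fp_cdf(1.0), fp_cdf(4.0))
```

By construction `fp_cdf` is non-decreasing in `t`; the recommended horizon is then read off this distribution as a minimum point, under the methodological assumptions detailed in Section 4.1.2.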

4.1.1 The strong characterisation of the cost-recovery event


Given the stochastic differential equation 4.1 that governs the
dynamics of the product over time, a natural way to model the
cost recovery is to study the probability that the product will be
above or equal to a certain barrier NC, starting from a level S0 < NC
that implicitly embeds the initial charges applied to the product. In
formal terms this probability is given by

$$Q(S_t \geq NC) \quad (4.4)$$

Substituting the explicit solution of Equation 4.3 into Equation 4.4,
it is straightforward to obtain a closed formula for the above
probability, as stated in the following proposition.

Proposition 4.3. The following equality holds

$$Q(S_t \geq NC) = N\left[\frac{(r - rc - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right] \quad (4.5)$$

where $N[\,\cdot\,]$ is the cumulative probability distribution of a standard
normal random variable.

Proof. By using Equation 4.3 it follows that

$$Q(S_t \geq NC) = Q[S_0 \exp((r - rc - \tfrac{1}{2}\sigma^2)t + \sigma W_t) \geq NC] \quad (4.6)$$

dividing by $S_0$ and taking the logarithm

$$Q(S_t \geq NC) = Q\left[\ln[\exp((r - rc - \tfrac{1}{2}\sigma^2)t + \sigma W_t)] \geq \ln\frac{NC}{S_0}\right]$$

properly rearranging

$$Q(S_t \geq NC) = Q\left[W_t \geq \frac{\ln[NC/S_0] - (r - rc - \frac{1}{2}\sigma^2)t}{\sigma}\right]$$

for the symmetry of the normal random variable

$$Q(S_t \geq NC) = Q\left[W_t \leq \frac{(r - rc - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma}\right]$$

As $W_t \sim N(0, t)$, standardising yields

$$Q(S_t \geq NC) = N\left[\frac{(r - rc - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right] \quad (4.5)$$

This approach leads to a strong characterisation of the cost-recovery
event, since it is certain that any time $t$ is associated with
a given percentage of trajectories that will be above the cost bar-
rier NC. In this context, it seems reasonable to try to determine the
recommended investment time horizon by fixing a target level of
probability $\alpha$ to attain and then recovering the time $\bar{t}$ corresponding
to that probability, ie

$$\bar{t} : Q(S_{\bar{t}} \geq NC) = \alpha \quad (4.7)$$
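Equation 4.5 can be evaluated in closed form, and Equation 4.7 solved by simple bisection. The sketch below (all parameter values hypothetical) assumes $r - rc - \frac{1}{2}\sigma^2 > 0$, so that the probability is increasing in $t$ and the root is unique:

```python
from math import log, sqrt, erf

def norm_cdf(x):
    # standard normal cumulative distribution N[x]
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def prob_above_nc(t, NC=100.0, ic=2.0, r=0.035, rc=0.0, sigma=0.15):
    # Equation 4.5: Q(S_t >= NC); default parameters are hypothetical
    S0 = NC - ic
    mu = r - rc - 0.5 * sigma**2
    return norm_cdf((mu * t - log(NC / S0)) / (sigma * sqrt(t)))

def t_bar(alpha, lo=1e-4, hi=100.0):
    # bisection for Equation 4.7; assumes Q(S_t >= NC) is increasing on [lo, hi],
    # which holds here because r - rc - sigma^2/2 > 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if prob_above_nc(mid) < alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# time at which half of the trajectories are above the cost barrier
print(t_bar(0.5))  # about 0.85 years with these parameters
```

With drift $\mu = r - rc - \frac{1}{2}\sigma^2$, the median crossing time is simply $\ln[NC/S_0]/\mu$, which the bisection recovers.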

Despite its simplicity, this method presents some serious drawbacks
from both the theoretical and the analytical sides.
From the theoretical point of view, the strong characterisation
of the cost-recovery event, by definition, does not consider what
happens during the time interval $[0, \bar{t}[$; in fact, a trajectory that is
considerably beyond the cost-recovery barrier at time $\bar{t}$ may well
show a negative return along the interval $[0, \bar{t}[$. In substance, this
criterion is blind to the overall dynamics of the product, which is
supposed to be liquid by principle 3 of Section 4.1 and, hence, can
be sold at any time at market price by the investor. Another way
to describe this phenomenon is that, adopting the strong
characterisation of cost recovery, a lock-in of the liquidity up to the time $\bar{t}$ is
implicitly assumed.
From an analytical point of view, this characterisation exhibits
non-homogeneous behaviour with respect to both the volatility and
the time. In fact, a simple investigation of the formula 4.5 highlights
that when $\sigma$ increases, the overall probability that is achievable
always tends to decrease for sufficiently long times; this implies that, for
each fixed level of volatility corresponding by hypothesis to a specific
product, the behaviour of the cumulative probability distribution
with respect to the time changes drastically. Moreover, when
$t$ becomes very large, the probability of cost recovery is subject to
a regime change depending on the relationship between drift and
volatility, as summarised by the quantity $(r - rc - \frac{1}{2}\sigma^2)$. In view of the
previous finding, this phenomenon is exacerbated when the volatility
assumes significant but reasonable values (eg, of the order of
30%); in these cases, the probability decreases very quickly even for
very short times.
Both of these findings are formalised in the following proposition.

Proposition 4.4. The following limit representations hold

$$\begin{aligned}
&\lim_{\sigma\to\infty} Q(S_t \geq NC) = 0\\
&\lim_{t\to 0} Q(S_t \geq NC) = 0\\
&\lim_{t\to\infty} Q(S_t \geq NC) = 1 &&\text{for } r - rc - \tfrac{1}{2}\sigma^2 > 0\\
&\lim_{t\to\infty} Q(S_t \geq NC) = \tfrac{1}{2} &&\text{for } r - rc - \tfrac{1}{2}\sigma^2 = 0\\
&\lim_{t\to\infty} Q(S_t \geq NC) = 0 &&\text{for } r - rc - \tfrac{1}{2}\sigma^2 < 0
\end{aligned} \quad (4.8)$$

Proof. By direct calculation

$$\lim_{\sigma\to\infty} Q(S_t \geq NC) = \lim_{\sigma\to\infty} N\left[\frac{(r - rc - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right] = N[-\infty] \quad (4.9)$$

Hence

$$\lim_{\sigma\to\infty} Q(S_t \geq NC) = 0 \quad (4.10)$$

By direct calculation

$$\lim_{t\to 0} Q(S_t \geq NC) = \lim_{t\to 0} N\left[\frac{(r - rc - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right] = N[-\infty] \quad (4.11)$$

Hence

$$\lim_{t\to 0} Q(S_t \geq NC) = 0 \quad (4.12)$$

By direct calculation

$$\lim_{t\to\infty} Q(S_t \geq NC) = \lim_{t\to\infty} N\left[\frac{(r - rc - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right] = \lim_{t\to\infty} N\left[\frac{(r - rc - \frac{1}{2}\sigma^2)\sqrt{t}}{\sigma}\right]$$

What happens to the limit depends on the sign of the quantity
$(r - rc - \tfrac{1}{2}\sigma^2)$.


Figure 4.1 Plot of the function $Q(S_t \geq NC)$ with respect to time $t$
(ic = 2%, r − rc = 3.5%), for volatility levels $\sigma$ = 5%, 15%, 25%, 35%
and 45%.

If $r - rc - \frac{1}{2}\sigma^2 > 0$, then
$$\lim_{t\to\infty} Q(S_t \geq NC) = N[+\infty]$$
So
$$\lim_{t\to\infty} Q(S_t \geq NC) = 1 \quad\text{for } r - rc - \tfrac{1}{2}\sigma^2 > 0 \quad (4.13)$$

If $r - rc - \frac{1}{2}\sigma^2 = 0$, then
$$\lim_{t\to\infty} Q(S_t \geq NC) = N[0]$$
So
$$\lim_{t\to\infty} Q(S_t \geq NC) = \tfrac{1}{2} \quad\text{for } r - rc - \tfrac{1}{2}\sigma^2 = 0 \quad (4.14)$$

If $r - rc - \frac{1}{2}\sigma^2 < 0$, then
$$\lim_{t\to\infty} Q(S_t \geq NC) = N[-\infty]$$
So
$$\lim_{t\to\infty} Q(S_t \geq NC) = 0 \quad\text{for } r - rc - \tfrac{1}{2}\sigma^2 < 0 \quad (4.15)$$

Figure 4.1 reproduces these analytical results. In fact, the blue,
dashed green and orange lines, with levels of volatility for which
$r - rc - \frac{1}{2}\sigma^2 > 0$, clearly show a convergence to 1. The sky-blue and
purple lines, with levels of volatility for which $r - rc - \frac{1}{2}\sigma^2 < 0$,
clearly tend to 0, with an increasing speed of convergence.
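The limit regimes of Proposition 4.4 can be checked numerically against Equation 4.5. This standalone sketch uses the parameters of Figure 4.1 (ic = 2%, r − rc = 3.5%); the large but finite values of $t$ stand in for the analytical limits:

```python
from math import log, sqrt, erf

def prob(t, sigma, NC=100.0, ic=2.0, r=0.035, rc=0.0):
    # Equation 4.5: Q(S_t >= NC) under the strong characterisation
    S0 = NC - ic
    mu = r - rc - 0.5 * sigma**2
    return 0.5 * (1.0 + erf((mu * t - log(NC / S0)) / (sigma * sqrt(2.0 * t))))

# r - rc - sigma^2/2 > 0 (eg, sigma = 15%): probability tends to 1
print(prob(10_000.0, 0.15))
# r - rc - sigma^2/2 = 0 (sigma = sqrt(2(r - rc))): probability tends to 1/2
print(prob(1e6, sqrt(2 * 0.035)))
# r - rc - sigma^2/2 < 0 (eg, sigma = 45%): probability tends to 0
print(prob(10_000.0, 0.45))
# t -> 0: probability tends to 0 for any sigma
print(prob(1e-8, 0.25))
```

The three long-run regimes thus depend only on the sign of $r - rc - \frac{1}{2}\sigma^2$, as the proposition states.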


Such irregular behaviour makes it very difficult to identify a
coherent criterion with respect to the key variables $t$ and $\sigma$. The idea
of fixing a predetermined confidence level does not appear feasible,
since a product may not even achieve the fixed level, depending
on the values taken by the set of parameters that characterise the
product itself.
Moreover, as proved by the following proposition and corollary,
the study of the sensitivities, carried out to assess the possibility of
determining the recommended time horizon as the time that maximises
the probability of the cost-recovery event (in the strong sense), does
not yield positive results in terms of a coherent, or at least useful, pattern.

Proposition 4.5. The following explicit characterisation for the first
partial derivative of the cumulative probability, as expressed in the
right-hand side of Equation 4.5, with respect to $t$ holds

$$\frac{\partial}{\partial t} N\left[\frac{(r - rc - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right]
= N'\left[\frac{(r - rc - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right]
\left(\frac{r - rc - \frac{1}{2}\sigma^2}{2\sigma\sqrt{t}} + \frac{\ln[NC/S_0]}{2\sigma\sqrt{t^3}}\right) \quad (4.16)$$

where $N'[\,\cdot\,]$ denotes the probability density of a standard normal
random variable.

Proof By direct calculation

 1 
(r rc 2 2 )t ln[NC /S0 ]
N
t t
 
(r rc 12 2 )t ln[NC /S0 ]
= N
t
 1 
(r rc 2 2 )t ln[NC /S0 ]

t t
 1 2 
(r rc 2 )t ln[NC /S0 ]
= N
t
 
r rc ln[NC /S0 ]
+ (4.17)
2 t 4 t 2 t 3

THE THIRD PILLAR: RECOMMENDED INVESTMENT TIME HORIZON

Eventually, it follows that
$$\frac{\partial}{\partial t} N\left[\frac{(r - rc - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right] = N'\left[\frac{(r - rc - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right]\left(\frac{r - rc - \frac{1}{2}\sigma^2}{2\sigma\sqrt{t}} + \frac{\ln[NC/S_0]}{2\sigma\sqrt{t^3}}\right) \qquad (4.16)$$

Corollary 4.6. The sign of the first partial derivative of the cumulative probability, as expressed in the right-hand side of Equation 4.5, with respect to $t$ is characterised as follows
$$\frac{\partial}{\partial t} N\left[\frac{(r - rc - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right] > 0 \quad \text{for } (r - rc - \tfrac{1}{2}\sigma^2) \geq 0 \qquad (4.18)$$
$$\frac{\partial}{\partial t} N\left[\frac{(r - rc - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right] > 0 \quad \text{for } r - rc - \tfrac{1}{2}\sigma^2 < 0,\ t < -\frac{\ln[NC/S_0]}{r - rc - \frac{1}{2}\sigma^2} \qquad (4.19)$$
$$\frac{\partial}{\partial t} N\left[\frac{(r - rc - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right] \leq 0 \quad \text{for } r - rc - \tfrac{1}{2}\sigma^2 < 0,\ t \geq -\frac{\ln[NC/S_0]}{r - rc - \frac{1}{2}\sigma^2} \qquad (4.20)$$

Proof Expression 4.16 is recovered, ie
$$\frac{\partial}{\partial t} N\left[\frac{(r - rc - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right] = N'\left[\frac{(r - rc - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right]\left(\frac{r - rc - \frac{1}{2}\sigma^2}{2\sigma\sqrt{t}} + \frac{\ln[NC/S_0]}{2\sigma\sqrt{t^3}}\right) \qquad (4.16)$$
Since
$$N'\left[\frac{(r - rc - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right] > 0$$


by construction, the sign of the above expression is determined by the sign of the following quantity
$$\frac{r - rc - \frac{1}{2}\sigma^2}{2\sigma\sqrt{t}} + \frac{\ln[NC/S_0]}{2\sigma\sqrt{t^3}} \qquad (4.21)$$
Factorising out $1/(2\sigma\sqrt{t}) > 0$, the quantity to study becomes
$$r - rc - \tfrac{1}{2}\sigma^2 + \frac{\ln[NC/S_0]}{t} \qquad (4.22)$$
so that
$$\frac{\partial}{\partial t} N\left[\frac{(r - rc - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right] > 0 \quad \text{if } r - rc - \tfrac{1}{2}\sigma^2 + \frac{\ln[NC/S_0]}{t} > 0 \qquad (4.23)$$
$$\frac{\partial}{\partial t} N\left[\frac{(r - rc - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right] \leq 0 \quad \text{if } r - rc - \tfrac{1}{2}\sigma^2 + \frac{\ln[NC/S_0]}{t} \leq 0 \qquad (4.24)$$
Since by definition $S_0 < NC$ and, thus, $\ln[NC/S_0] > 0$, the quantity $\ln[NC/S_0]/t$ is always greater than zero. So, if $(r - rc - \frac{1}{2}\sigma^2) \geq 0$, this suffices to show that the first partial derivative is always positive for every $t$.
If $r - rc - \frac{1}{2}\sigma^2 < 0$, then the sign of the overall quantity 4.16 depends also on $t$.
Hence,
$$r - rc - \tfrac{1}{2}\sigma^2 + \frac{\ln[NC/S_0]}{t} > 0 \quad \text{if } r - rc - \tfrac{1}{2}\sigma^2 < 0,\ t < -\frac{\ln[NC/S_0]}{r - rc - \frac{1}{2}\sigma^2}$$
Conversely, it holds that
$$r - rc - \tfrac{1}{2}\sigma^2 + \frac{\ln[NC/S_0]}{t} \leq 0 \quad \text{if } r - rc - \tfrac{1}{2}\sigma^2 < 0,\ t \geq -\frac{\ln[NC/S_0]}{r - rc - \frac{1}{2}\sigma^2}$$


Corollary 4.6 confirms the non-coherent behaviour of the strong characterisation with respect to both time and volatility. In fact, where $(r - rc - \frac{1}{2}\sigma^2) \geq 0$ the probability is monotonically increasing with respect to time, so that a maximum does not exist; conversely, if $r - rc - \frac{1}{2}\sigma^2 < 0$, a time that maximises the probability of cost recovery in the strong sense does exist, but it tends to decrease quickly to negligible levels when the volatility rises, and this behaviour is inconsistent with the principle of monotonicity with respect to volatility (principle 2) stated in Section 4.1.
Recalling also the above-mentioned theoretical drawback and, hence, the blindness of the strong characterisation with respect to principle 3 of Section 4.1, it is clear that this characterisation does not model the event of cost recovery in a way that allows the general requirements for the recommended investment time horizon to be met.
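The corollary's threshold can be illustrated with a short sketch (parameters as in Figure 4.1, $ic$ = 2%, $r - rc$ = 3.5%; names are illustrative). For $r - rc - \frac{1}{2}\sigma^2 < 0$, Equations 4.19-4.20 locate the maximising time at $t^\ast = -\ln[NC/S_0]/(r - rc - \frac{1}{2}\sigma^2)$, which collapses quickly as volatility grows:

```python
from math import log

def t_star(sigma: float, drift: float = 0.035,
           ic: float = 0.02, nc: float = 1.0) -> float:
    """Time maximising the strong-sense cost-recovery probability
    (threshold of Equations 4.19-4.20); defined only for r - rc - sigma^2/2 < 0."""
    s0 = nc - ic
    a = drift - 0.5 * sigma ** 2
    if a >= 0:
        # probability is monotonically increasing in t: no interior maximum
        raise ValueError("no interior maximum exists")
    return -log(nc / s0) / a

for sigma in (0.30, 0.45, 0.60):
    print(sigma, t_star(sigma))   # the maximising time shrinks as sigma rises
```

With these parameters, moving from $\sigma$ = 30% to $\sigma$ = 45% shrinks the maximising time from about two years to under four months, the opposite of the "more volatility, more time" principle.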

4.1.2 The weak characterisation of the cost-recovery event


This section presents an alternative probabilistic characterisation of
the cost-recovery event that will be proved to be compatible with the
general requirements expressed by the three principles in Section 4.1.
This alternative characterisation, too, will rule out approaches that try to determine the recommended time horizon by fixing a target probability level to be attained. However, unlike the strong characterisation, it will allow a suitable methodology to be set up where the recommended time horizon is the minimum time within which the product will recover all the costs incurred (both initial and running) with a well-defined probability. Moreover, for a given regime of costs, both the minimum time and the associated probability will depend only on the volatility of the product, according to a relation of direct proportionality, hence ensuring compliance with the fundamental principle "more volatility, more time".
The first step towards such results requires the qualification of the
event of cost recovery with respect to any given time t by consid-
ering all the times in the period [0, t] in which an immediate sale
would have realised a full recovery of the issue price. Technically,
this requires a proper identification of the points in time at which
the stochastic process {St }t0 has been equal to the cost barrier NC
over the period considered. If, as seems reasonable, the interest is
in the shortest possible time for achieving a full cost recovery, then


the mathematical concept that correctly captures this phenomenon is the first-passage time of the stochastic process of the product's value for the barrier $NC$.
In formal terms
$$\tau_{S,NC} = \inf\{s \geq 0 : S_s \geq NC\} \qquad (4.25)$$
where $NC$ is the barrier level that signals the cost-recovery event.


Obviously, the first-passage time $\tau_{S,NC}$ is a stochastic variable characterised by a specific probability distribution that depends on the original distributive properties of the process $\{S_t\}_{t\geq 0}$. Then, for any time $t$ it is always possible to determine the probability of the event "the product recovers the initial and running costs at least once in the finite time interval $[0, t]$", that is, the following probability
$$Q(\tau_{S,NC} \leq t) \qquad (4.26)$$

It is important to observe that the probability 4.26 does not mean that a corresponding percentage of the trajectories of the process $\{S_t\}_{t\geq 0}$ at time $t$ will be above or equal to the cost-recovery barrier; in this sense, with respect to the more immediate characterisation as given by Equation 4.4, the above characterisation is less stringent (ie, "weak"), since it requires the fulfilment of a condition at least once in a time interval and not at a precise point in time.
Nevertheless, it captures very useful information for an investor interested in an exit strategy based on the cost recovery, and it is also backed by an intrinsically prudential logic. Imagine a product for which the probability of the event $\{\tau_{S,NC} \leq 1\}$ is equal to 95%. It is clear that at the end of the first year spent in the product there is a high probability that the investor will have amortised the costs incurred at least once. If the investor does not exploit the chance to exit from the product and recover the costs, the sole fact of having reached the barrier (equal to the price paid at inception) at least once gives them a higher probability of amortising the costs again in less than one year. This property follows from the fact that $\{S_t\}_{t\geq 0}$ is a Markov process and hence, by definition, its dynamics are not affected by the past history, but depend only on the value reached by the process at that date.
Before focusing on the weak characterisation of the cost-recovery event, it is worth noting that it has a strict relationship with the strong characterisation seen in the previous section. When the volatility of


the product is exactly 0, as in a standard cash account, the two events are identical at any time; in fact, all the product's trajectories are deterministic and, thus, there exists a unique time, say $t^\ast$, at which the break-even of the costs will happen with probability 1. Before time $t^\ast$, both the strong and the weak characterisation will return a probability equal to 0; for every time $t > t^\ast$ the two expressions will indicate that the cost recovery is a certain event. For volatilities that are positive but reasonably low, the two events begin to differ, but a strict connection remains: due to the low inherent volatility of the product, it would be difficult for a trajectory that has just recovered the costs at some time $\tilde{t} < t$ to perform very badly and reach a level below the barrier $NC$ at time $t$. In fact, in these cases, the probabilities returned by the two alternative characterisations for different values of $t$ are very close to each other. When volatility increases, trajectories that hit the barrier and then end at a level $S_t < NC$ become more and more likely, and the two events begin to describe very different situations: typically, for increasing $\sigma$, a fixed level of probability targeted for the weak event will correspond to a decreasing probability of cost recovery resulting from the application of Equation 4.4.
It could be argued that the event measured by the probability in Equation 4.26 tends to lose significance for high volatility levels, ie, to become "weaker"; this argument is only correct if the concept of cost recovery in the strong sense is adhered to and, consequently, the attempt to determine the recommended time horizon is performed by keeping fixed at a given level the probability of the event "the product recovers the initial and running costs at least once in a finite time interval".
However, as explained further below, this is not the correct approach under the weak characterisation. In fact, this characterisation is easily suited to an approach where the recommended time horizon, and the significance of the cost-recovery event evaluated at such a time horizon, vary depending on the volatility of the product.
In order to understand this point it must be observed that, by definition, Equation 4.26 is the cumulative probability function of the first-passage times described by Equation 4.25. It follows that, unlike what happens for the strong characterisation in Equation 4.4, this cumulative probability is a monotonic function of time for every set of costs, interest rates and volatilities. Therefore, the cumulative


Figure 4.2 Plot of the function $Q(\tau_{S,NC} \leq t)$ with respect to the time $t$ ($ic$ = 2%, $r - rc$ = 3.5%, $\sigma$ = 5.15%), over $t \in [0, 10]$ years.

distribution function of the first hitting times associates a unique time with any probability value (and vice versa), as shown by Figure 4.2 for a fixed volatility level.
Thanks to this monotonic relationship, the determination of the suitable investment time horizon for any product becomes closely related to the problem of correctly identifying, on the cumulative distribution function associated with that specific product, a proper probability level for the event of cost recovery as stated in Equation 4.25.
To this end, the analytical study of the cumulative distribution function of the first hitting times is necessary, especially in order to correctly assess how it is affected by the volatility and by the interplay between this variable and the drift of the stochastic process $\{S_t\}_{t\geq 0}$.
In this context, Section 4.1.3 will be devoted to deriving a closed explicit formula for the probability represented in Equation 4.26, given the solution in Equation 4.1 of the stochastic differential equation that, according to Definition 4.1, describes the dynamics of the value of the non-equity product over time. At this stage of the analysis it can already be guessed that, for a given cost regime, the sought formula will depend on two crucial variables, namely the volatility and the time. In fact, after having determined the aforementioned closed formula, further investigations on it will be required.


Some useful results arise from the asymptotic analysis delivered in Section 4.1.4. By exploiting these results, in Section 4.1.5 a sensitivity analysis will be performed in order to complete the set of analytical results required to state the key theorems of existence and uniqueness of the minimum time horizon and of the associated minimum probability of the cost-recovery event. These theorems will be derived at first locally in Section 4.1.6 and then, after a proper study of the so-called "function of the minimum times" (in Section 4.1.7), globally in Section 4.1.8.
The extension of these results to a discrete volatility setting will be presented in Section 4.1.9.
Section 4.1.10 will explore the search for the minimum time horizon under more general assumptions for the dynamics of the product than those made in Definition 4.1. It will be proved that the results obtained in the framework with constant coefficients can be extended to the case of a time-varying deterministic drift, which helps to cover conditions more representative of the reality of the markets; although for some particular cases it is still possible to have analytical representations, in this framework it is usually necessary to make use of numerical methods. More general dynamics that embed stochastic volatilities and stochastic interest rates will be treated briefly, showing that in the former case the key principles all remain valid and the only technical difficulties are computational, due to the need to use advanced Monte Carlo simulations; in the latter case it will be clarified that the volatility of interest rates is not a relevant variable in the determination of the minimum recommended time horizon, so that the modelling of a stochastic drift can be safely ignored and reduced to the deterministic case.
Finally, Section 4.1.11 will offer some technical remarks for a suitable choice of the discretisation time step when the determination of the minimum recommended time horizon is performed in discrete time. Prior to this section the entire analysis will be carried out in continuous time. In particular, both the simulation of the paths of the stochastic process $\{S_t\}_{t\geq 0}$ and the detection of the first hitting times of the cost-recovery barrier will be performed on an infinitesimal time basis. This setting does not affect the founding principles of the methodology but, as shown in Section 4.1.11, it has a significant impact on the final results that can be obtained with respect to a numerical implementation in discrete time via Monte Carlo simulations. The last part


of this section will provide some useful practical advice for building a procedure aimed at determining the minimum recommended time horizon in a discrete-time, discrete-volatility environment.

4.1.3 The closed formula for the cumulative probability of the first-passage times
This section presents the analytical derivation of the closed formula for the probability 4.26 of the cost-recovery event intended in the weak sense explained in Section 4.1.2.
This closed formula relies on the standard theory of the first-passage times of a stochastic process for a given barrier and it will be determined according to the following outline.
• Section 4.1.3.1 presents the closed formula for the probability of the first-passage times of a standard Brownian motion (Karatzas and Shreve 2005) for a constant barrier and under a generic probability measure $P$. This section also provides a corollary which will be usefully exploited in the next steps of the derivation of the sought formula.
• Section 4.1.3.2 presents the closed formula for the probability of the first-passage times of an arithmetic Brownian motion, with constant drift and diffusion, for a constant barrier and under the probability measure $P$, obtained from an auxiliary measure $\tilde{P}$ by taking a proper Radon-Nikodým derivative (Björk 2009).
• Section 4.1.3.3 presents the closed formula for the probability of the first-passage times of a geometric Brownian motion for a constant barrier and still under the probability measure $P$.
• Section 4.1.3.4 concludes the analytical derivation by applying the results of Section 4.1.3.3 to the geometric Brownian motion that, according to Definition 4.1, describes the dynamics of the value of the product under the risk-neutral probability measure $Q$.

4.1.3.1 The case of the standard Brownian motion
Let $\{W_t\}_{t\geq 0}$ be a standard Brownian motion on the probability space $(\Omega, \mathcal{F}, P)$.

Definition 4.7. Let $y > 0$. The first-passage time of the standard Brownian motion for the barrier $y$ is defined as
$$\tau_{W,y} = \inf\{s \geq 0 : W_s \geq y\} \qquad (4.27)$$


Remark 4.8. From the previous definition and from the continuity of the trajectories of the Brownian motion, the following equality holds
$$W_{\tau_{W,y}} = y \qquad (4.28)$$

Definition 4.9. The process of the maximum of a standard Brownian motion is defined as
$$M_t = \max_{s \in [0,t]} W_s \qquad (4.29)$$

Proposition 4.10. The following equivalence holds
$$\{\tau_{W,y} \leq t\} = \{M_t \geq y\} \qquad (4.30)$$

Proposition 4.11. Let $y \geq 0$ and $x \leq y$. Then the joint probability distribution of the processes $W_t$ and $M_t$ satisfies the following equality
$$P(W_t \leq x, M_t \geq y) = P(2y - W_t \leq x, M_t \geq y) \qquad (4.31)$$

Proof By using Expressions 4.30 and 4.28 and by definition of conditional probability, the probability of the joint event $P(W_t \leq x, M_t \geq y)$ can be expressed as
$$P(W_t \leq x, M_t \geq y) = P(W_t \leq x, \tau_{W,y} \leq t) = P(W_t - W_{\tau_{W,y}} \leq x - y, \tau_{W,y} \leq t) = P(W_t - W_{\tau_{W,y}} \leq x - y \mid \tau_{W,y} \leq t)\, P(\tau_{W,y} \leq t)$$
By the strong Markov property and the symmetry of increments of Brownian motion (Karatzas and Shreve 2005)
$$P(W_t - W_{\tau_{W,y}} \leq x - y \mid \tau_{W,y} \leq t) = P(-(W_t - W_{\tau_{W,y}}) \leq x - y \mid \tau_{W,y} \leq t)$$
and hence
$$P(W_t \leq x, M_t \geq y) = P(-(W_t - W_{\tau_{W,y}}) \leq x - y \mid \tau_{W,y} \leq t)\, P(\tau_{W,y} \leq t)$$
Again, by the definition of conditional probability and using Expressions 4.28 and 4.30, it follows that
$$P(W_t \leq x, M_t \geq y) = P(-(W_t - W_{\tau_{W,y}}) \leq x - y, \tau_{W,y} \leq t) = P(-W_t + y \leq x - y, M_t \geq y)$$
and finally, by simplifying,
$$P(W_t \leq x, M_t \geq y) = P(2y - W_t \leq x, M_t \geq y) \qquad (4.31)$$


From the previous proposition it is possible to state a corollary that will be used in Section 4.1.3.2.

Corollary 4.12. Let $g : \mathbb{R} \to \mathbb{R}$ be a continuous function and let $y \geq 0$. Then the following equality holds
$$\int_{\{W_t \leq y,\, M_t \geq y\}} g(W_t)\, dP = \int_{\{2y - W_t \leq y,\, M_t \geq y\}} g(2y - W_t)\, dP \qquad (4.32)$$

Proof Let $t$ be fixed, and let $Z_1 = W_t$ and $Z_2 = 2y - W_t$. From Proposition 4.11 it follows that
$$P(Z_1 \leq x, M_t \geq y) = P(Z_2 \leq x, M_t \geq y) \quad \text{for all } x \leq y \qquad (4.33)$$
and therefore it is
$$\int_{\{Z_1 \leq y,\, M_t \geq y\}} g(Z_1)\, dP = \int_{\{Z_2 \leq y,\, M_t \geq y\}} g(Z_2)\, dP \qquad (4.34)$$

Proposition 4.13. Let $y \geq 0$ and $t > 0$. The probability of the event $\{\tau_{W,y} \leq t\}$ is given by
$$P(\tau_{W,y} \leq t) = 2N\left[\frac{-y}{\sqrt{t}}\right] \qquad (4.35)$$

Proof From Equation 4.30 it follows that
$$P(\tau_{W,y} \leq t) = P(M_t \geq y) \qquad (4.36)$$
The event $\{M_t \geq y\}$ is the disjoint union of the two events $\{W_t \leq y, M_t \geq y\}$ and $\{W_t > y, M_t \geq y\}$. Hence,
$$P(M_t \geq y) = P(W_t \leq y, M_t \geq y) + P(W_t > y, M_t \geq y)$$
which, by using Equation 4.36, can be re-expressed as
$$P(\tau_{W,y} \leq t) = P(W_t \leq y, M_t \geq y) + P(W_t > y, M_t \geq y) \qquad (4.37)$$
Using Equation 4.31 with $x = y$, it follows that
$$P(W_t \leq y, M_t \geq y) = P(2y - W_t \leq y, M_t \geq y)$$
and after a little algebra
$$P(W_t \leq y, M_t \geq y) = P(W_t \geq y, M_t \geq y) \qquad (4.38)$$
By substituting Equation 4.38 into Equation 4.37, it follows that
$$P(\tau_{W,y} \leq t) = P(W_t \geq y, M_t \geq y) + P(W_t > y, M_t \geq y)$$


Since $W_t \leq M_t$ it follows that
$$\{W_t \geq y\} \subseteq \{M_t \geq y\} \quad \text{and} \quad \{W_t > y\} \subseteq \{M_t > y\}$$
Hence
$$P(\tau_{W,y} \leq t) = P(W_t \geq y) + P(W_t > y)$$
As $P(W_t = y) = 0$, the previous equation becomes
$$P(\tau_{W,y} \leq t) = 2P(W_t \geq y) \qquad (4.39)$$
Recalling that $W_t \sim N(0, t)$, the thesis follows, ie
$$P(\tau_{W,y} \leq t) = 2N\left[\frac{-y}{\sqrt{t}}\right] \qquad (4.35)$$
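Equation 4.35 can be sanity-checked against a crude Monte Carlo simulation of discretised Brownian paths (an illustrative sketch; all names are ours). Note that discrete monitoring slightly underestimates the continuous first-passage probability, a point taken up in Section 4.1.11:

```python
import random
from math import erf, sqrt

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def hitting_prob(y: float, t: float) -> float:
    """P(tau_{W,y} <= t) = 2 N[-y / sqrt(t)], Equation 4.35."""
    return 2.0 * norm_cdf(-y / sqrt(t))

# Count simulated paths whose running value reaches the barrier within [0, t]
random.seed(0)
y, t, n_steps, n_paths = 1.0, 1.0, 500, 4000
dt = t / n_steps
hits = 0
for _ in range(n_paths):
    w = 0.0
    for _ in range(n_steps):
        w += random.gauss(0.0, sqrt(dt))
        if w >= y:          # barrier reached at least once: a first passage
            hits += 1
            break
estimate = hits / n_paths
print(hitting_prob(y, t), estimate)  # closed formula vs (downward-biased) estimate
```

The simulated frequency sits just below the closed-form value $2N[-1] \approx 0.317$, the shortfall being the discretisation bias of monitoring the barrier only at grid dates.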

4.1.3.2 The case of the arithmetic Brownian motion
Let $\{Y_t\}_{t\geq 0}$ be the arithmetic Brownian motion on the probability space $(\Omega, \mathcal{F}, P)$ such that
$$Y_t = \mu t + \sigma W_t^P \qquad (4.40)$$
where $\sigma > 0$ and $\{W_t^P\}_{t\geq 0}$ is the standard Brownian motion on the same probability space.

Definition 4.14. Let $y > 0$. The first-passage time of the arithmetic Brownian motion for the barrier $y$ is defined as
$$\tau_{Y,y} = \inf\{s \geq 0 : Y_s \geq y\} \qquad (4.41)$$

In order to obtain the closed formula for the probability $P(\tau_{Y,y} \leq t)$, it is first necessary to study the normalised arithmetic Brownian motion associated with $\{Y_t\}_{t\geq 0}$.

Definition 4.15. The normalised arithmetic Brownian motion associated with $\{Y_t\}_{t\geq 0}$ is the process $\{\tilde{Y}_t\}_{t\geq 0}$ such that
$$\tilde{Y}_t = \frac{Y_t}{\sigma} \qquad (4.42)$$
or, equivalently,
$$\tilde{Y}_t = \tilde{\mu} t + W_t^P \qquad (4.43)$$
where
$$\tilde{\mu} = \frac{\mu}{\sigma} \qquad (4.44)$$


Definition 4.16. Let $y > 0$. The first-passage time of the normalised arithmetic Brownian motion for the barrier $y$ is defined as
$$\tau_{\tilde{Y},y} = \inf\{s \geq 0 : \tilde{Y}_s \geq y\} \qquad (4.45)$$

Definition 4.17. The process of the maximum of the normalised arithmetic Brownian motion is defined as
$$\tilde{M}_t = \max_{s \in [0,t]} \tilde{Y}_s \qquad (4.46)$$

Proposition 4.18. The following equivalence holds
$$\{\tau_{\tilde{Y},y} \leq t\} = \{\tilde{M}_t \geq y\} \qquad (4.47)$$

Proposition 4.19. Let $y \geq 0$. Then the following equality holds
$$P(\tilde{Y}_t \leq y, \tilde{M}_t \geq y) = e^{2\tilde{\mu} y} N\left[\frac{-\tilde{\mu} t - y}{\sqrt{t}}\right] \qquad (4.48)$$

Proof Let $\tilde{P}$ be the probability measure obtained from $P$ by taking the following Radon-Nikodým derivative
$$\frac{d\tilde{P}}{dP} = \exp(-\tilde{\mu} W_t^P - \tfrac{1}{2} t \tilde{\mu}^2) \qquad (4.49)$$
From Girsanov's Theorem (Björk 2009) it follows that under $\tilde{P}$ the process $\tilde{Y}_t = \tilde{\mu} t + W_t^P$ is a standard Brownian motion. Moreover, it holds that
$$\frac{dP}{d\tilde{P}} = \exp(\tilde{\mu} \tilde{Y}_t - \tfrac{1}{2} t \tilde{\mu}^2) \qquad (4.50)$$
By using Equation 4.50, the probability $P(\tilde{Y}_t \leq y, \tilde{M}_t \geq y)$, which by definition is equal to
$$P(\tilde{Y}_t \leq y, \tilde{M}_t \geq y) = \int_{\{\tilde{Y}_t \leq y,\, \tilde{M}_t \geq y\}} dP$$
can be re-expressed as
$$P(\tilde{Y}_t \leq y, \tilde{M}_t \geq y) = \int_{\{\tilde{Y}_t \leq y,\, \tilde{M}_t \geq y\}} \exp(\tilde{\mu} \tilde{Y}_t - \tfrac{1}{2} t \tilde{\mu}^2)\, d\tilde{P}$$
As $\tilde{Y}_t$ is a standard Brownian motion under $\tilde{P}$, then, by Corollary 4.12, setting $g(x) = \exp(\tilde{\mu} x - \frac{1}{2} t \tilde{\mu}^2)$ yields
$$P(\tilde{Y}_t \leq y, \tilde{M}_t \geq y) = \int_{\{2y - \tilde{Y}_t \leq y,\, \tilde{M}_t \geq y\}} \exp(\tilde{\mu}(2y - \tilde{Y}_t) - \tfrac{1}{2} t \tilde{\mu}^2)\, d\tilde{P}$$
and after a little algebra
$$P(\tilde{Y}_t \leq y, \tilde{M}_t \geq y) = e^{2\tilde{\mu} y} \int_{\{\tilde{Y}_t \geq y,\, \tilde{M}_t \geq y\}} \exp(-\tilde{\mu} \tilde{Y}_t - \tfrac{1}{2} t \tilde{\mu}^2)\, d\tilde{P}$$


Since $\tilde{Y}_t \leq \tilde{M}_t$, it follows that $\{\tilde{Y}_t \geq y\} \subseteq \{\tilde{M}_t \geq y\}$ and hence
$$P(\tilde{Y}_t \leq y, \tilde{M}_t \geq y) = e^{2\tilde{\mu} y} \int_{\{\tilde{Y}_t \geq y\}} \exp(-\tilde{\mu} \tilde{Y}_t - \tfrac{1}{2} t \tilde{\mu}^2)\, d\tilde{P} \qquad (4.51)$$
By the symmetry of the standard Brownian motion $\tilde{Y}_t$, it is possible to substitute $\tilde{Y}_t$ with $-\tilde{Y}_t$ on the right-hand side of Equation 4.51, leading to
$$P(\tilde{Y}_t \leq y, \tilde{M}_t \geq y) = e^{2\tilde{\mu} y} \int_{\{\tilde{Y}_t \leq -y\}} \exp(\tilde{\mu} \tilde{Y}_t - \tfrac{1}{2} t \tilde{\mu}^2)\, d\tilde{P}$$
Coming back to the probability measure $P$, through Equation 4.50 the above equality becomes
$$P(\tilde{Y}_t \leq y, \tilde{M}_t \geq y) = e^{2\tilde{\mu} y} \int_{\{\tilde{Y}_t \leq -y\}} dP$$
and after a little algebra
$$P(\tilde{Y}_t \leq y, \tilde{M}_t \geq y) = e^{2\tilde{\mu} y} P(\tilde{Y}_t \leq -y) \qquad (4.52)$$
and, by using the explicit expression for $\tilde{Y}_t$ given by Equation 4.43, it follows that
$$P(\tilde{Y}_t \leq y, \tilde{M}_t \geq y) = e^{2\tilde{\mu} y} P(\tilde{\mu} t + W_t^P \leq -y) = e^{2\tilde{\mu} y} P(W_t^P \leq -\tilde{\mu} t - y)$$
Finally, recalling that $W_t^P \sim N(0, t)$, it easily follows that
$$P(\tilde{Y}_t \leq y, \tilde{M}_t \geq y) = e^{2\tilde{\mu} y} N\left[\frac{-\tilde{\mu} t - y}{\sqrt{t}}\right] \qquad (4.48)$$

Proposition 4.20. Let $y \geq 0$ and $t > 0$. The probability of the event $\{\tau_{\tilde{Y},y} \leq t\}$ is given by
$$P(\tau_{\tilde{Y},y} \leq t) = N\left[\frac{\tilde{\mu} t - y}{\sqrt{t}}\right] + e^{2\tilde{\mu} y} N\left[\frac{-\tilde{\mu} t - y}{\sqrt{t}}\right] \qquad (4.53)$$

Proof From Equation 4.47 it follows that
$$P(\tau_{\tilde{Y},y} \leq t) = P(\tilde{M}_t \geq y) \qquad (4.54)$$
The event $\{\tilde{M}_t \geq y\}$ is the disjoint union of the two events $\{\tilde{Y}_t \leq y, \tilde{M}_t \geq y\}$ and $\{\tilde{Y}_t > y, \tilde{M}_t \geq y\}$. Hence,
$$P(\tilde{M}_t \geq y) = P(\tilde{Y}_t \leq y, \tilde{M}_t \geq y) + P(\tilde{Y}_t > y, \tilde{M}_t \geq y)$$


which, by using Equation 4.54, can be re-expressed as
$$P(\tau_{\tilde{Y},y} \leq t) = P(\tilde{Y}_t \leq y, \tilde{M}_t \geq y) + P(\tilde{Y}_t > y, \tilde{M}_t \geq y) \qquad (4.55)$$
Since $\tilde{Y}_t \leq \tilde{M}_t$ it follows that $\{\tilde{Y}_t > y\} \subseteq \{\tilde{M}_t \geq y\}$. Hence
$$P(\tilde{Y}_t > y, \tilde{M}_t \geq y) = P(\tilde{Y}_t > y)$$
which, by Equation 4.43, is equivalent to
$$P(\tilde{Y}_t > y, \tilde{M}_t \geq y) = P(W_t^P > -\tilde{\mu} t + y)$$
and from $W_t^P \sim N(0, t)$ it follows that
$$P(\tilde{Y}_t > y, \tilde{M}_t \geq y) = N\left[\frac{\tilde{\mu} t - y}{\sqrt{t}}\right] \qquad (4.56)$$
Finally, substituting Equation 4.48 and Equation 4.56 into the right-hand side of Equation 4.55 yields
$$P(\tau_{\tilde{Y},y} \leq t) = N\left[\frac{\tilde{\mu} t - y}{\sqrt{t}}\right] + e^{2\tilde{\mu} y} N\left[\frac{-\tilde{\mu} t - y}{\sqrt{t}}\right] \qquad (4.53)$$

This latter proposition gives the closed formula of the cumulative probability of the first-passage times for the normalised process $\{\tilde{Y}_t\}_{t\geq 0}$. At this point, it is possible to use this result to come back to the process $\{Y_t\}_{t\geq 0}$ and obtain for this process the closed formula of interest, as proved by the following proposition.

Proposition 4.21. Let $y \geq 0$ and $t > 0$. The probability of the event $\{\tau_{Y,y} \leq t\}$ is given by
$$P(\tau_{Y,y} \leq t) = N\left[\frac{\mu t - y}{\sigma\sqrt{t}}\right] + \exp\left[\frac{2\mu y}{\sigma^2}\right] N\left[\frac{-\mu t - y}{\sigma\sqrt{t}}\right] \qquad (4.57)$$

Proof From
$$\tilde{Y}_t = \frac{Y_t}{\sigma} \qquad (4.42)$$
it follows that
$$Y_t \geq y \iff \tilde{Y}_t \geq \frac{y}{\sigma}$$
so that by Equations 4.41 and 4.45 it follows that
$$\tau_{Y,y} = \tau_{\tilde{Y},y/\sigma} \qquad (4.58)$$
and hence
$$P(\tau_{Y,y} \leq t) = P(\tau_{\tilde{Y},y/\sigma} \leq t) \qquad (4.59)$$


A proper application of Equation 4.53 gives
$$P(\tau_{\tilde{Y},y/\sigma} \leq t) = N\left[\frac{\tilde{\mu} t - y/\sigma}{\sqrt{t}}\right] + \exp\left[\frac{2\tilde{\mu} y}{\sigma}\right] N\left[\frac{-\tilde{\mu} t - y/\sigma}{\sqrt{t}}\right] \qquad (4.60)$$
Combining Equations 4.59 and 4.60 yields
$$P(\tau_{Y,y} \leq t) = N\left[\frac{\tilde{\mu} t - y/\sigma}{\sqrt{t}}\right] + \exp\left[\frac{2\tilde{\mu} y}{\sigma}\right] N\left[\frac{-\tilde{\mu} t - y/\sigma}{\sqrt{t}}\right]$$
which, by using Equation 4.44, can be re-expressed as
$$P(\tau_{Y,y} \leq t) = N\left[\frac{1}{\sqrt{t}}\left(\frac{\mu}{\sigma} t - \frac{y}{\sigma}\right)\right] + \exp\left[\frac{2\mu y}{\sigma^2}\right] N\left[\frac{1}{\sqrt{t}}\left(-\frac{\mu}{\sigma} t - \frac{y}{\sigma}\right)\right]$$
and, after a little algebra, it follows that
$$P(\tau_{Y,y} \leq t) = N\left[\frac{\mu t - y}{\sigma\sqrt{t}}\right] + \exp\left[\frac{2\mu y}{\sigma^2}\right] N\left[\frac{-\mu t - y}{\sigma\sqrt{t}}\right] \qquad (4.57)$$

4.1.3.3 The case of the geometric Brownian motion
Let $\{X_t\}_{t\geq 0}$ be the geometric Brownian motion on the probability space $(\Omega, \mathcal{F}, P)$ described by the following stochastic differential equation
$$dX_t = \mu X_t\, dt + \sigma X_t\, dW_t^P \qquad (4.61)$$
with initial condition
$$X_0 = x_0 \qquad (4.62)$$
where $x_0 > 0$ and $\{W_t^P\}_{t\geq 0}$ is the standard Brownian motion on the same probability space.

Definition 4.22. Let $b > x_0$. The first-passage time of the geometric Brownian motion for the barrier $b$ is defined as
$$\tau_{X,b} = \inf\{s \geq 0 : X_s \geq b\} \qquad (4.63)$$

Proposition 4.23. The probability of the event $\{\tau_{X,b} \leq t\}$ is given by
$$P(\tau_{X,b} \leq t) = N\left[\frac{(\mu - \frac{1}{2}\sigma^2)t - \ln(b/x_0)}{\sigma\sqrt{t}}\right] + \exp\left[\frac{2(\mu - \frac{1}{2}\sigma^2)\ln(b/x_0)}{\sigma^2}\right] N\left[\frac{-(\mu - \frac{1}{2}\sigma^2)t - \ln(b/x_0)}{\sigma\sqrt{t}}\right] \qquad (4.64)$$


Proof Let $\{Y_t\}_{t\geq 0}$ be the process defined as
$$Y_t = \ln\frac{X_t}{x_0} \qquad (4.65)$$
Then, recalling the stochastic differential equation 4.61 and the associated initial condition in Equation 4.62 and applying Itô's Lemma, it holds that
$$Y_t = \nu t + \sigma W_t^P \qquad (4.66)$$
where
$$\nu = \mu - \tfrac{1}{2}\sigma^2 \qquad (4.67)$$
With regard to the arithmetic Brownian motion $Y_t$, consider the barrier $y > 0$ defined as
$$y = \ln\frac{b}{x_0} \qquad (4.68)$$
Recalling Equation 4.57 (with drift $\nu$) and properly substituting through Equations 4.67 and 4.68, it follows that
$$P(\tau_{Y,y} \leq t) = N\left[\frac{(\mu - \frac{1}{2}\sigma^2)t - \ln(b/x_0)}{\sigma\sqrt{t}}\right] + \exp\left[\frac{2(\mu - \frac{1}{2}\sigma^2)\ln(b/x_0)}{\sigma^2}\right] N\left[\frac{-(\mu - \frac{1}{2}\sigma^2)t - \ln(b/x_0)}{\sigma\sqrt{t}}\right] \qquad (4.69)$$
By Equations 4.65 and 4.68,
$$Y_t \geq y \iff X_t \geq b$$
so that
$$\tau_{Y,y} = \tau_{X,b}$$
and therefore
$$P(\tau_{Y,y} \leq t) = P(\tau_{X,b} \leq t) \qquad (4.70)$$
Finally, Equations 4.69 and 4.70 lead to
$$P(\tau_{X,b} \leq t) = N\left[\frac{(\mu - \frac{1}{2}\sigma^2)t - \ln(b/x_0)}{\sigma\sqrt{t}}\right] + \exp\left[\frac{2(\mu - \frac{1}{2}\sigma^2)\ln(b/x_0)}{\sigma^2}\right] N\left[\frac{-(\mu - \frac{1}{2}\sigma^2)t - \ln(b/x_0)}{\sigma\sqrt{t}}\right] \qquad (4.64)$$
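The log-transformation at the heart of this proof can be verified numerically (an illustrative sketch; names and parameter values are ours). Setting $\mu = \frac{1}{2}\sigma^2$ in Equation 4.64 makes the log-process driftless, and the formula must then collapse to the reflection-principle result of Proposition 4.13, $2N[-\ln(b/x_0)/(\sigma\sqrt{t})]$:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def gbm_hitting_cdf(mu: float, sigma: float, x0: float, b: float, t: float) -> float:
    """P(tau_{X,b} <= t) for dX = mu X dt + sigma X dW, Equation 4.64."""
    nu = mu - 0.5 * sigma ** 2   # drift of ln(X_t / x0), Equation 4.67
    y = log(b / x0)              # transformed barrier, Equation 4.68
    return (norm_cdf((nu * t - y) / (sigma * sqrt(t)))
            + exp(2.0 * nu * y / sigma ** 2)
            * norm_cdf((-nu * t - y) / (sigma * sqrt(t))))

sigma = 0.2
# Driftless log-process: Equation 4.64 reduces to 2 N[-y / (sigma sqrt(t))]
p_gbm = gbm_hitting_cdf(0.5 * sigma ** 2, sigma, 100.0, 120.0, 3.0)
p_reflection = 2.0 * norm_cdf(-log(120.0 / 100.0) / (sigma * sqrt(3.0)))
print(p_gbm, p_reflection)
```

The two evaluations coincide, confirming that Proposition 4.23 chains together Propositions 4.13 and 4.21 consistently.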


4.1.3.4 The case of the geometric Brownian motion specific to the product
The closed formula of the cumulative probability distribution of the first times when the stochastic process $\{S_t\}_{t\geq 0}$ hits the barrier $NC$ (representative of the cost recovery) is immediately obtained by applying Equation 4.64 to the specific geometric Brownian motion that describes the dynamics of the product's value over time.
Formally, the following proposition holds.

Proposition 4.24. Let $\{S_t\}_{t\geq 0}$ be the stochastic process of the product's value which, according to Definition 4.1, is described by the following stochastic differential equation under the risk-neutral probability measure $Q$
$$dS_t = (r - rc)S_t\, dt + \sigma S_t\, dW_t \qquad (4.1)$$
with the initial condition
$$S_0 = NC - ic \qquad (4.2)$$
Then, the closed formula for the cumulative probability of the first hitting times of this process for the barrier $NC$ is given by
$$Q(\tau_{S,NC} \leq t) = N\left[\frac{(r - rc - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right] + \exp\left[\frac{2(r - rc - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right] N\left[\frac{-(r - rc - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right] \qquad (4.71)$$
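A direct implementation of Equation 4.71 makes the behaviour studied in the rest of the chapter tangible. The sketch below (illustrative names; parameters as in the chapter's running example, $ic$ = 2%, $r - rc$ = 3.5%) evaluates the cumulative probability and inverts it by bisection, exploiting its monotonicity in $t$:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def weak_prob(t: float, sigma: float, drift: float = 0.035,
              ic: float = 0.02, nc: float = 1.0) -> float:
    """Equation 4.71: Q(tau_{S,NC} <= t), the probability that the product
    recovers initial and running costs at least once within [0, t]."""
    s0 = nc - ic
    a = drift - 0.5 * sigma ** 2     # r - rc - sigma^2/2
    d = log(nc / s0)                 # ln[NC / S0]
    return (norm_cdf((a * t - d) / (sigma * sqrt(t)))
            + exp(2.0 * a * d / sigma ** 2)
            * norm_cdf((-a * t - d) / (sigma * sqrt(t))))

def time_for_prob(p: float, sigma: float, t_hi: float = 100.0) -> float:
    """Invert the monotonic map t -> Q(tau_{S,NC} <= t) by bisection."""
    lo, hi = 1e-9, t_hi
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if weak_prob(mid, sigma) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

sigma = 0.0515   # the volatility level of Figure 4.2
print(weak_prob(1.0, sigma), weak_prob(2.0, sigma), weak_prob(10.0, sigma))
print(time_for_prob(0.90, sigma))   # the unique time attaining a 90% level
```

Because the cumulative distribution function of the first-passage times is strictly increasing in $t$, the bisection always returns the unique time associated with a given probability level, which is the property the methodology of the following sections builds on.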

4.1.4 Asymptotic analysis
Formula 4.71 is a multivariate functional that depends on many parameters: the issue price $NC$ (which also identifies the barrier), the initial costs, the running costs, the interest rate, the volatility and the time. It is reasonable to assume that the issue price, the cost structure and the level of the risk-free interest rate are exogenous variables, and to focus the analysis on the effects of both the volatility and the time on the behaviour of the cumulative probability distribution function of the hitting times.


Therefore, the probability $Q(\tau_{S,NC} \leq t)$ is considered as a bivariate function $P(\sigma, t)$, ie
$$P(\sigma, t) = N\left[\frac{(r - rc - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right] + \exp\left[\frac{2(r - rc - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right] N\left[\frac{-(r - rc - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right] \qquad (4.72)$$
In the context of building an optimal criterion for the selection of the minimum time horizon and of the associated probability (also called confidence level), the initial analysis performed in this section aims at investigating the asymptotic behaviour of the function $P(\sigma, t)$ when $t$ and $\sigma$ become very large: studying, in other words, the tails of this bivariate function.
Not surprisingly, this analysis will reveal the deep influence of the interplay between drift and volatility, as already seen in Section 4.1.1 with regard to the strong characterisation of the cost-recovery event. Also in the weak probabilistic framework, the quantity $(r - rc - \frac{1}{2}\sigma^2)$ entails a heterogeneous behaviour of the asymptotic limit of $P(\sigma, t)$ as the time tends to infinity, although not so dichotomous as that proved by Proposition 4.4. In fact, in the weak characterisation, a negative value of this quantity will bring the maximum probability asymptotically attainable for the cost-recovery event below 1. The implications are clear: depending on the relative magnitude of the drift and the volatility, for some products the possibility of offsetting the costs incurred is precluded a priori, no matter how long the investment horizon. In other words, for some products the set of possible confidence levels has an upper bound that is less than 1 and decreases as volatility grows, although it has a precise floor linked to the amount of the initial costs.
The following propositions formalise these findings.

Proposition 4.25. The asymptotic limit of the function $P(\sigma, t)$ with respect to $t$ is given by
$$\lim_{t\to\infty} P(\sigma, t) = \begin{cases} 1 & \text{for } (r - rc - \frac{1}{2}\sigma^2) \geq 0 \\[6pt] \left(\dfrac{NC}{S_0}\right)^{(2(r-rc)/\sigma^2)-1} & \text{for } (r - rc - \frac{1}{2}\sigma^2) < 0 \end{cases} \qquad (4.73)$$
188

i i

i i
i i

minenna 2011/9/5 13:53 page 189 #221


i i

THE THIRD PILLAR: RECOMMENDED INVESTMENT TIME HORIZON

Proof Recall Equation 4.72
$$P(\sigma, t) = N\left[\frac{(r - rc - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right] + \exp\left[\frac{2(r - rc - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right] N\left[\frac{-(r - rc - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right]$$
Then, for $r - rc - \frac{1}{2}\sigma^2 > 0$
$$\lim_{t\to\infty} P(\sigma, t) = N[\infty] + \exp\left[\frac{2(r - rc - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right] N[-\infty] \qquad (4.74)$$
By using the asymptotic properties of the standard normal variable
$$\lim_{t\to\infty} P(\sigma, t) = 1 \quad \text{for } r - rc - \tfrac{1}{2}\sigma^2 > 0$$
Similarly, for $r - rc - \frac{1}{2}\sigma^2 = 0$, both arguments of $N[\,\cdot\,]$ tend to 0 and the exponential term equals 1, so that
$$\lim_{t\to\infty} P(\sigma, t) = N[0] + e^0 N[0] \qquad (4.75)$$
Hence, recalling that $N[0] = \frac{1}{2}$,
$$\lim_{t\to\infty} P(\sigma, t) = 1 \quad \text{for } r - rc - \tfrac{1}{2}\sigma^2 = 0$$


Conversely, for $r - rc - \frac{1}{2}\sigma^2 < 0$
$$\lim_{t\to\infty} P(\sigma, t) = N[-\infty] + \exp\left[\frac{2(r - rc - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right] N[\infty] \qquad (4.76)$$
By again using the asymptotic properties of the standard normal variable, after a little algebra, it eventually follows that
$$\lim_{t\to\infty} P(\sigma, t) = \left(\frac{NC}{S_0}\right)^{(2(r-rc)/\sigma^2)-1} \quad \text{for } r - rc - \tfrac{1}{2}\sigma^2 < 0$$

The previous proposition highlights a fundamental asymptotic property of the cumulative distribution function of the first-passage times: the event of a sure cost recovery is or is not feasible depending on the relationship between costs, interest rates and volatility. As mentioned above, this relationship is captured by the quantity (r − rc − ½σ²), which identifies a clear regime shift in the behaviour of the function P(σ, t): for high levels of volatility and low levels of drift (which can result from a low risk-free interest rate or from high running costs) it holds that r − rc − ½σ² < 0 and, thus, even for an arbitrarily large time t the cost recovery cannot be attained with probability 1. Therefore, there exists an asymptotic limit that represents the maximum attainable probability for a given set of values of the variables involved. Moreover, by looking at the limit in Equation 4.73 it is clear that the maximum attainable probability decreases as the volatility increases. Intuitively, this behaviour is due to the fact that, as the volatility increases, the trajectories of the process {S_t}_{t≥0} that perform very badly become more likely, and for this reason there will be a growing percentage of trajectories that never hit the cost-recovery barrier.
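The closed formula 4.72 and the two regimes of the limit 4.73 can be checked numerically. The sketch below uses only the Python standard library; the cost and drift figures are illustrative assumptions, not values taken from the book.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def P(sigma, t, r_rc, NC, S0):
    """Equation 4.72: probability that a GBM with drift r - rc and
    volatility sigma, started at S0, touches the cost-recovery
    barrier NC within t years."""
    N = NormalDist().cdf
    mu = r_rc - 0.5 * sigma ** 2        # r - rc - sigma^2/2
    b = log(NC / S0)                    # log-distance to the barrier, > 0
    return (N((mu * t - b) / (sigma * sqrt(t)))
            + exp(2 * mu * b / sigma ** 2)
            * N((-mu * t - b) / (sigma * sqrt(t))))

NC, S0 = 100.0, 98.0                    # eg, 2% initial costs (assumed)

# Regime r - rc - sigma^2/2 >= 0: cost recovery is asymptotically sure.
assert abs(P(0.15, 2000.0, 0.035, NC, S0) - 1.0) < 1e-9

# Regime r - rc - sigma^2/2 < 0: P converges to (NC/S0)^(2(r-rc)/sigma^2 - 1) < 1.
p_inf = (NC / S0) ** (2 * 0.005 / 0.40 ** 2 - 1)
assert abs(P(0.40, 2000.0, 0.005, NC, S0) - p_inf) < 1e-9
```

`statistics.NormalDist` is available in the standard library from Python 3.8 onwards, so no external dependency is needed for the normal CDF.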
The next proposition completes the asymptotic study of the function P(σ, t), considering its limit when the volatility becomes arbitrarily large.


Proposition 4.26. The asymptotic limit of the function P(σ, t) with respect to σ is given by

$$\lim_{\sigma\to\infty} P(\sigma, t) = \frac{S_0}{NC} \tag{4.77}$$

Proof. Recall Expression 4.72:

$$
P(\sigma, t) = N\!\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)
+ \exp\!\left(\frac{2(r - r_c - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right)
N\!\left(\frac{-(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)
$$

By direct calculation, as σ → ∞ the first argument diverges to −∞, the second to +∞, and the exponent converges to −ln[NC/S₀], so that

$$
\lim_{\sigma\to\infty} P(\sigma, t) = N(-\infty) + \mathrm{e}^{-\ln[NC/S_0]}\, N(+\infty)
\tag{4.78}
$$

By using the asymptotic properties of the standard normal variable, it eventually follows that

$$\lim_{\sigma\to\infty} P(\sigma, t) = \frac{S_0}{NC} \tag{4.77}$$

□

The above result provides valuable information about the behaviour of the studied cumulative probability distribution: the declining asymptote with respect to the volatility described in Proposition 4.25 cannot exceed the value S₀/NC, even for arbitrarily large levels of volatility; strictly speaking, it is the level of the initial costs that controls the maximum probability achievable when σ tends to infinity. It is worth noting that the limit 4.77 is valid for any t, thus implying that the function P(σ, t) reaches its asymptote with respect to the volatility in an infinitesimal time; this phenomenon implicitly


[Figure 4.3: Plot of the function P(σ, t) with respect to the time t (ic = 2%, r − rc = 3.5%), for volatilities σ ranging from 0.15% to 92.65% and t from 0 to 60 years.]

Table 4.1 Maximum attainable probability for the cost recovery event (r − rc = 0.5%)

                    ic (%)
  σ (%)        1         2         3
  1.6          1         1         1
  4            1         1         1
  10           1         1         1
  25           0.9916    0.9832    0.9747
  40           0.9906    0.9812    0.9718
  90           0.9901    0.9802    0.9704

suggests that for an increasing but finite volatility the cumulative probability distribution function of the hitting times approaches the asymptotic value S₀/NC over a time interval [0, t] that progressively shrinks. Figure 4.3 illustrates this point.
In order to assess the magnitude of the phenomena highlighted
by the last propositions, Tables 4.1 and 4.2 show some values for
the maximum probability attainable by the cost-recovery event with
respect to different levels of costs and volatility.
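The tabulated maxima follow directly from the second branch of Equation 4.73 once the barrier ratio is expressed through the initial costs, NC/S₀ = 1/(1 − ic). A minimal sketch in plain Python, rounding to four decimals as in the tables (the parameter triples below are entries of Tables 4.1 and 4.2):

```python
def max_probability(sigma, r_rc, ic):
    """Maximum attainable cost-recovery probability, Equation 4.73.
    sigma, r_rc and ic are decimals, eg 0.25 for 25%."""
    if r_rc - 0.5 * sigma ** 2 >= 0.0:
        return 1.0                       # sure cost recovery in the limit
    # (NC/S0)^(2(r-rc)/sigma^2 - 1) with NC/S0 = 1/(1 - ic)
    return (1.0 / (1.0 - ic)) ** (2.0 * r_rc / sigma ** 2 - 1.0)

assert max_probability(0.10, 0.005, 0.01) == 1.0   # boundary case r - rc = sigma^2/2
assert round(max_probability(0.25, 0.005, 0.02), 4) == 0.9832
assert round(max_probability(0.40, 0.015, 0.02), 4) == 0.9837
```

Since the exponent 2(r − rc)/σ² − 1 is negative in this regime and NC/S₀ > 1, the function is decreasing in σ and in ic, matching the pattern visible in the tables.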
Having calculated, via the general formula 4.73, the maximum attainable probability when t tends to infinity and r − rc − ½σ² < 0, it is now possible to move to the asymptotic analysis of the first partial derivative


Table 4.2 Maximum attainable probability for the cost recovery event (r − rc = 1.5%)

                    ic (%)
  σ (%)        1         2         3
  1.6          1         1         1
  4            1         1         1
  10           1         1         1
  25           0.9948    0.9895    0.9843
  40           0.9919    0.9837    0.9756
  90           0.9904    0.9807    0.9711

of P(σ, t) with respect to the volatility. The following theorem and, in particular, the subsequent corollary will unveil the decisive role that the drift of the process {S_t}_{t≥0} plays in asymptotically determining the sign of the partial derivative considered. This will give another key contribution towards the identification of a suitable criterion for determining the minimum recommended time horizon and the corresponding confidence level consistently with the principle "more volatility, more time".
Theorem 4.27. Let r − rc − ½σ² < 0. For t → ∞, the asymptotic value of the first partial derivative of P(σ, t) with respect to σ is given by

$$
\lim_{t\to\infty} \frac{\partial P(\sigma, t)}{\partial\sigma}
= -\frac{4(r - r_c)}{\sigma^3}\left(\frac{NC}{S_0}\right)^{(2(r-r_c)/\sigma^2)-1}\ln\!\left(\frac{NC}{S_0}\right)
\tag{4.79}
$$
Proof. Since t → ∞ and r − rc − ½σ² < 0, by Proposition 4.25 it follows that

$$
\lim_{t\to\infty} P(\sigma, t) = \exp\!\left(\frac{2(r - r_c - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right)
\tag{4.80}
$$

By direct calculation,

$$
\lim_{t\to\infty} \frac{\partial P(\sigma, t)}{\partial\sigma}
= \frac{\partial}{\partial\sigma}\Bigl[\lim_{t\to\infty} P(\sigma, t)\Bigr]
= \frac{\partial}{\partial\sigma}\exp\!\left(\frac{2(r - r_c - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right)
$$


     
Noting that the exponent equals (2(r − rc)/σ² − 1) ln[NC/S₀] and differentiating it with respect to σ,

$$
\lim_{t\to\infty} \frac{\partial P(\sigma, t)}{\partial\sigma}
= \exp\!\left(\frac{2(r - r_c - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right)
\ln\!\left(\frac{NC}{S_0}\right)\frac{\partial}{\partial\sigma}\left(\frac{2(r - r_c)}{\sigma^2} - 1\right)
= \exp\!\left(\frac{2(r - r_c - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right)
\ln\!\left(\frac{NC}{S_0}\right)\left(-\frac{4(r - r_c)}{\sigma^3}\right)
$$

which leads to

$$
\lim_{t\to\infty} \frac{\partial P(\sigma, t)}{\partial\sigma}
= -\frac{4(r - r_c)}{\sigma^3}\left(\frac{NC}{S_0}\right)^{(2(r-r_c)/\sigma^2)-1}\ln\!\left(\frac{NC}{S_0}\right)
\tag{4.79}
$$

□

Corollary 4.28. Let (r − rc − ½σ²) < 0. The sign of the asymptotic first partial derivative of P(σ, t) with respect to σ is characterised as follows:

$$
\lim_{t\to\infty} \frac{\partial P(\sigma, t)}{\partial\sigma} < 0 \quad \text{if } (r - r_c) > 0,
\qquad
\lim_{t\to\infty} \frac{\partial P(\sigma, t)}{\partial\sigma} \geq 0 \quad \text{if } (r - r_c) \leq 0
\tag{4.81}
$$

Proof. Recalling Expression 4.79, ie

$$
\lim_{t\to\infty} \frac{\partial P(\sigma, t)}{\partial\sigma}
= -\frac{4(r - r_c)}{\sigma^3}\left(\frac{NC}{S_0}\right)^{(2(r-r_c)/\sigma^2)-1}\ln\!\left(\frac{NC}{S_0}\right)
$$

it is immediate to observe that, as by definition NC/S₀ > 1 and σ > 0, the quantities (NC/S₀)^{(2(r−rc)/σ²)−1} and ln[NC/S₀] are necessarily greater than 0 and, of course, −4/σ³ < 0. Accordingly, the sign of the quantity (r − rc) determines the sign of the asymptotic first partial derivative with respect to σ, which is precisely Equation 4.81. □

Corollary 4.28 proves that, conditional on r − rc − ½σ² < 0, as time tends to infinity P(σ, t) is a decreasing function of the volatility when the drift is positive, while a negative drift makes the cumulative probability distribution of the first hitting times an increasing function of σ.
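The sign pattern of Corollary 4.28 can be illustrated with a finite-difference estimate of ∂P/∂σ at a large t, using Equation 4.72 directly. This is a minimal sketch with standard-library Python and illustrative parameter values (the cost and drift figures are assumptions):

```python
from math import exp, log, sqrt
from statistics import NormalDist

def P(sigma, t, r_rc, NC, S0):
    """Equation 4.72: first-passage-time CDF at the barrier NC."""
    N = NormalDist().cdf
    mu = r_rc - 0.5 * sigma ** 2
    b = log(NC / S0)
    return (N((mu * t - b) / (sigma * sqrt(t)))
            + exp(2 * mu * b / sigma ** 2) * N((-mu * t - b) / (sigma * sqrt(t))))

def dP_dsigma(sigma, t, r_rc, NC, S0, h=1e-6):
    """Central finite-difference estimate of the partial derivative in sigma."""
    return (P(sigma + h, t, r_rc, NC, S0) - P(sigma - h, t, r_rc, NC, S0)) / (2 * h)

NC, S0, t_large = 100.0, 98.0, 5000.0   # t_large proxies t -> infinity

# Positive drift (with r - rc - sigma^2/2 < 0): derivative negative, Equation 4.79.
fd = dP_dsigma(0.40, t_large, 0.015, NC, S0)
closed = (-4 * 0.015 / 0.40 ** 3) * log(NC / S0) * (NC / S0) ** (2 * 0.015 / 0.40 ** 2 - 1)
assert fd < 0 and abs(fd - closed) < 1e-4

# Negative drift: the asymptotic derivative is non-negative.
assert dP_dsigma(0.40, t_large, -0.005, NC, S0) >= 0
```

At t = 5000 the tail terms of Equation 4.72 are numerically negligible, so the finite difference essentially differentiates the asymptote 4.80 and reproduces the closed form 4.79.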


This finding suggests that, if (r − rc) > 0, there should exist a finite time beyond which the first derivative with respect to the volatility is strictly negative, meaning that an upward movement in the volatility necessarily diminishes the probability of cost recovery in the weak sense. In other words, a positive drift leads to ∂P(σ, t)/∂σ < 0; therefore, for any fixed probability level, a product characterised by a high volatility should need more time to reach that confidence level than a low-volatility product.
Intuitively, this recalls precisely the principle of direct proportionality between the volatility of the product and the corresponding minimum investment time horizon ("more volatility, more time") stated in Section 4.1. Hence, the open set of times that satisfy this principle (ie, the times where ∂P(σ, t)/∂σ < 0) may properly be defined as an admissible region of times for the search for the minimum recommended time horizon.
Continuing with this line of reasoning, with a negative drift it is likely that the condition ∂P(σ, t)/∂σ < 0 will never be satisfied. At this stage of the analysis this is not a rigorous proof but rather an intuition, since much of the behaviour of the cumulative probability function when t and σ take finite values is not yet known; it can be argued, however, that if the condition ∂P(σ, t)/∂σ < 0 is not satisfied for very large times, it will also not be satisfied for shorter times. In fact, by observing that a negative drift corresponds to a negative expected return, intuition suggests that in this condition an increase in the volatility can only increase the probability of hitting the cost-recovery barrier at any time (ie, ∂P(σ, t)/∂σ > 0 for all t > 0). It follows that, for (r − rc) < 0, no time will be eligible for the admissible region of times, since the principle "more volatility, more time" would be systematically violated.
Moreover, it may be argued that, if every time contained in the admissible region satisfies by definition the principle just recalled, then the minimum admissible time is an efficient choice to solve the problem of correctly identifying a particular confidence level and, obviously, also the investment time horizon to be recommended to investors.
In this context, and since the admissible region of times is characterised by the condition ∂P(σ, t)/∂σ < 0, it must also be verified whether this condition can occur even in the presence of a positive drift lower than ½σ². This verification requires the removal of the


assumption r − rc − ½σ² < 0, which is behind both Theorem 4.27 and Corollary 4.28.
To this end, the asymptotic analysis will be abandoned (at least for a while) in favour of the study of the function P(σ, t) and its partial derivatives through an appropriate sensitivity analysis. This analysis, which in fact addresses in detail all the insights that have emerged so far, is carried out next.
For the sake of completeness, the last part of this section shows the derivation of the asymptotic value of the second-order partial derivative of P(σ, t) with respect to the volatility.

4.1.5 Sensitivity analysis

The asymptotic analysis highlighted some fundamental results for the knowledge of the function P(σ, t) and it also inspired useful conjectures, which can be summarised as follows.

1. Depending on the sign of the quantity (r − rc − ½σ²), the set of the maximum attainable probabilities for the cost-recovery event is not unique across products. This implies that any approach aimed at finding the recommended investment time horizon by fixing a unique target probability must be discarded.

2. Asymptotically, when the drift is positive (and the quantity (r − rc − ½σ²) is negative), the function P(σ, t) exhibits a behaviour consistent with the principle "more volatility, more time". This finding seems to suggest a suitable criterion for identifying an admissible region of times and, also, a simple rule for determining the minimum recommended time horizon (ie, the time which satisfies ∂P(σ, t)/∂σ = 0), able to ensure a correct ordering of products featuring different levels of riskiness, the latter expressed in terms of volatility.

In order to address these conjectures properly, a first step is to abandon the asymptotic framework (at least for a while) and study the cumulative probability function of the first-passage times when the time is finite.
In this context, it is important to recall that the first-passage time τ_{S,NC} as defined in Equation 4.25, ie

$$\tau_{S,NC} = \inf\{s \geq 0 : S_s \geq NC\}$$


[Figure 4.4: Average path and first hitting time of two products with the same drift but different volatilities: (a) low σ, (b) high σ. Both panels plot the product value against t (years), with the first hitting time of the average path, t_mean, marked on the time axis.]

is a random variable which inherits its distributional properties from those of the stochastic process of the product's value, {S_t}_{t≥0}. According to Definition 4.1 this process is described by a proper stochastic differential equation and, consequently, its dynamics are affected by two key parameters: the drift and the volatility. To investigate properly the role played by each of these two parameters, consider two products with the same positive drift (r − rc) but very different volatilities, denoted by σ_low and σ_high.
Figure 4.4 shows that the two products have the same average path and also the same first time, denoted by t_mean, at which this path hits the cost-recovery barrier NC. Hence, taking reference times shorter than t_mean, both products will not amortise the costs applied on average, while at times longer than t_mean the break-even of the costs will be realised at least on average.
The identity of the average paths comes from the fact that this trajectory depends only on the drift. However, as soon as the entire set of trajectories of the stochastic processes of the two products is considered, things change significantly because of the different volatility values.
Figure 4.5 clarifies the point by showing (together with the average path) the trajectories of each product, with evidence of the corresponding first hitting times. The upper parts are on the same scale as Figure 4.4, while a zoom is offered in the lower parts.


[Figure 4.5: Trajectories and first hitting times of two products with the same drift but different volatilities: (a) low σ, (b) high σ. The upper panels are on the same scale as Figure 4.4; the lower panels zoom in on the first five years, with t_mean marked on the time axis.]

As is clear from Figure 4.5(a), for the product with low volatility
the first-passage times are substantially concentrated around tmean
(mainly before it). In fact, almost all trajectories take more or less the
same time to reach the barrier and this time is close to that taken by
the average path. On the other hand a high volatility (Figure 4.5(b))
entails a much bigger variability of the trajectories of the product
and, thus, also a different behaviour of the first times at which they
fully recover the costs of the investment. In particular, a significant
number of trajectories touch the barrier in a very short time (this
happens when the high volatility works fast in favour of the cost-
recovery event), while for the remaining trajectories the first-passage
times become less and less frequent (this happens when the high


[Figure 4.6: Comparison between the probability density functions of the first hitting times of two products with the same drift but different volatilities: (a) low σ, (b) high σ.]

volatility leads to a negative trend that translates into a very long


cost recovery time for some paths or an impossible one for others).
Hence, by comparing the two parts of Figure 4.5, it can immedi-
ately be seen that in low-volatility regimes the drift provides a good
summary indication of the first hitting times of all the trajectories
of the product, while for high volatilities the time tmean loses such
representativeness.
This type of interaction between drift and volatility is almost unaffected by the magnitude of the drift itself. The only difference is that with a higher drift the average path will amortise the costs in a shorter time, and the trajectories of both products exhibit an overall shift to the left, suggesting a likely reduction in the minimum investment time horizon due to the positive contribution of a higher drift. The opposite is true if positive drifts lower than (r − rc) are considered.
The above patterns are reflected in the distributional features of the random variable τ_{S,NC}.
Figure 4.6 shows the densities of the two products corresponding
to Figure 4.5.
Most of the occurrences of the density of the low-volatility prod-
uct (Figure 4.6(a)) are associated with times close to tmean , thus con-
firming the strong signalling value of this time which is only related
to the drift of the product. The density of the high-volatility prod-
uct (Figure 4.6(b)) behaves in a completely different way. In fact,


[Figure 4.7: Comparison between the cumulative probability functions of the first hitting times of two products with the same drift but different volatilities: (a) low σ, (b) high σ.]

most of the probability mass is concentrated in a narrow interval of times which are close to the issue date and shorter than t_mean (owing to the immediate positive contribution of the high volatility); this probability density then has a long right tail that gradually decreases (owing to the negative contribution of the high volatility).
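The shapes just described can also be reproduced by brute force. The sketch below (standard-library Python, illustrative parameter values) simulates discretely monitored GBM paths and compares the fraction that touches the barrier within a horizon against the closed formula 4.72. Note that discrete monitoring misses some intra-step crossings, so the Monte Carlo figure sits slightly below the exact one and only a loose tolerance is asserted.

```python
import random
from math import exp, log, sqrt
from statistics import NormalDist

def P(sigma, t, r_rc, NC, S0):
    """Equation 4.72: exact first-passage-time CDF."""
    N = NormalDist().cdf
    mu = r_rc - 0.5 * sigma ** 2
    b = log(NC / S0)
    return (N((mu * t - b) / (sigma * sqrt(t)))
            + exp(2 * mu * b / sigma ** 2) * N((-mu * t - b) / (sigma * sqrt(t))))

def mc_hit_fraction(sigma, t_max, r_rc, NC, S0, n_paths=2000, dt=1.0 / 252, seed=7):
    """Fraction of simulated log-GBM paths that touch ln(NC) within t_max years."""
    rng = random.Random(seed)
    mu, barrier = r_rc - 0.5 * sigma ** 2, log(NC / S0)
    steps, vol = int(t_max / dt), sigma * sqrt(dt)
    hits = 0
    for _ in range(n_paths):
        x = 0.0                          # log(S_t / S0)
        for _ in range(steps):
            x += mu * dt + vol * rng.gauss(0.0, 1.0)
            if x >= barrier:             # cost recovery achieved
                hits += 1
                break
    return hits / n_paths

sigma, t_max, r_rc, NC, S0 = 0.25, 5.0, 0.035, 100.0, 90.0   # assumed figures
assert abs(mc_hit_fraction(sigma, t_max, r_rc, NC, S0)
           - P(sigma, t_max, r_rc, NC, S0)) < 0.05
```

Collecting the individual hitting times of the simulated paths (instead of only counting hits) yields empirical versions of the densities in Figure 4.6.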
Moving now to the cumulative probability distribution of the first-passage times, Figure 4.7 depicts the function P(σ, t) for the two products considered in Figure 4.6.
Figure 4.7 reconfirms the findings that have emerged so far with regard to the different strength and meaningfulness of the drift effect depending on the volatility level.
When the volatility is low, its contribution to the cost recovery is marginal. In these cases the drift component has a prominent effect on the ability of the product to amortise the costs incurred. More specifically, the drift effect is minimal for a certain period after inception, as shown by the initial substantial flattening of the curve in part (a) of Figure 4.7; but then the positive drift (only slightly disturbed by the noise caused by the low volatility) cumulates enough to lead most of the trajectories to fully recover the costs at times which are clustered in a small neighbourhood of t_mean. Over this small interval of times the function P(σ, t) experiences a very quick increase (ie, its positive slope becomes very high).


After this small period the drift effect is mostly exhausted and the curve becomes almost flat again, but now at significant confidence levels.
When the volatility is high, the drift effect loses its relevance and the volatility effect prevails in driving the behaviour of P(σ, t); as shown in part (b) of Figure 4.7, the cumulative distribution function reaches its maximum (positive) slope at the beginning and then continues to increase, more and more slowly, until it arrives close to its maximum attainable probability, which is lower than the one corresponding to the low-volatility product. A similar behaviour is also retrieved when the drift is positive but different from (r − rc).
Summarising the above, it is possible to conclude that in low-volatility environments the drift effect plays a key role in identifying, albeit roughly, the minimum time required to achieve the break-even of the costs. In the limit, if there is no volatility at all, all the trajectories of the product are identical to each other and obviously coincide with the average trajectory. Thus, they will recover the costs at the same time and with probability 1. However, as soon as this limit case is left behind and a positive volatility is introduced, the drift effect alone is no longer sufficient to solve the problem of determining the minimum time horizon, for two reasons.
First, even for relatively low volatilities, when there is therefore a strong concentration of the first-passage times around the one corresponding to the average path, the solution of setting the minimum time horizon equal to t_mean proves simplistic, because it does not guarantee a correct ordering of the products in accordance with the principle "more volatility, more time".
Second, when the volatility begins to be quite high, the importance of the drift effect disappears, and it is therefore necessary to find another criterion to determine the minimum time horizon.
For these reasons, the methodology that will be detailed in the following sections relies on the direct comparison between the cumulative distribution functions of products with different volatility levels. To understand the intuition behind this methodology, it is useful to consider again the two cumulative probability distributions of Figure 4.7, now superimposed in Figure 4.8.
By observing Figure 4.8 it emerges that only for fixed probability levels greater than or equal to the one identified by the intersection between the two curves, which is denoted by (t₁, α₁), does the


[Figure 4.8: Superimposition of the cumulative probability functions of the first hitting times of two products with the same drift but different volatilities (σ_low and σ_high); the two curves intersect at the point (t₁, α₁).]

[Figure 4.9: Superimposition of the cumulative probability functions of the first hitting times of three products with the same drift but different volatilities (σ_low, σ_high and σ_very high); consecutive intersections occur at (t₁, α₁) and (t₂, α₂).]

product with a higher volatility have an investment time horizon


longer than that associated with the less risky product. In other
words, the point where the two curves intersect identifies the min-
imum confidence level that ensures a correct ordering of the two
products.
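The intersection point (t₁, α₁) of Figure 4.8 can be located numerically by bisection on the difference between the two closed-form curves of Equation 4.72. A sketch with assumed parameter values, using only the standard library:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def P(sigma, t, r_rc, NC, S0):
    """Equation 4.72: first-passage-time CDF."""
    N = NormalDist().cdf
    mu = r_rc - 0.5 * sigma ** 2
    b = log(NC / S0)
    return (N((mu * t - b) / (sigma * sqrt(t)))
            + exp(2 * mu * b / sigma ** 2) * N((-mu * t - b) / (sigma * sqrt(t))))

def intersection(sig_low, sig_high, r_rc, NC, S0, t_lo=1e-3, t_hi=50.0, tol=1e-9):
    """(t1, alpha1): time and confidence level at which the high-volatility
    CDF crosses the low-volatility one from above."""
    diff = lambda t: P(sig_high, t, r_rc, NC, S0) - P(sig_low, t, r_rc, NC, S0)
    assert diff(t_lo) > 0 > diff(t_hi)   # high vol dominates early, low vol later
    while t_hi - t_lo > tol:
        mid = 0.5 * (t_lo + t_hi)
        if diff(mid) > 0:
            t_lo = mid
        else:
            t_hi = mid
    t1 = 0.5 * (t_lo + t_hi)
    return t1, P(sig_low, t1, r_rc, NC, S0)

t1, alpha1 = intersection(0.05, 0.40, 0.015, 100.0, 98.0)
assert abs(P(0.40, t1, 0.015, 100.0, 98.0) - alpha1) < 1e-6
```

If more than one crossing existed inside the bracket, bisection would return only one of them; for the parameter ranges used here a single crossing is observed.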
Clearly, any confidence level above α₁ and below the maximum attainable probability (as qualified by Proposition 4.25) will also


adhere to this principle, leaving many degrees of freedom for the choice of the minimum time horizon and the probability level. However, by suitably exploiting the concept of the intersection between cumulative probability distributions corresponding to different volatility values, this indeterminacy can be overcome.
In this context, Figure 4.9 adds to the two curves of Figure 4.8 the cumulative probability distribution of a third product with the same drift and initial costs as the others, but with a volatility higher than σ_high, denoted by σ_very high.
Figure 4.9 indicates that with three products the intersection (t₁, α₁) ceases to identify the infimum of the set of confidence levels eligible for a correct ordering of all the products; rather, it allows a proper ordering only of the first two products, which are characterised by the volatilities σ_low and σ_high respectively. In particular, by picking a probability level in the interval [α₁, α₂[, where α₂ corresponds to the intersection (t₂, α₂) between the two products with the highest volatilities, a situation could arise where the time horizon of the product with σ_very high is shorter than the time horizon of one or both of the other products. If the interest is in correctly ordering all the products, then it is clear that the new set of admissible confidence levels starts at α₂ > α₁. Hence, it is narrower than the previous one, and it also implies that the time horizons associated with the products necessarily increase.
In general, the greater the number of products considered, the smaller the set of admissible confidence levels allowing all the products to be correctly ordered becomes; for this reason, the resulting time horizons would become extremely large (eg, of the order of decades for reasonable cost structures) and, therefore, impractical. Ideally, imagining an infinite number of products (which is perfectly consistent with a setting where the volatility is a continuous variable), there will be a unique probability value able to properly accommodate all the products and, as noted above, this value will also trigger very long time horizons. Moreover, it should be noted that the idea of a unique probability being valid for any product, though easy to understand, is not backed up by any robust theoretical argument; conversely, given the different significance of the cost-recovery event in the strong characterisation (see Section 4.1.1) for products with different volatilities, it appears that a uniform high confidence level is inherently disadvantageous for less risky products.


In the search for a more efficient approach, the next step is to maintain the continuous volatility setting and to recover the key concept of the intersection between cumulative distribution functions, but according to a different, local, perspective. Intuitively, this requires the exploration of the intersection point when the shift between two consecutive volatility values shrinks to zero. In fact, by intersecting the cumulative distribution functions of products having these two consecutive volatilities, the time at which the probability of cost recovery is the same for both products can easily be identified. From an analytical point of view, this is equivalent to determining the minimum time horizon as the time at which the first partial derivative of the function P(σ, t) with respect to the volatility is zero. In addition, the function evaluated at this minimum time horizon returns the minimum confidence level consistent with the need for a correct ordering of different products. Therefore, all other things being equal, this minimum confidence level is specific to each product (or, equivalently, to each volatility value).
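Under this local perspective, the candidate minimum time horizon for a given volatility is the root in t of ∂P(σ, t)/∂σ = 0, which can be found with a finite-difference derivative and bisection. A sketch under assumed parameter values; it presumes a single sign change of the derivative inside the bracket, consistent with the local uniqueness result anticipated for Section 4.1.6:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def P(sigma, t, r_rc, NC, S0):
    """Equation 4.72: first-passage-time CDF."""
    N = NormalDist().cdf
    mu = r_rc - 0.5 * sigma ** 2
    b = log(NC / S0)
    return (N((mu * t - b) / (sigma * sqrt(t)))
            + exp(2 * mu * b / sigma ** 2) * N((-mu * t - b) / (sigma * sqrt(t))))

def dP_dsigma(sigma, t, r_rc, NC, S0, h=1e-6):
    return (P(sigma + h, t, r_rc, NC, S0) - P(sigma - h, t, r_rc, NC, S0)) / (2 * h)

def min_time_horizon(sigma, r_rc, NC, S0, t_lo=0.1, t_hi=100.0, tol=1e-6):
    """Root of dP/dsigma = 0: before it extra volatility still raises the
    cost-recovery probability; after it 'more volatility, more time' holds."""
    g = lambda t: dP_dsigma(sigma, t, r_rc, NC, S0)
    assert g(t_lo) > 0 > g(t_hi)
    while t_hi - t_lo > tol:
        mid = 0.5 * (t_lo + t_hi)
        if g(mid) > 0:
            t_lo = mid
        else:
            t_hi = mid
    return 0.5 * (t_lo + t_hi)

t_star = min_time_horizon(0.20, 0.015, 100.0, 98.0)
alpha_star = P(0.20, t_star, 0.015, 100.0, 98.0)  # product-specific confidence level
assert 0.1 < t_star < 100.0 and 0.0 < alpha_star < 1.0
```

Evaluating P at the root delivers the product-specific minimum confidence level discussed above, so both outputs of the criterion come from one root-finding pass.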
Following this conjecture, Sections 4.1.5.1–4.1.5.3 present a rigorous sensitivity analysis of the function P(σ, t) that will lead to the statement (in Section 4.1.6) of a local theorem of existence and uniqueness of the minimum time horizon. Then, by studying (in Section 4.1.7) the behaviour of this minimum time horizon as a function of the volatility, strict conditions are presented in Section 4.1.8 in order to arrive at a result able to globally ensure compliance with the principle of a proper ordering of products.

4.1.5.1 First-order partial derivatives


The calculation of the partial derivatives of the cumulative probabil-
ity distribution function of the first hitting times with respect to the
time and the volatility requires some preliminary technical calcula-
tions, which are provided below. With regard to the sensitivity with
respect to the time, the fundamental monotonic relationship, already
mentioned in Section 4.1.2, will be confirmed; as far as the specific
influence of the volatility is concerned, an explicit formula will be
obtained and it will have to be studied according to suitable limit
representations (see Section 4.1.5.2) in order to gain useful insights
for the purposes of interest.
The reader may easily skip the details of the next three proposi-
tions and focus directly on Theorem 4.32.


Proposition 4.29. The following limit representations hold:

$$
\lim_{t\to0^+} N\!\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right) = 0^+
\tag{4.82}
$$

$$
\lim_{t\to0^+} N'\!\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right) = 0^+
\tag{4.83}
$$

Proof. As ln[NC/S₀] > 0, it follows that

$$
\lim_{t\to0^+} \frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}
= \lim_{t\to0^+} \frac{-\ln[NC/S_0]}{\sigma\sqrt{t}} = -\infty
\tag{4.84}
$$

so that, by the definition of N(·),

$$
\lim_{t\to0^+} N\!\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right) = N(-\infty)
\tag{4.85}
$$

and hence the first part of the thesis, Equation 4.82, is proved. By direct calculation on the Gaussian density, and using Equation 4.84,

$$
\lim_{t\to0^+} N'\!\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)
= \lim_{t\to0^+} \frac{1}{\sqrt{2\pi}}\exp\!\left(-\frac{1}{2}\left[\frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right]^2\right)
= \frac{1}{\sqrt{2\pi}}\exp\bigl(-\tfrac{1}{2}[-\infty]^2\bigr) = 0^+
$$

and hence the second part of the thesis, Equation 4.83, is also proved. □


Proposition 4.30. Let (r − rc − ½σ²) ≠ 0. The quantity

$$N'\!\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)$$

admits the following limit representation:

$$
\lim_{t\to\infty} N'\!\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right) = 0^+
\tag{4.86}
$$

Proof. Let (r − rc − ½σ²) ≠ 0. By direct calculation of the limit,

$$
\lim_{t\to\infty} N'\!\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)
= \lim_{t\to\infty} \frac{1}{\sqrt{2\pi}}\exp\!\left(-\frac{1}{2}\left[\frac{(r - r_c - \frac{1}{2}\sigma^2)\sqrt{t}}{\sigma} - \frac{\ln[NC/S_0]}{\sigma\sqrt{t}}\right]^2\right)
= \frac{1}{\sqrt{2\pi}}\exp\bigl(-\tfrac{1}{2}[\pm\infty]^2\bigr) = 0^+
$$

since, for (r − rc − ½σ²) ≠ 0, the argument of the density diverges. Hence Equation 4.86 is proved. □

Proposition 4.31. The quantity

$$N'\!\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)$$

can be expressed in terms of the quantity

$$N'\!\left(\frac{-(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)$$


according to the following equality:

$$
N'\!\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)
= \left(\frac{NC}{S_0}\right)^{(2(r-r_c)/\sigma^2)-1}
N'\!\left(\frac{-(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)
\tag{4.87}
$$
Proof. By direct calculation,

$$
N'\!\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)
= \frac{1}{\sqrt{2\pi}}\exp\!\left(-\frac{1}{2\sigma^2 t}\left[(r - r_c - \tfrac{1}{2}\sigma^2)t - \ln\frac{NC}{S_0}\right]^2\right)
$$

and, after a little algebra based on the identity

$$
\left[(r - r_c - \tfrac{1}{2}\sigma^2)t - \ln\frac{NC}{S_0}\right]^2
= \left[(r - r_c - \tfrac{1}{2}\sigma^2)t + \ln\frac{NC}{S_0}\right]^2 - 4(r - r_c - \tfrac{1}{2}\sigma^2)t\,\ln\frac{NC}{S_0}
$$

it follows that

$$
N'\!\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)
= \exp\!\left(\frac{2(r - r_c - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right)
N'\!\left(\frac{-(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)
$$

Since the exponential factor can be rewritten as (NC/S₀)^{(2(r−rc)/σ²)−1},


and hence

$$
N'\!\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)
= \left(\frac{NC}{S_0}\right)^{(2(r-r_c)/\sigma^2)-1}
N'\!\left(\frac{-(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)
\tag{4.87}
$$

□

Theorem 4.32. The first partial derivative of P(σ, t) with respect to t admits the following explicit representation:

$$
\frac{\partial P(\sigma, t)}{\partial t}
= N'\!\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)\frac{\ln[NC/S_0]}{\sigma\sqrt{t^3}}
\tag{4.88}
$$

This quantity is always strictly greater than 0.

Proof. Recall the closed formula for P(σ, t), ie Equation 4.72, and write, for brevity,

$$
d_1 = \frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}},\qquad
d_2 = \frac{-(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}},\qquad
c = \frac{2(r - r_c - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}
$$

so that P(σ, t) = N(d₁) + e^c N(d₂). Then its first partial derivative with respect to t is given by

$$
\frac{\partial P(\sigma, t)}{\partial t}
= N'(d_1)\frac{\partial d_1}{\partial t} + \mathrm{e}^{c} N'(d_2)\frac{\partial d_2}{\partial t}
$$

where

$$
\frac{\partial d_1}{\partial t} = \frac{(r - r_c - \frac{1}{2}\sigma^2)t + \ln[NC/S_0]}{2\sigma\sqrt{t^3}},\qquad
\frac{\partial d_2}{\partial t} = \frac{-(r - r_c - \frac{1}{2}\sigma^2)t + \ln[NC/S_0]}{2\sigma\sqrt{t^3}}
$$


By using Expression 4.87 — where d₁ and d₂ denote, respectively, the arguments ((r − rc − ½σ²)t − ln[NC/S₀])/(σ√t) and (−(r − rc − ½σ²)t − ln[NC/S₀])/(σ√t), and c = 2(r − rc − ½σ²)ln[NC/S₀]/σ² — and noting that (NC/S₀)^{(2(r−rc)/σ²)−1} = e^c, it holds that e^c N′(d₂) = N′(d₁). Factorising

$$N'\!\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)$$

and simplifying,

$$
\frac{\partial P(\sigma, t)}{\partial t}
= N'(d_1)\left[\frac{\partial d_1}{\partial t} + \frac{\partial d_2}{\partial t}\right]
= N'(d_1)\,\frac{2\ln[NC/S_0]}{2\sigma\sqrt{t^3}}
$$

which yields

$$
\frac{\partial P(\sigma, t)}{\partial t}
= N'\!\left(\frac{(r - r_c - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right)\frac{\ln[NC/S_0]}{\sigma\sqrt{t^3}}
\tag{4.88}
$$

Since the quantity ln[NC/S₀]/(σ√t³) is strictly positive, and N′(·) is a Gaussian density and so always greater than 0, it is immediately clear that ∂P(σ, t)/∂t is always positive. □

This theorem confirms that the derivative ∂P(σ, t)/∂t is always positive, indicating that the cumulative distribution function of the first hitting times is monotonically increasing with respect to time, as is


Figure 4.10 Plot of the function ∂P(σ,t)/∂t with respect to the time t (r − rc = 3.5%, σ = 2.4%). [Figure: curve plotted for t from 0 to 5 years; vertical axis from 0 to 1.8.]

Figure 4.11 Plot of the function ∂P(σ,t)/∂t with respect to the time t (r − rc = 3.5%, σ = 30%). [Figure: curve plotted for t from 0 to 1 year; vertical axis from 0 to 100.]

shown by Figures 4.10 and 4.11 for the cases r − rc − ½σ² > 0 and r − rc − ½σ² < 0, respectively.
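The closed formula 4.72 and the monotonicity stated by Theorem 4.32 are easy to check numerically. The sketch below is illustrative only: the parameter values mirror the figures (ic = 2%, so S0 = 98 against a cost-recovery barrier NC = 100, and r − rc = 3.5%), and the helper names are the author's, not the book's. It evaluates P(σ, t) on a grid of times and verifies that it is strictly increasing in t for both a low- and a high-volatility product.

```python
import math

def norm_cdf(x):
    # standard normal cumulative distribution function N(x)
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def first_hitting_cdf(sigma, t, r_rc=0.035, S0=98.0, NC=100.0):
    # P(sigma, t): probability of recovering the costs (hitting the barrier
    # NC starting from S0) within time t, for a GBM with drift r - rc
    # (Equation 4.72)
    mu = r_rc - 0.5 * sigma**2
    L = math.log(NC / S0)
    st = sigma * math.sqrt(t)
    return (norm_cdf((mu * t - L) / st)
            + math.exp(2.0 * mu * L / sigma**2) * norm_cdf((-mu * t - L) / st))

# finite check of Theorem 4.32: P(sigma, .) is strictly increasing in t
for sigma in (0.024, 0.30):
    grid = [0.25 * k for k in range(1, 21)]
    values = [first_hitting_cdf(sigma, t) for t in grid]
    assert all(b > a for a, b in zip(values, values[1:]))
    assert all(0.0 < v < 1.0 for v in values)
```

Changing the assumed parameters alters the speed of convergence of the curve, but not its monotonicity, in line with the theorem.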
Theorem 4.33 gives the closed formula of the first partial derivative of P(σ, t) with respect to the other fundamental independent variable that affects its behaviour: the volatility.


Theorem 4.33. The first partial derivative of P(σ, t) with respect to σ admits the following explicit representation

\[
\frac{\partial P(\sigma,t)}{\partial\sigma}
=\ln\!\left[\frac{NC}{S_0}\right]
\exp\!\left(\frac{2(r-rc-\tfrac12\sigma^2)\ln[NC/S_0]}{\sigma^2}\right)
\left[N'\!\left(\frac{-(r-rc-\tfrac12\sigma^2)t-\ln[NC/S_0]}{\sigma\sqrt t}\right)\frac{2}{\sigma^2\sqrt t}
-N\!\left(\frac{-(r-rc-\tfrac12\sigma^2)t-\ln[NC/S_0]}{\sigma\sqrt t}\right)\frac{4(r-rc)}{\sigma^3}\right]
\tag{4.89}
\]
Proof. Throughout this proof write, for brevity,

\[
d_1(t)=\frac{(r-rc-\tfrac12\sigma^2)t-\ln[NC/S_0]}{\sigma\sqrt t},\qquad
d_2(t)=\frac{-(r-rc-\tfrac12\sigma^2)t-\ln[NC/S_0]}{\sigma\sqrt t},\qquad
c(\sigma)=\frac{2(r-rc-\tfrac12\sigma^2)\ln[NC/S_0]}{\sigma^2}
\]

so that the closed formula 4.72 reads P(σ,t) = N(d₁(t)) + e^{c(σ)} N(d₂(t)). Taking its first partial derivative with respect to σ gives

\[
\frac{\partial P(\sigma,t)}{\partial\sigma}
=N'(d_1)\frac{\partial d_1}{\partial\sigma}
+\mathrm e^{c}\,N(d_2)\frac{\partial c}{\partial\sigma}
+\mathrm e^{c}\,N'(d_2)\frac{\partial d_2}{\partial\sigma}
\]

By using Expression 4.87, ie N′(d₁) = e^{c} N′(d₂), the first and third addends can be collected:

\[
\frac{\partial P(\sigma,t)}{\partial\sigma}
=\mathrm e^{c}\,N'(d_2)\left[\frac{\partial d_1}{\partial\sigma}+\frac{\partial d_2}{\partial\sigma}\right]
+\mathrm e^{c}\,N(d_2)\frac{\partial c}{\partial\sigma}
\]

Direct calculation of the three derivatives gives

\[
\frac{\partial d_1}{\partial\sigma}=\frac{\ln[NC/S_0]-(r-rc+\tfrac12\sigma^2)t}{\sigma^2\sqrt t},\qquad
\frac{\partial d_2}{\partial\sigma}=\frac{\ln[NC/S_0]+(r-rc+\tfrac12\sigma^2)t}{\sigma^2\sqrt t},\qquad
\frac{\partial c}{\partial\sigma}=-\frac{4(r-rc)\ln[NC/S_0]}{\sigma^3}
\]

so that, simplifying,

\[
\frac{\partial P(\sigma,t)}{\partial\sigma}
=\mathrm e^{c}\left[N'(d_2)\frac{2\ln[NC/S_0]}{\sigma^2\sqrt t}
-N(d_2)\frac{4(r-rc)\ln[NC/S_0]}{\sigma^3}\right]
\]

Factorising ln[NC/S0] e^{c(σ)} gives Equation 4.89, ie

\[
\frac{\partial P(\sigma,t)}{\partial\sigma}
=\ln\!\left[\frac{NC}{S_0}\right]\mathrm e^{c(\sigma)}
\left[N'(d_2(t))\frac{2}{\sigma^2\sqrt t}-N(d_2(t))\frac{4(r-rc)}{\sigma^3}\right]
\tag{4.89}
\]
□

Even if it is available in analytical form, the behaviour of this partial derivative is quite complex to determine. Figures 4.12 and 4.13 plot this derivative against time for different volatility levels; in Figure 4.12 the case where r − rc − ½σ² > 0 is considered, while Figure 4.13 covers the complementary case of r − rc − ½σ² < 0. The two curves have very different behaviours.
In order to better understand the characteristics of this first partial derivative, the next section provides some useful limit representations.

4.1.5.2 Limit representations of the first-order partial derivative with respect to the volatility
The following technical lemma is an intermediate step required to study the behaviour of the first partial derivative with respect to volatility. The reader may easily skip it without concern and go straight to Theorem 4.35.


Figure 4.12 Plot of the function ∂P(σ,t)/∂σ with respect to the volatility (r − rc = 3.5%, σ = 2.4%). [Figure: curve plotted for t from 0 to 12 years.]

Figure 4.13 Plot of the function ∂P(σ,t)/∂σ with respect to the volatility (r − rc = 3.5%, σ = 30%). [Figure: curve plotted for t from 0 to 40 years; vertical axis from −0.2 to 0.8.]

Lemma 4.34. Let r − rc − ½σ² > 0. The following limit representation holds

\[
\lim_{t\to\infty}
\frac{(1/\sqrt t)\,N'\!\left(\dfrac{-(r-rc-\tfrac12\sigma^2)t-\ln[NC/S_0]}{\sigma\sqrt t}\right)}
{N\!\left(\dfrac{-(r-rc-\tfrac12\sigma^2)t-\ln[NC/S_0]}{\sigma\sqrt t}\right)}
=\frac{r-rc-\tfrac12\sigma^2}{\sigma}
\tag{4.90}
\]


Proof. By assumption r − rc − ½σ² > 0; write, for brevity, μ = r − rc − ½σ², L = ln[NC/S0] and d₂(t) = (−μt − L)/(σ√t). Proceeding analogously to Proposition 4.30, d₂(t) → −∞ as t → ∞, so that

\[
\lim_{t\to\infty}\frac{(1/\sqrt t)\,N'(d_2(t))}{N(d_2(t))}=\frac{0^+\cdot N'(-\infty)}{N(-\infty)}=\frac{0^+}{0^+}
\]

This limit is not determined. In order to obtain an explicit form for the above limit, L'Hôpital's theorem (Minenna 2006) is applied, ie

\[
\lim_{t\to\infty}\frac{(1/\sqrt t)\,N'(d_2(t))}{N(d_2(t))}
=\lim_{t\to\infty}\frac{\dfrac{\mathrm d}{\mathrm dt}\big[t^{-1/2}N'(d_2(t))\big]}{\dfrac{\mathrm d}{\mathrm dt}\big[N(d_2(t))\big]}
\tag{4.91}
\]

The right-hand side of Equation 4.91 can be re-expressed as

\[
\lim_{t\to\infty}\frac{-\tfrac12t^{-3/2}N'(d_2(t))+t^{-1/2}N''(d_2(t))\,d_2'(t)}{N'(d_2(t))\,d_2'(t)},
\qquad\text{where }d_2'(t)=\frac{L-\mu t}{2\sigma t^{3/2}}
\]

By exploiting the equality

\[
N''(x)=-x\,N'(x)
\]

and simplifying, it follows that the right-hand side of Equation 4.91 equals

\[
\lim_{t\to\infty}\left[-\frac{\tfrac12t^{-3/2}}{d_2'(t)}-t^{-1/2}d_2(t)\right]
=\lim_{t\to\infty}\frac{\sigma}{\mu t-L}+\lim_{t\to\infty}\frac{\mu t+L}{\sigma t}
=0+\frac{\mu}{\sigma}
\]

and therefore

\[
\lim_{t\to\infty}\frac{\dfrac{\mathrm d}{\mathrm dt}\big[t^{-1/2}N'(d_2(t))\big]}{\dfrac{\mathrm d}{\mathrm dt}\big[N(d_2(t))\big]}
=\frac{r-rc-\tfrac12\sigma^2}{\sigma}
\tag{4.92}
\]

Substituting the right-hand side of Equation 4.92 into the right-hand side of Equation 4.91 yields

\[
\lim_{t\to\infty}\frac{(1/\sqrt t)\,N'(d_2(t))}{N(d_2(t))}=\frac{r-rc-\tfrac12\sigma^2}{\sigma}
\tag{4.90}
\]
□

At this point it is possible to state some limit theorems that are useful to study the tails of the function ∂P(σ,t)/∂σ.

Theorem 4.35. The following limit representation for ∂P(σ,t)/∂σ holds

\[
\lim_{t\to0^+}\frac{\partial P(\sigma,t)}{\partial\sigma}=0^+
\tag{4.93}
\]

Proof. Throughout this proof write, for brevity, L = ln[NC/S0], c = 2(r − rc − ½σ²)L/σ² and d₂(t) = (−(r − rc − ½σ²)t − L)/(σ√t), so that Equation 4.89 reads

\[
\frac{\partial P(\sigma,t)}{\partial\sigma}
=L\,\mathrm e^{c}\left[N'(d_2(t))\frac{2}{\sigma^2\sqrt t}-N(d_2(t))\frac{4(r-rc)}{\sigma^3}\right]
\tag{4.89}
\]

First it will be proved that the limit of interest is 0, and then further analysis will discover its sign. By direct calculation

\[
\lim_{t\to0^+}\frac{\partial P(\sigma,t)}{\partial\sigma}
=L\,\mathrm e^{c}\left[\lim_{t\to0^+}N'(d_2(t))\frac{2}{\sigma^2\sqrt t}
-\frac{4(r-rc)}{\sigma^3}\lim_{t\to0^+}N(d_2(t))\right]
\tag{4.94}
\]

With regard to the first limit in the right-hand side of Equation 4.94, it suffices to note that

\[
d_2(t)\sim-\frac{L}{\sigma\sqrt t}\quad\text{for }t\to0^+
\]

By the exponential decay of N′(x) it follows that N′(d₂(t)) = o(√t) for t → 0⁺, and hence

\[
\lim_{t\to0^+}N'(d_2(t))\frac{2}{\sigma^2\sqrt t}=0^+
\tag{4.95}
\]

With regard to the second limit in the right-hand side of Equation 4.94, by Expression 4.82, Proposition 4.29 leads to

\[
\lim_{t\to0^+}\frac{4(r-rc)}{\sigma^3}N(d_2(t))=0^+
\tag{4.96}
\]

By properly substituting the last two limits inside Equation 4.94, it follows that the limit of interest equals L e^{c}(0⁺ − 0⁺), and so

\[
\lim_{t\to0^+}\frac{\partial P(\sigma,t)}{\partial\sigma}=0
\tag{4.97}
\]

In order to determine the sign of ∂P(σ,t)/∂σ for t → 0⁺, it can easily be proved that the term N′(d₂(t))·2/(σ²√t) strictly dominates the term N(d₂(t))·4(r−rc)/σ³ for t positive and close enough to 0. In fact, such dominance implies that lim_{t→0⁺} ∂P(σ,t)/∂σ = 0⁺.

To this end, consider that for y < −1 it follows that (1/√(2π)) e^{−y²/2} < (−y/√(2π)) e^{−y²/2}, and hence

\[
\int_{-\infty}^{x}\frac{1}{\sqrt{2\pi}}\,\mathrm e^{-y^2/2}\,\mathrm dy
<\int_{-\infty}^{x}\frac{-y}{\sqrt{2\pi}}\,\mathrm e^{-y^2/2}\,\mathrm dy
\quad\text{for }x<-1
\tag{4.98}
\]

Solving the integral appearing in the right-hand side of Equation 4.98 gives

\[
\int_{-\infty}^{x}\frac{-y}{\sqrt{2\pi}}\,\mathrm e^{-y^2/2}\,\mathrm dy
=\frac{1}{\sqrt{2\pi}}\,\mathrm e^{-x^2/2}=N'(x)
\]

which leads to

\[
N(x)<N'(x)\quad\text{for }x<-1
\tag{4.99}
\]

Now, setting x = d₂(t), it follows that x → −∞ for t → 0⁺. Accordingly, for t positive and close enough to 0, the condition x < −1 is satisfied and, therefore, Expression 4.99 becomes

\[
N(d_2(t))<N'(d_2(t))
\tag{4.100}
\]

From the above inequality it follows that

\[
N'(d_2(t))\frac{2}{\sigma^2\sqrt t}-N(d_2(t))\frac{4(r-rc)}{\sigma^3}
>N'(d_2(t))\left[\frac{2}{\sigma^2\sqrt t}-\frac{4(r-rc)}{\sigma^3}\right]
\]

which, recalling Equation 4.89, leads to

\[
\frac{\partial P(\sigma,t)}{\partial\sigma}
>L\,\mathrm e^{c}\,N'(d_2(t))\left[\frac{2}{\sigma^2\sqrt t}-\frac{4(r-rc)}{\sigma^3}\right]
\]

Since L e^{c} > 0 and

\[
\lim_{t\to0^+}\left[\frac{2}{\sigma^2\sqrt t}-\frac{4(r-rc)}{\sigma^3}\right]=+\infty
\]

it is proved that

\[
\lim_{t\to0^+}\frac{\partial P(\sigma,t)}{\partial\sigma}=0^+
\tag{4.93}
\]
□

This analytical derivation provides a useful insight for assessing the behaviour of the first partial derivative with respect to the volatility: for very short times, an increasing level of volatility is always associated with a rising probability of hitting the barrier of cost recovery; in other terms, the volatility effect is always predominant in a neighbourhood of 0, and this result is independent of the size of the drift and of the volatility level.

This phenomenon is related to the stochastic nature of the volatility effect; in fact, the volatility controls the magnitude of the shocks induced by the standard Brownian motion Wt which appears in Definition 4.1 and which, as is known, is characterised by potentially unbounded variation on an infinitesimal time interval. On the other hand, the effect of the drift is deterministic and can be considered negligible for times that are close enough to 0. This concept, related to the variations of the geometric Brownian motion in continuous time, finds a useful interpretation in the standard representation of the corresponding stochastic differential equation in discrete time via a typical Euler scheme, ie

\[
\Delta S_t=(r-rc)\,S_t\,\Delta t+\sigma S_t\sqrt{\Delta t}\,\varepsilon_t
\tag{4.101}
\]

This is the discrete version of Equation 4.1, where εt is a sample realisation from a standard normal variable. It is evident that, for very small Δt, the effect due to the random term √Δt εt is several orders of magnitude greater than the effect of the drift term Δt. To give an order-of-magnitude estimate, for Δt = one day (a small, but reasonable, time scale), the drift term has an impact on the overall price dynamics that is 15 times smaller than the influence of the diffusive coefficient.
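The dominance of the diffusive term at small time steps can be made concrete by comparing the two coefficients of Equation 4.101 directly. A minimal sketch, assuming illustrative values (r − rc = 3.5% as in the figures, σ = 30%, and a "one day" step of 1/252 of a trading year):

```python
import math

# Euler discretisation (Equation 4.101):
#   Delta S = (r - rc) * S * Delta_t + sigma * S * sqrt(Delta_t) * eps
r_rc, sigma = 0.035, 0.30
dt = 1.0 / 252  # one trading day

drift_impact = r_rc * dt                  # deterministic term, O(dt)
diffusion_impact = sigma * math.sqrt(dt)  # typical shock size, O(sqrt(dt))
ratio = diffusion_impact / drift_impact
print(ratio)  # roughly 136 with these values: the shock term dominates
```

Repeating the computation with a lower volatility shrinks the ratio, but on a daily scale the diffusive coefficient remains at least an order of magnitude stronger than the drift, which is the point made in the paragraph above.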
The next theorem, by studying the limit of ∂P(σ,t)/∂σ as t → ∞ when r − rc − ½σ² > 0, offers a useful contribution to a better


formalisation of the conjecture (partially proved in Corollary 4.28 for r − rc − ½σ² < 0) that connects the condition of a positive drift with the existence of an admissible region of finite times, that is, a region of times where ∂P(σ,t)/∂σ < 0.

Theorem 4.36. Let r − rc − ½σ² > 0. The following limit representation for ∂P(σ,t)/∂σ holds

\[
\lim_{t\to\infty}\frac{\partial P(\sigma,t)}{\partial\sigma}=0^-
\tag{4.102}
\]

Proof. Throughout this proof write, for brevity, μ = r − rc − ½σ², L = ln[NC/S0], c = 2μL/σ² and d₂(t) = (−μt − L)/(σ√t), so that Equation 4.89 reads

\[
\frac{\partial P(\sigma,t)}{\partial\sigma}
=L\,\mathrm e^{c}\left[N'(d_2(t))\frac{2}{\sigma^2\sqrt t}-N(d_2(t))\frac{4(r-rc)}{\sigma^3}\right]
\tag{4.89}
\]

It will first be proved that the limit of interest is 0, and then further analysis will discover its sign. By direct calculation

\[
\lim_{t\to\infty}\frac{\partial P(\sigma,t)}{\partial\sigma}
=L\,\mathrm e^{c}\left[\lim_{t\to\infty}N'(d_2(t))\frac{2}{\sigma^2\sqrt t}
-\frac{4(r-rc)}{\sigma^3}\lim_{t\to\infty}N(d_2(t))\right]
\tag{4.103}
\]

With regard to the first limit in the right-hand side of Equation 4.103, as N′(x) is a bounded function, it easily follows that

\[
\lim_{t\to\infty}N'(d_2(t))\frac{2}{\sigma^2\sqrt t}=0
\tag{4.104}
\]

With regard to the second limit in the right-hand side of Equation 4.103, the assumption μ = r − rc − ½σ² > 0 leads to

\[
\lim_{t\to\infty}N(d_2(t))
=\lim_{t\to\infty}N\!\left(-\frac{\mu}{\sigma}\sqrt t-\frac{L}{\sigma\sqrt t}\right)
\tag{4.105}
\]
\[
=N(-\infty)
\tag{4.106}
\]

and hence

\[
\lim_{t\to\infty}N(d_2(t))=0
\tag{4.107}
\]

By properly substituting the limits 4.104 and 4.107 into Equation 4.103, it follows that

\[
\lim_{t\to\infty}\frac{\partial P(\sigma,t)}{\partial\sigma}=0
\tag{4.108}
\]

At this point, in order to determine the sign of ∂P(σ,t)/∂σ for t → ∞, Equation 4.89 is re-expressed as

\[
\frac{\partial P(\sigma,t)}{\partial\sigma}
=L\,\mathrm e^{c}\,N(d_2(t))
\left[\frac{N'(d_2(t))}{N(d_2(t))}\frac{2}{\sigma^2\sqrt t}-\frac{4(r-rc)}{\sigma^3}\right]
\tag{4.109}
\]

By exploiting Lemma 4.34, after a little algebra it follows that

\[
\lim_{t\to\infty}\left[\frac{N'(d_2(t))}{N(d_2(t))}\frac{2}{\sigma^2\sqrt t}-\frac{4(r-rc)}{\sigma^3}\right]
=\frac{2\mu}{\sigma^3}-\frac{4(r-rc)}{\sigma^3}
=-\frac{2(r-rc-\tfrac12\sigma^2)+2\sigma^2}{\sigma^3}
\tag{4.110}
\]

and thus, for t large enough,

\[
\frac{N'(d_2(t))}{N(d_2(t))}\frac{2}{\sigma^2\sqrt t}-\frac{4(r-rc)}{\sigma^3}<0
\tag{4.111}
\]

From the inequality appearing in the previous expression and given that

\[
L\,\mathrm e^{c}\,N(d_2(t))>0
\tag{4.112}
\]

by looking at Equation 4.109 it easily follows that

\[
\lim_{t\to\infty}\frac{\partial P(\sigma,t)}{\partial\sigma}=0^-
\tag{4.102}
\]
□

This result, at first glance, shows that the condition of a positive drift is enough to guarantee the existence of a region of finite times at which it still holds that ∂P(σ,t)/∂σ < 0, even when the volatility is low. In fact, given r and rc, the only way to ensure that r − rc − ½σ² > 0 is to consider values of volatility that are reasonably low. Since in a low-volatility framework the cumulative distribution function of the first hitting times converges to 1,² the magnitude of the values taken by ∂P(σ,t)/∂σ, although always negative, decays very quickly to negligible levels.
The following theorem considers how this partial derivative performs as volatility becomes very large.

Theorem 4.37. The following limit representation for ∂P(σ,t)/∂σ holds

\[
\lim_{\sigma\to\infty}\frac{\partial P(\sigma,t)}{\partial\sigma}=0^-
\tag{4.113}
\]


Proof. Throughout this proof write, for brevity, L = ln[NC/S0], c(σ) = 2(r − rc − ½σ²)L/σ² and d₂(t,σ) = (−(r − rc − ½σ²)t − L)/(σ√t), so that Equation 4.89 reads

\[
\frac{\partial P(\sigma,t)}{\partial\sigma}
=L\,\mathrm e^{c(\sigma)}\left[N'(d_2)\frac{2}{\sigma^2\sqrt t}-N(d_2)\frac{4(r-rc)}{\sigma^3}\right]
\tag{4.89}
\]

Since N(x) and N′(x) are bounded functions and

\[
\lim_{\sigma\to\infty}\mathrm e^{c(\sigma)}
=\lim_{\sigma\to\infty}\exp\!\left(\left[\frac{2(r-rc)}{\sigma^2}-1\right]L\right)
=\mathrm e^{-L}=\frac{S_0}{NC}
\tag{4.114}
\]

it follows that

\[
\lim_{\sigma\to\infty}\frac{\partial P(\sigma,t)}{\partial\sigma}=0
\tag{4.115}
\]

In order to determine the sign of ∂P(σ,t)/∂σ for σ → ∞, it must be observed that both L/σ³ and e^{c(σ)} are positive quantities. By direct calculation

\[
d_2(t,\sigma)=\frac{(\tfrac12\sigma^2-(r-rc))t-L}{\sigma\sqrt t}=O(\sigma)\quad\text{for }\sigma\to\infty
\tag{4.116}
\]

which leads to

\[
\lim_{\sigma\to\infty}N(d_2(t,\sigma))=1
\tag{4.117}
\]

Given the exponential decay of N′(x), from Equation 4.116 it follows that

\[
N'(d_2(t,\sigma))\leqslant\frac{1}{\sigma^2}\quad\text{for }\sigma\text{ large enough}
\]

Substituting the above inequality into Expression 4.89 gives

\[
\frac{\partial P(\sigma,t)}{\partial\sigma}
\leqslant\frac{L}{\sigma^3}\,\mathrm e^{c(\sigma)}
\left[\frac{1}{\sigma}\frac{2}{\sqrt t}-4(r-rc)\,N(d_2(t,\sigma))\right]
\tag{4.118}
\]


Figure 4.14 Plot of the function ∂P(σ,t)/∂σ on the space (σ, t) (ic = 2%, r − rc = 3.5%). [Spectrum chart: t (years) from 0 to 100 against σ (%) from 10 to 90; colour scale from −0.8 to 0.8.]

Figure 4.15 Plot of the function ∂²P(σ,t)/∂σ² on the space (σ, t) (ic = 2%, r − rc = 3.5%). [Spectrum chart: t (years) from 0 to 100 against σ (%) from 10 to 90; colour scale from −0.8 to 0.8.]

and hence, from Equation 4.117 (the bracketed term in Expression 4.118 converges to −4(r − rc) < 0), it follows that

\[
\lim_{\sigma\to\infty}\frac{\partial P(\sigma,t)}{\partial\sigma}=0^-
\tag{4.113}
\]
□

This theorem states that for extremely high values of volatility the
only phenomenon that is really measurable by using the tool of the


first partial derivative is the increasing number of trajectories that have a hitting time equal to infinity (ie, the costs are never recovered); the values assumed by ∂P(σ,t)/∂σ are negligible since the cumulative probability distribution functions are nearly flat when the volatility reaches these extrema. Indeed, in this environment the cumulative distribution function tends to collapse to the value of the asymptote S0/NC, as seen in Proposition 4.26.
Figure 4.14 shows the behaviour of the function ∂P(σ,t)/∂σ by using a spectrum chart in the space (σ, T) for a given choice of r, rc and ic. Areas in red correspond to strictly positive values for ∂P(σ,t)/∂σ, while areas in blue refer to negative values. The green areas represent values that are in the neighbourhood of 0.

4.1.5.3 Second-order partial derivatives

Theorem 4.38. The second-order partial derivative of P(σ,t) with respect to σ, ie ∂²P(σ,t)/∂σ², can be represented as

\[
\frac{\partial^2P(\sigma,t)}{\partial\sigma^2}
=\ln\!\left[\frac{NC}{S_0}\right]
\exp\!\left(\frac{2(r-rc-\tfrac12\sigma^2)\ln[NC/S_0]}{\sigma^2}\right)
\left[c_1\,N'\!\left(\frac{-(r-rc-\tfrac12\sigma^2)t-\ln[NC/S_0]}{\sigma\sqrt t}\right)
+c_2\,N\!\left(\frac{-(r-rc-\tfrac12\sigma^2)t-\ln[NC/S_0]}{\sigma\sqrt t}\right)\right]
\tag{4.119}
\]

where

\[
c_1=\frac{2\ln[NC/S_0]\big(\ln[NC/S_0]-4(r-rc)t\big)-2(r-rc+\tfrac12\sigma^2)^2t^2-4\sigma^2t}{\sigma^5\sqrt{t^3}},
\qquad
c_2=\frac{4(r-rc)}{\sigma^4}\left(3+\frac{4(r-rc)\ln[NC/S_0]}{\sigma^2}\right)
\tag{4.120}
\]

Proof. The proof is left to the reader. □

Figure 4.15 shows the behaviour of the function ∂²P(σ,t)/∂σ² by using a spectrum chart in the space (σ, T) for a given choice of r, rc and ic. Areas in red correspond to strictly positive values for ∂²P(σ,t)/∂σ², while areas in blue refer to negative values. The green areas represent values that are in the neighbourhood of 0.


Theorem 4.39. The second-order cross partial derivative of P(σ,t) with respect to σ and t, ie ∂²P(σ,t)/∂σ∂t, can be represented by

\[
\frac{\partial^2P(\sigma,t)}{\partial\sigma\,\partial t}
=\frac{\ln[NC/S_0]}{\sigma\sqrt{t^3}}\,
N'\!\left(\frac{(r-rc-\tfrac12\sigma^2)t-\ln[NC/S_0]}{\sigma\sqrt t}\right)
\left[\frac{(\ln[NC/S_0]-(r-rc)t)^2}{\sigma^3t}-\frac1\sigma-\frac{\sigma t}{4}\right]
\tag{4.121}
\]
Proof. Starting from Equation 4.88, ie

\[
\frac{\partial P(\sigma,t)}{\partial t}
=N'(d_1(t))\frac{\ln[NC/S_0]}{\sigma\sqrt{t^3}},\qquad
d_1(t)=\frac{(r-rc-\tfrac12\sigma^2)t-\ln[NC/S_0]}{\sigma\sqrt t}
\]

and observing that

\[
\frac{\partial^2P(\sigma,t)}{\partial\sigma\,\partial t}
=\frac{\partial}{\partial\sigma}\left[\frac{\partial P(\sigma,t)}{\partial t}\right]
\]

by direct calculation it follows that

\[
\frac{\partial^2P(\sigma,t)}{\partial\sigma\,\partial t}
=N''(d_1)\frac{\partial d_1}{\partial\sigma}\frac{\ln[NC/S_0]}{\sigma\sqrt{t^3}}
-N'(d_1)\frac{\ln[NC/S_0]}{\sigma^2\sqrt{t^3}}
\]

Recalling the equality N″(x) = −xN′(x), and after a little algebra,

\[
\frac{\partial^2P(\sigma,t)}{\partial\sigma\,\partial t}
=\frac{\ln[NC/S_0]}{\sigma\sqrt{t^3}}\,N'(d_1)
\left[-d_1\frac{\partial d_1}{\partial\sigma}-\frac1\sigma\right]
\]

By calculating the derivative with respect to σ appearing in this last right-hand side,

\[
\frac{\partial d_1}{\partial\sigma}=\frac{\ln[NC/S_0]-(r-rc+\tfrac12\sigma^2)t}{\sigma^2\sqrt t},
\qquad
-d_1\frac{\partial d_1}{\partial\sigma}
=\frac{(\ln[NC/S_0]-(r-rc)t)^2-\tfrac14\sigma^4t^2}{\sigma^3t}
\]

and after some simplifications it follows that

\[
\frac{\partial^2P(\sigma,t)}{\partial\sigma\,\partial t}
=\frac{\ln[NC/S_0]}{\sigma\sqrt{t^3}}\,N'(d_1)
\left[\frac{(\ln[NC/S_0]-(r-rc)t)^2}{\sigma^3t}-\frac1\sigma-\frac{\sigma t}{4}\right]
\]
□


Figure 4.16 Plot of the function ∂²P(σ,t)/∂σ∂t on the space (σ, t) (ic = 2%, r − rc = 3.5%). [Spectrum chart: t (years) from 0 to 50 against σ (%) from 5 to 50; colour scale from −0.8 to 0.8.]

Figure 4.16 summarises the main properties of the function ∂²P(σ,t)/∂σ∂t by using a spectrum chart in the space (σ, T), for a given choice of r, rc and ic. Areas in red correspond to strictly positive values for ∂²P(σ,t)/∂σ∂t, while areas in blue refer to negative values. The green areas represent values that are in the neighbourhood of 0.
The following technical corollaries determine the sign of the second-order cross partial derivative ∂²P(σ,t)/∂σ∂t and its behaviour when t is negligible.
Corollary 4.40. Let (r − rc) > 0. Then, the equation

\[
\frac{\partial^2P(\sigma,t)}{\partial\sigma\,\partial t}=0
\tag{4.122}
\]

admits two distinct and positive roots t1 and t2 if (r − rc − ½σ²) > 0, and only one positive root t1 if (r − rc − ½σ²) < 0. (4.123)

Moreover, for all σ > 0 fixed, the sign of ∂²P(σ,t)/∂σ∂t is characterised as follows. If (r − rc − ½σ²) > 0:

∂²P(σ,t)/∂σ∂t > 0 for t < t1 and for t > t2, and ∂²P(σ,t)/∂σ∂t < 0 for t1 < t < t2. (4.124)

If (r − rc − ½σ²) < 0:

∂²P(σ,t)/∂σ∂t > 0 for t < t1, and ∂²P(σ,t)/∂σ∂t < 0 for t > t1. (4.125)
Proof. Recall Expression 4.121, ie

\[
\frac{\partial^2P(\sigma,t)}{\partial\sigma\,\partial t}
=\frac{\ln[NC/S_0]}{\sigma\sqrt{t^3}}\,
N'\!\left(\frac{(r-rc-\tfrac12\sigma^2)t-\ln[NC/S_0]}{\sigma\sqrt t}\right)
\left[\frac{(\ln[NC/S_0]-(r-rc)t)^2}{\sigma^3t}-\frac1\sigma-\frac{\sigma t}{4}\right]
\]

Let σ > 0 be fixed. Since NC/S0 > 1, t > 0 and N′(·) > 0, Equation 4.122 is equivalent to

\[
\frac{(\ln[NC/S_0]-(r-rc)t)^2}{\sigma^3t}-\frac1\sigma-\frac{\sigma t}{4}=0
\tag{4.126}
\]

From Equation 4.126, some elementary algebra leads to

\[
\left[(r-rc)^2-\frac{\sigma^4}{4}\right]t^2
-\left[\sigma^2+2(r-rc)\ln\frac{NC}{S_0}\right]t
+\ln^2\frac{NC}{S_0}=0
\tag{4.127}
\]

which is a second-order equation of the type at² + bt + c = 0 with constant coefficients.

The corresponding discriminant

\[
\Delta=\sigma^2\left[\sigma^2+4(r-rc)\ln\frac{NC}{S_0}+\sigma^2\ln^2\frac{NC}{S_0}\right]
\tag{4.128}
\]

is positive, so there exist two real solutions

\[
t_1=\frac{4(r-rc)\ln[NC/S_0]+2\sigma^2
-2\sigma\sqrt{\sigma^2+4(r-rc)\ln[NC/S_0]+\sigma^2\ln^2[NC/S_0]}}{4(r-rc)^2-\sigma^4}
\]
\[
t_2=\frac{4(r-rc)\ln[NC/S_0]+2\sigma^2
+2\sigma\sqrt{\sigma^2+4(r-rc)\ln[NC/S_0]+\sigma^2\ln^2[NC/S_0]}}{4(r-rc)^2-\sigma^4}
\tag{4.129}
\]


According to Descartes' rule, the signs of t1 and t2 can be determined by studying the signs of the coefficients in Equation 4.127. Clearly

\[
c=\ln^2\frac{NC}{S_0}>0
\qquad\text{and}\qquad
b=-\left[\sigma^2+2(r-rc)\ln\frac{NC}{S_0}\right]<0
\]

Moreover, a = (r − rc)² − ¼σ⁴ is positive if and only if r − rc − ½σ² > 0. Hence, Equation 4.123 follows.

The sign of the function ∂²P(σ,t)/∂σ∂t is easily characterised by looking at the sign of the coefficient a of the second-order equation. When r − rc − ½σ² > 0, a > 0; this implies that

∂²P(σ,t)/∂σ∂t > 0 for t < t1 and for t > t2, and ∂²P(σ,t)/∂σ∂t < 0 for t1 < t < t2. (4.124)

If r − rc − ½σ² < 0, then

∂²P(σ,t)/∂σ∂t > 0 for t < t1, and ∂²P(σ,t)/∂σ∂t < 0 for t > t1. (4.125)
□

The next two figures show the plot of the function ∂²P(σ,t)/∂σ∂t with respect to t for different volatility levels. Figure 4.17 is related to the regime where r − rc − ½σ² > 0. It is straightforward to identify the two solutions t1 and t2 and verify that the sign of ∂²P(σ,t)/∂σ∂t satisfies Expression 4.124.
Figure 4.18 is related to the regime where r − rc − ½σ² < 0. It is easy to identify the unique solution t1 and verify that the sign of the function ∂²P(σ,t)/∂σ∂t is consistent with Expression 4.125.

Corollary 4.41. Let r − rc > 0. The following limit representations for the roots t1 and t2 in Equations 4.129 hold

\[
\lim_{\sigma\to0}t_1=\lim_{\sigma\to0}t_2=\frac{\ln[NC/S_0]}{r-rc}
\tag{4.130}
\]

Proof. As σ → 0, the hypothesis r − rc − ½σ² > 0 is satisfied. Accordingly, Equation 4.122 admits two distinct and positive roots


Figure 4.17 Plot of the function ∂²P(σ,t)/∂σ∂t with respect to the time t (r − rc = 3.5%, σ = 2.4%). [Figure: curve plotted for t from 0 to 6 years; vertical axis from −40 to 160.]

Figure 4.18 Plot of the function ∂²P(σ,t)/∂σ∂t with respect to the time t (r − rc = 3.5%, σ = 30%). [Figure: curve plotted for t from 0 to 1 year; vertical axis from −50 to 400.]

whose analytical representation is given by Equation 4.129, ie

\[
t_1=\frac{4(r-rc)\ln[NC/S_0]+2\sigma^2
-2\sigma\sqrt{\sigma^2+4(r-rc)\ln[NC/S_0]+\sigma^2\ln^2[NC/S_0]}}{4(r-rc)^2-\sigma^4}
\]


  
\[
t_2=\frac{4(r-rc)\ln[NC/S_0]+2\sigma^2
+2\sigma\sqrt{\sigma^2+4(r-rc)\ln[NC/S_0]+\sigma^2\ln^2[NC/S_0]}}{4(r-rc)^2-\sigma^4}
\]

The limit of t1 as σ tends to 0 is given by

\[
\lim_{\sigma\to0}t_1
=\lim_{\sigma\to0}\frac{4(r-rc)\ln[NC/S_0]+2\sigma^2
-2\sigma\sqrt{\sigma^2+4(r-rc)\ln[NC/S_0]+\sigma^2\ln^2[NC/S_0]}}{4(r-rc)^2-\sigma^4}
=\frac{4(r-rc)\ln[NC/S_0]}{4(r-rc)^2}
\tag{4.131}
\]

and, hence, simplifying,

\[
\lim_{\sigma\to0}t_1=\frac{\ln[NC/S_0]}{r-rc}
\tag{4.132}
\]

The limit of t2 as σ tends to 0 is computed in the same way:

\[
\lim_{\sigma\to0}t_2=\frac{4(r-rc)\ln[NC/S_0]}{4(r-rc)^2}
\tag{4.133}
\]

and, hence, simplifying,

\[
\lim_{\sigma\to0}t_2=\frac{\ln[NC/S_0]}{r-rc}
\tag{4.134}
\]

By combining Expressions 4.132 and 4.134, it follows that

\[
\lim_{\sigma\to0}t_1=\lim_{\sigma\to0}t_2=\frac{\ln[NC/S_0]}{r-rc}
\tag{4.130}
\]
□
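Corollary 4.41 can be verified by solving Equation 4.127 directly. A short numerical sketch, with the same illustrative parameters as the figures (ic = 2%, so NC/S0 = 100/98, and r − rc = 3.5%; the function name is the author's):

```python
import math

def cross_derivative_roots(sigma, r_rc=0.035, L=math.log(100.0 / 98.0)):
    # roots t1 <= t2 of Equation 4.127:
    #   ((r-rc)^2 - sigma^4/4) t^2 - (2 (r-rc) L + sigma^2) t + L^2 = 0
    a = r_rc**2 - 0.25 * sigma**4
    b = -(2.0 * r_rc * L + sigma**2)
    c = L**2
    sqrt_disc = math.sqrt(b * b - 4.0 * a * c)   # discriminant 4.128 > 0
    return (-b - sqrt_disc) / (2.0 * a), (-b + sqrt_disc) / (2.0 * a)

t_technical = math.log(100.0 / 98.0) / 0.035     # ln[NC/S0]/(r - rc)

for sigma in (0.10, 0.05, 0.01):                 # regime r - rc - sigma^2/2 > 0
    t1, t2 = cross_derivative_roots(sigma)
    assert 0.0 < t1 < t2                         # two distinct positive roots

# as sigma -> 0 both roots collapse onto ln[NC/S0]/(r - rc), as in 4.130
t1, t2 = cross_derivative_roots(1e-5)
assert abs(t1 - t_technical) < 1e-3 and abs(t2 - t_technical) < 1e-3
```

For these values the common limit is about 0.58 years, the "technical minimum time" introduced later in Definition 4.48.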


4.1.6 Existence and uniqueness of the minimum time horizon for local correct ordering
The previous sections gradually introduced the key concepts for the determination of the recommended investment horizon in relation to the time of the cost-recovery event defined according to the weak probabilistic characterisation of Section 4.1.2.
These concepts showed that, given the characteristics of the cumulative probability distribution of the first hitting times, and in order to ensure the correct ordering of the different products in line with the principle "more volatility, more time", the identification of the recommended time horizon must rely on an approach that, for each product with a given level of volatility (and, hence, of risk), defines the minimum time for the cost recovery and the associated probability level.
In addition, from the analytical results given so far it is clear that the search for this minimum time horizon should be carried out inside an admissible region of times that is characterised by the condition ∂P(σ,t)/∂σ < 0. This latter condition is related to the (positive) sign of the drift of the stochastic differential equation describing the dynamics of the product's value over time. In particular, for both r − rc − ½σ² < 0 and r − rc − ½σ² > 0 the positivity of the drift ensures that there is a finite set of times at which the aforementioned condition is satisfied.
By using some technical findings provided by the asymptotic and the sensitivity analysis, this section will state the first fundamental theorem of the existence and uniqueness of a time horizon where ∂P(σ,t)/∂σ = 0 (ideally corresponding to the intersection between two almost identical cumulative distribution functions) and which is compliant with a local correct sorting of different products. Moreover, building on this theorem, other important features of this unique time horizon will be discovered with regard to its minimum-time nature and its key role in identifying the minimum admissible confidence level specific to any product.

Theorem 4.42. Let (r − rc) > 0. For any fixed volatility level σ > 0, there exists a unique time, which is denoted by Tσ, such that

\[
\left.\frac{\partial P(\sigma,t)}{\partial\sigma}\right|_{\sigma,T_\sigma}=0
\tag{4.135}
\]


Proof. From Theorem 4.35 it follows that

\[
\lim_{t\to0^+}\frac{\partial P(\sigma,t)}{\partial\sigma}=0^+
\tag{4.136}
\]

As t tends to ∞, the sign of this first partial derivative changes. In fact, for (r − rc − ½σ²) < 0, Corollary 4.28 gives

\[
\lim_{t\to\infty}\frac{\partial P(\sigma,t)}{\partial\sigma}<0
\tag{4.137}
\]

while for (r − rc − ½σ²) > 0, Theorem 4.36 gives

\[
\lim_{t\to\infty}\frac{\partial P(\sigma,t)}{\partial\sigma}=0^-
\tag{4.138}
\]

By continuity there exists at least one time Tσ ∈ ]0, ∞[ such that

\[
\left.\frac{\partial P(\sigma,t)}{\partial\sigma}\right|_{\sigma,T_\sigma}=0
\tag{4.135}
\]

Now, by exploiting the asymptotic behaviour of ∂P(σ,t)/∂σ and of its partial derivative with respect to t, it will be proved that the time Tσ is unique.

Case 1: (r − rc − ½σ²) < 0. From Equations 4.136, 4.137 and 4.125 it can easily be inferred that

∂P(σ,t)/∂σ > 0 and ∂²P(σ,t)/∂σ∂t > 0 for 0 < t < t1;
∂P(σ,t)/∂σ > 0 and ∂²P(σ,t)/∂σ∂t < 0 for t1 < t < Tσ;
∂P(σ,t)/∂σ < 0 and ∂²P(σ,t)/∂σ∂t < 0 for t > Tσ.

Accordingly, there exists a unique time Tσ such that

\[
\left.\frac{\partial P(\sigma,t)}{\partial\sigma}\right|_{\sigma,T_\sigma}=0
\]


Case 2: (r − rc − ½σ²) > 0. From Equations 4.136, 4.138 and 4.124 it can easily be inferred that

∂P(σ,t)/∂σ > 0 and ∂²P(σ,t)/∂σ∂t > 0 for 0 < t < t1;
∂P(σ,t)/∂σ > 0 and ∂²P(σ,t)/∂σ∂t < 0 for t1 < t < Tσ;
∂P(σ,t)/∂σ < 0 and ∂²P(σ,t)/∂σ∂t < 0 for Tσ < t < t2;
∂P(σ,t)/∂σ < 0 and ∂²P(σ,t)/∂σ∂t > 0 for t > t2. (4.140)

Also in this case, there exists a unique time Tσ that satisfies ∂P(σ,t)/∂σ|σ,Tσ = 0. □

The above theorem unambiguously indicates the lower bound of the admissible region of times, as formalised in the next corollary.

Corollary 4.43. The time Tσ such that ∂P(σ,t)/∂σ|σ,Tσ = 0 is the minimum time from which the function P(σ,t) is monotonically decreasing with respect to the volatility, ie

\[
T_\sigma=\min\left\{t\in\,]0,\infty[\;:\;\frac{\partial P(\sigma,t)}{\partial\sigma}\leqslant0\right\}
\tag{4.141}
\]

Proof. The proof follows easily from Theorem 4.42. □
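Since Theorem 4.42 guarantees that ∂P(σ,t)/∂σ switches sign exactly once, from positive to negative, the minimum time Tσ can be located by plain bisection on the sign of a finite-difference estimate of the derivative. A numerical sketch, under assumed illustrative parameters (ic = 2%, r − rc = 3.5%) and an assumed search bracket of [10⁻³, 200] years:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def P(sigma, t, r_rc=0.035, L=math.log(100.0 / 98.0)):
    # first-hitting-time CDF, Equation 4.72
    mu = r_rc - 0.5 * sigma**2
    st = sigma * math.sqrt(t)
    return (norm_cdf((mu * t - L) / st)
            + math.exp(2.0 * mu * L / sigma**2) * norm_cdf((-mu * t - L) / st))

def dP_dsigma(sigma, t, h=1e-6):
    # central finite difference standing in for Equation 4.89
    return (P(sigma + h, t) - P(sigma - h, t)) / (2.0 * h)

def minimum_time(sigma, t_lo=1e-3, t_hi=200.0):
    # dP/dsigma is > 0 before T_sigma and < 0 after it (Theorem 4.42),
    # so bisection on the sign locates the unique root
    for _ in range(100):
        t_mid = 0.5 * (t_lo + t_hi)
        if dP_dsigma(sigma, t_mid) > 0.0:
            t_lo = t_mid
        else:
            t_hi = t_mid
    return 0.5 * (t_lo + t_hi)

T = minimum_time(0.30)
assert dP_dsigma(0.30, 0.5 * T) > 0.0 and dP_dsigma(0.30, 1.5 * T) < 0.0
```

The final assertion is exactly the sign pattern of the theorem: the derivative is still positive below the located Tσ and already negative above it.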

Definition 4.44. For any fixed volatility level σ > 0, let Tσ be the corresponding minimum time. Then, the minimum confidence level, denoted by ᾱmin, is defined as

\[
\bar\alpha_{\min}=P(\sigma,T_\sigma)
\tag{4.142}
\]

Corollary 4.45. Let σ′ > σ and α > ᾱmin. Also let T^α_σ and T^α_{σ′} be two times such that

\[
P(\sigma,T^\alpha_\sigma)=P(\sigma',T^\alpha_{\sigma'})=\alpha
\tag{4.143}
\]

then the following property holds

\[
T^\alpha_\sigma<T^\alpha_{\sigma'}
\tag{4.144}
\]


Figure 4.19 Plot of the function P(σ, T) with respect to the time t (ic = 2%, r − rc = 3.5%); low-volatility environment. [Figure: curves for σ = 2.65%, σ = 6.15% and σ = 10.15%; vertical axis from 0.65 to 1.00 with the level ᾱmin marked; t from 0 to 4 years.]

Figure 4.20 Plot of the function P(σ, T) with respect to the time t (ic = 2%, r − rc = 3.5%); high-volatility environment. [Figure: curves for σ = 30%, σ = 42.65% and σ = 54.9%; vertical axis from 0.850 to 1.000 with the level ᾱmin marked; t from 0 to 50 years.]

The above corollary says that, for all the confidence levels that are strictly greater than the minimum one, if the volatility increases then the corresponding time horizon also increases. This phenomenon is represented in Figures 4.19 and 4.20 in both a low- and a high-volatility environment.
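The "more volatility, more time" ordering of Corollary 4.45 can also be sketched numerically. Since P(σ, ·) is strictly increasing in t (Theorem 4.32), the horizon at which a confidence level α is first reached can be found by bisection. The pair of volatilities below matches Figure 4.20 (σ = 30% and σ = 42.65%), while α = 0.985 is an assumed level taken to lie above ᾱmin for this pair; all parameter values are illustrative.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def P(sigma, t, r_rc=0.035, L=math.log(100.0 / 98.0)):
    # cumulative distribution function of the first hitting times (Eq. 4.72)
    mu = r_rc - 0.5 * sigma**2
    st = sigma * math.sqrt(t)
    return (norm_cdf((mu * t - L) / st)
            + math.exp(2.0 * mu * L / sigma**2) * norm_cdf((-mu * t - L) / st))

def horizon(sigma, alpha, t_lo=0.01, t_hi=400.0):
    # P(sigma, .) is strictly increasing in t, so bisection is safe
    for _ in range(100):
        t_mid = 0.5 * (t_lo + t_hi)
        if P(sigma, t_mid) < alpha:
            t_lo = t_mid
        else:
            t_hi = t_mid
    return 0.5 * (t_lo + t_hi)

# above the minimum confidence level: more volatility -> more time
alpha = 0.985
assert horizon(0.30, alpha) < horizon(0.4265, alpha)
```

Lowering α below the pairwise crossing level of the two curves makes the comparison flip, which is exactly why the corollary requires α > ᾱmin.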


Figure 4.21 Plot of the function ∂P(σ,t)/∂σ and the function of the minimum times on the space (σ, t) (ic = 2%, r − rc = 3.5%). [Spectrum chart: t (years) from 0 to 50 against σ (%) from 10 to 90, with the curve of the minimum times overlaid; colour scale from −0.8 to 0.8.]

It is important to observe that Theorem 4.42 and, consequently, Corollary 4.43 are enough to find the minimum time horizon which guarantees in a local context the correct ordering of products with different volatilities, but they do not suffice to state that the minimum times obtained in correspondence of variable values of the volatility (along the entire positive real line) always increase as the volatility increases, which is required by the principle "more volatility, more time" in order to ensure a global correct ordering. In other words, the above technical results provide a local property of correct ordering that does not imply that the set of the minimum times complies with the more general property of monotonicity with respect to the volatility.

4.1.7 The function of the minimum times
In order to determine whether, and how, these results should be integrated to ensure a suitable ordering of the products at the global level, it is first necessary to investigate the behaviour of the minimum time as a function of the volatility.

Definition 4.46. The function of the minimum times is defined as the function that associates any volatility σ > 0 with the corresponding minimum time Tσ, ie

\[
\sigma\mapsto T_\sigma\quad\text{such that}\quad
\left.\frac{\partial P(\sigma,t)}{\partial\sigma}\right|_{t=T_\sigma}=0
\quad\text{for all }\sigma>0
\tag{4.145}
\]


Figure 4.21 shows the function of the minimum times superimposed
on the spectrum chart of the function ∂P(σ, t)/∂σ, in order
to highlight the characterisation of T_σ as the times where the first
partial derivative of P(σ, t) with respect to the volatility is equal to 0.
By looking at the chart, it emerges that the function T_σ starts at a
time strictly greater than 0, reaches a maximum and then declines.
It will be proved that the function of the minimum times always satisfies
these properties, including the fact that, after having reached
its maximum value, it converges asymptotically to 0.
The next theorem studies the behaviour of the function T_σ when
the volatility is very close to zero. The analytical results show that
this limit time exists, is always non-zero and has an interesting
interpretation from a financial point of view.
Theorem 4.47. The following limit representation for the function
of minimum times holds

$$\lim_{\sigma\to 0} T_\sigma = \frac{\ln[NC/S_0]}{r - rc} \qquad (4.146)$$

Proof Since σ is very close to 0, the case $r - rc - \tfrac{1}{2}\sigma^2 > 0$ holds.
Therefore, from Expression 4.140 it follows that

$$t_1 < T_\sigma < t_2 \qquad (4.147)$$

where t_1 and t_2 are the roots of Equation 4.122.
By Corollary 4.41, it follows that

$$\lim_{\sigma\to 0} t_1 = \lim_{\sigma\to 0} t_2 = \frac{\ln[NC/S_0]}{r - rc} \qquad (4.130)$$

Combining the result in Equation 4.147 with Expression 4.130 yields

$$\lim_{\sigma\to 0} T_\sigma = \frac{\ln[NC/S_0]}{r - rc} \qquad (4.146)$$

Definition 4.48. The limit time identified by Equation 4.146 is called
the technical minimum time and it is defined as

$$T_{\min} = \frac{\ln[NC/S_0]}{r - rc} \qquad (4.148)$$

Remark 4.49. The function of the minimum times T_σ can be
extended continuously by defining its value at σ = 0 to be T_min, ie

$$T_0 = T_{\min} \qquad (4.149)$$
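Equation 4.148 turns into a one-line computation (an illustrative sketch, not taken from the book: the values NC = 100, ic = 2 and r − rc = 3.5% merely mirror the settings quoted in the figure captions):

```python
import math

def technical_minimum_time(NC: float, ic: float, r_minus_rc: float) -> float:
    """T_min = ln(NC / S0) / (r - rc), with S0 = NC - ic (Equations 4.2 and 4.148)."""
    S0 = NC - ic
    return math.log(NC / S0) / r_minus_rc

# Issue price NC = 100, initial costs ic = 2, r - rc = 3.5%:
T_min = technical_minimum_time(NC=100.0, ic=2.0, r_minus_rc=0.035)
print(round(T_min, 3))  # 0.577 years: the time at which costs are recovered on average
```

With a small cost load the technical minimum time is a fraction of a year; it grows with ic and shrinks as the net drift r − rc widens.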


Corollary 4.50. The technical minimum time T_min is always positive.

Corollary 4.51. The technical minimum time T_min is the time at
which, on average, all the initial and running costs are recovered, ie

$$\mathbb{E}^{\mathbb{Q}}[S_{T_{\min}}] = NC \qquad (4.150)$$

Proof Recalling that the dynamics of the product's value over time
are modelled through the stochastic differential equation 4.1, it
follows that

$$\mathbb{E}^{\mathbb{Q}}[S_{T_{\min}}] = S_0 \exp\bigl((r - rc)\,T_{\min}\bigr) \qquad (4.151)$$

Combining this equality with Equation 4.148 yields

$$\mathbb{E}^{\mathbb{Q}}[S_{T_{\min}}] = NC \qquad (4.150)$$

Corollary 4.51 shows that the time T_min (obtained as the limit of
the function of the minimum times as volatility vanishes) coincides
with the first hitting time of the average path of {S_t}_{t≥0} as described
in Section 4.1.5, where it was denoted by t_mean. From a financial point
of view it can be observed that only for times greater than T_min is it
guaranteed that the expected return of the product will be strictly
positive for every possible value of the volatility. Intuitively, this
suggests that, as volatility gradually increases, the corresponding
minimum times (which lie on the curve T_σ) should never be shorter
than T_min in order to preserve consistency with the principle "more
volatility, more time".
Unfortunately, the results obtained so far do not support the conclusion
that the curve of minimum times is monotonically
increasing, or even that it never decreases below the value of
T_min. On the contrary, the forthcoming analytical results will clarify
that the curve T_σ is always characterised by the presence of a maximum
and then decreases asymptotically to 0. This clearly implies
that for any point on the curve after the maximum time the above
principle will be violated, hence requiring the adoption of suitable
corrections aimed at guaranteeing a correct ordering of the different
products from a global perspective, as will be discussed in
Section 4.1.8.
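Corollary 4.51 can be checked by simulation (an illustrative sketch with assumed parameters NC = 100, ic = 2, r − rc = 3.5%, σ = 30%): the risk-neutral Monte Carlo average of S_t, evaluated at T_min, lands on the cost-recovery level NC whatever the volatility.

```python
import math
import random

def mc_mean_value(S0, r_minus_rc, sigma, t, n_paths=100_000, seed=7):
    """Monte Carlo estimate of E^Q[S_t] under dS = (r - rc) S dt + sigma S dW."""
    rng = random.Random(seed)
    drift = (r_minus_rc - 0.5 * sigma**2) * t
    vol = sigma * math.sqrt(t)
    total = sum(S0 * math.exp(drift + vol * rng.gauss(0.0, 1.0)) for _ in range(n_paths))
    return total / n_paths

NC, ic, r_minus_rc, sigma = 100.0, 2.0, 0.035, 0.30
S0 = NC - ic
T_min = math.log(NC / S0) / r_minus_rc        # Equation 4.148
estimate = mc_mean_value(S0, r_minus_rc, sigma, T_min)
print(estimate)  # close to NC = 100, as stated by Equation 4.150
```

Changing σ moves the dispersion of S_{T_min} but not its mean, which is the point of the corollary: T_min depends only on the drift and the cost load.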


At this stage the shape of the function of the minimum times
T_σ must be investigated with respect to σ, exploiting some previous
technical results related to the properties of the functions
∂P(σ, t)/∂σ and ∂²P(σ, t)/∂σ∂t.

Lemma 4.52. Let σ > 0. The second-order cross partial derivative
∂²P(σ, t)/∂σ∂t, evaluated at t = T_σ, is always negative, ie

$$\left.\frac{\partial^2 P(\sigma,t)}{\partial\sigma\,\partial t}\right|_{T_\sigma} < 0 \qquad (4.152)$$

Proof If $r - rc - \tfrac{1}{2}\sigma^2 < 0$, from the joint characterisation of the
signs of the quantities ∂P(σ, t)/∂σ and ∂²P(σ, t)/∂σ∂t given in Expression
4.139, and properly arranging for a generic volatility value, it
holds that

$$\frac{\partial P(\sigma,t)}{\partial\sigma} > 0, \quad \frac{\partial^2 P(\sigma,t)}{\partial\sigma\,\partial t} > 0 \qquad \text{for } 0 < t < t_1$$

$$\frac{\partial P(\sigma,t)}{\partial\sigma} > 0, \quad \frac{\partial^2 P(\sigma,t)}{\partial\sigma\,\partial t} < 0 \qquad \text{for } t_1 < t < T_\sigma$$

$$\frac{\partial P(\sigma,t)}{\partial\sigma} < 0, \quad \frac{\partial^2 P(\sigma,t)}{\partial\sigma\,\partial t} < 0 \qquad \text{for } t > T_\sigma$$

which yields the proof directly.
The proof in the complementary case $r - rc - \tfrac{1}{2}\sigma^2 > 0$ can easily
be obtained in a similar way.

Proposition 4.53. The function of the minimum times is differentiable
with respect to σ and its first derivative, denoted by dT_σ/dσ, can
be represented as

$$\frac{\mathrm{d}T_\sigma}{\mathrm{d}\sigma} = -\frac{\left.\partial^2 P(\sigma,t)/\partial\sigma^2\right|_{T_\sigma}}{\left.\partial^2 P(\sigma,t)/\partial\sigma\,\partial t\right|_{T_\sigma}} \qquad (4.153)$$

Proof Let g(σ, t) be the bivariate function defined as

$$g(\sigma,t) = \frac{\partial P(\sigma,t)}{\partial\sigma}$$

Clearly

$$\frac{\partial g(\sigma,t)}{\partial\sigma} = \frac{\partial^2 P(\sigma,t)}{\partial\sigma^2}, \qquad \frac{\partial g(\sigma,t)}{\partial t} = \frac{\partial^2 P(\sigma,t)}{\partial\sigma\,\partial t} \qquad (4.154)$$


Moreover, from Definition 4.46, it follows that

$$g(\sigma, T_\sigma) = 0 \quad \text{for all } \sigma > 0 \qquad (4.155)$$

By Equation 4.152 it follows that

$$\left.\frac{\partial g(\sigma,t)}{\partial t}\right|_{T_\sigma} < 0 \qquad (4.156)$$

By Equations 4.155 and 4.156 it is possible to apply the implicit
function theorem, so that T_σ is differentiable and

$$\frac{\mathrm{d}T_\sigma}{\mathrm{d}\sigma} = -\frac{\left.\partial g(\sigma,t)/\partial\sigma\right|_{T_\sigma}}{\left.\partial g(\sigma,t)/\partial t\right|_{T_\sigma}} \qquad (4.157)$$

By substituting Expression 4.154 into Expression 4.157, it follows
that

$$\frac{\mathrm{d}T_\sigma}{\mathrm{d}\sigma} = -\frac{\left.\partial^2 P(\sigma,t)/\partial\sigma^2\right|_{T_\sigma}}{\left.\partial^2 P(\sigma,t)/\partial\sigma\,\partial t\right|_{T_\sigma}} \qquad (4.153)$$

Proposition 4.54. The following limit representation for the function
of the minimum times holds

$$\lim_{\sigma\to\infty} T_\sigma = 0 \qquad (4.158)$$

Proof Given the limit 4.113, it holds that

$$\lim_{\sigma\to\infty} \frac{\partial P(\sigma,t)}{\partial\sigma} = 0 \qquad (4.113)$$

for every t > 0. This implies that

$$\frac{\partial P(\sigma,t)}{\partial\sigma} \leq 0 \qquad (4.159)$$

for σ large enough and hence, by Corollary 4.43, that

$$T_\sigma \leq t \qquad (4.160)$$

Since t > 0 is arbitrary, it can easily be inferred that

$$\lim_{\sigma\to\infty} T_\sigma = 0 \qquad (4.158)$$

Proposition 4.55. The function of the minimum times admits a
global maximum.


Proof The result follows from the asymptotic limits of T_σ given in Theorem
4.47 and Proposition 4.54, respectively, and from the continuity
of T_σ given by Proposition 4.53.

Notation 4.56. The maximum value identified by the previous
proposition is called the global maximum time and it is denoted
by T_max:

$$T_{\max} = \max_{\sigma\geq 0} T_\sigma \qquad (4.161)$$

The rationale behind the peculiar shape of the function of the minimum
times can be synthesised as follows. Extremely high volatilities
(which are likely to be associated with the case r − rc − ½σ² < 0)
imply that a significant number of trajectories of the process {S_t}_{t≥0}
will never hit the cost-recovery barrier. For a given volatility level
the probability of this event (ie, the percentage of the never-hitting
trajectories) can be calculated explicitly by simply subtracting from
1 the asymptote of the function P(σ, t) as time tends to infinity.3
Conversely, the remaining trajectories will recover the costs in a
very short time, implicitly exhausting the positive influence of the
volatility effect very quickly (at the limit, in an infinitesimal time).
After this shrinking time, what remains is only the negative effect of
an asymptote that decreases as the volatility increases. This means that
when the volatility becomes increasingly large the region of times
where the first partial derivative of P(σ, t) with respect to this variable
is strictly negative gradually expands. At the limit, for σ → ∞,
this region covers the entire space of times, with the consequence
that the only time that satisfies the condition ∂P(σ, t)/∂σ = 0 tends
to 0, as indicated by Equation 4.158.
This phenomenon implies that, when the curve of the minimum
times T_σ approaches zero, even if the partial derivative with
respect to σ is zero, the noise due to the high volatility is so large
that this condition is unable to capture anything but the negative
effect of a decreasing asymptote. Consequently, these times
cannot be considered significant enough to offer useful information.
An additional criterion is needed to discern the
acceptable times on the curve T_σ.
The following technical lemmas are intermediate steps required
to fully understand the behaviour of the function of the minimum
times. The reader may safely skip them and go
straight to Theorem 4.61.


Lemma 4.57. The function of the minimum times T_σ solves the
following ordinary differential equation

$$\frac{\mathrm{d}T_\sigma}{\mathrm{d}\sigma} = F(\sigma, T_\sigma)\,G(\sigma, T_\sigma) \qquad (4.162)$$

with initial condition

$$T_0 = T_{\min} \qquad (4.163)$$

where

$$F(\sigma,t) = -\left(r - rc + \tfrac{1}{2}\sigma^2\right)^2 t^2 + \sigma^2 t + \ln^2\left[\frac{NC}{S_0}\right] \qquad (4.164)$$

and

$$G(\sigma,t) = -\ln\left[\frac{NC}{S_0}\right] \exp\left[\frac{2(r - rc - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right] N\left(\frac{-(r - rc - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right) \frac{4(r - rc)}{\sigma^6 t} \left(\frac{\partial^2 P(\sigma,t)}{\partial\sigma\,\partial t}\right)^{-1} \qquad (4.165)$$

Moreover, the following inequality holds

$$G(\sigma, T_\sigma) > 0 \qquad (4.166)$$

Proof Recall Equation 4.153

$$\frac{\mathrm{d}T_\sigma}{\mathrm{d}\sigma} = -\frac{\left.\partial^2 P(\sigma,t)/\partial\sigma^2\right|_{T_\sigma}}{\left.\partial^2 P(\sigma,t)/\partial\sigma\,\partial t\right|_{T_\sigma}}$$

By evaluating Equation 4.119 at t = T_σ it follows that

$$\left.\frac{\partial^2 P(\sigma,t)}{\partial\sigma^2}\right|_{T_\sigma} = \ln\left[\frac{NC}{S_0}\right] \exp\left[\frac{2(r - rc - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right] \left[ c_1\, N'\left(\frac{-(r - rc - \frac{1}{2}\sigma^2)T_\sigma - \ln[NC/S_0]}{\sigma\sqrt{T_\sigma}}\right) + c_2\, N\left(\frac{-(r - rc - \frac{1}{2}\sigma^2)T_\sigma - \ln[NC/S_0]}{\sigma\sqrt{T_\sigma}}\right) \right] \qquad (4.167)$$


where

$$c_1 = \frac{2\ln[NC/S_0]\bigl(\ln[NC/S_0] - 4(r - rc)T_\sigma\bigr) - 2T_\sigma\bigl((r - rc + \tfrac{1}{2}\sigma^2)^2 T_\sigma + 2\sigma^2\bigr)}{\sigma^5 (\sqrt{T_\sigma})^3} \qquad (4.168)$$

$$c_2 = \frac{4(r - rc)}{\sigma^4}\left(3 + \frac{4(r - rc)\ln[NC/S_0]}{\sigma^2}\right)$$

By Definition 4.46, T_σ satisfies

$$\left.\frac{\partial P(\sigma,t)}{\partial\sigma}\right|_{T_\sigma} = 0 \qquad (4.169)$$

hence, by Equation 4.89 it holds that

$$N'\left(\frac{(r - rc - \frac{1}{2}\sigma^2)T_\sigma - \ln[NC/S_0]}{\sigma\sqrt{T_\sigma}}\right) = \exp\left[\frac{2(r - rc - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right] N\left(\frac{-(r - rc - \frac{1}{2}\sigma^2)T_\sigma - \ln[NC/S_0]}{\sigma\sqrt{T_\sigma}}\right)\frac{2(r - rc)\sqrt{T_\sigma}}{\sigma} \qquad (4.170)$$

Thus, substituting Equation 4.170 into Equation 4.167, it follows that

$$\left.\frac{\partial^2 P(\sigma,t)}{\partial\sigma^2}\right|_{T_\sigma} = \ln\left[\frac{NC}{S_0}\right]\exp\left[\frac{2(r - rc - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right] N\left(\frac{-(r - rc - \frac{1}{2}\sigma^2)T_\sigma - \ln[NC/S_0]}{\sigma\sqrt{T_\sigma}}\right)\left(\frac{2(r - rc)\sqrt{T_\sigma}}{\sigma}\,c_1 + c_2\right) \qquad (4.171)$$

and after a little algebra

$$\left.\frac{\partial^2 P(\sigma,t)}{\partial\sigma^2}\right|_{T_\sigma} = \ln\left[\frac{NC}{S_0}\right]\exp\left[\frac{2(r - rc - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right] N\left(\frac{-(r - rc - \frac{1}{2}\sigma^2)T_\sigma - \ln[NC/S_0]}{\sigma\sqrt{T_\sigma}}\right)\frac{4(r - rc)}{\sigma^6 T_\sigma}\left(-(r - rc + \tfrac{1}{2}\sigma^2)^2 T_\sigma^2 + \sigma^2 T_\sigma + \ln^2\left[\frac{NC}{S_0}\right]\right) \qquad (4.172)$$


Defining the functions F(σ, t) and G(σ, t) as in Equations 4.164
and 4.165, from Equations 4.172 and 4.153 it follows that

$$\frac{\mathrm{d}T_\sigma}{\mathrm{d}\sigma} = F(\sigma, T_\sigma)\,G(\sigma, T_\sigma) \qquad (4.162)$$

The initial condition in Equation 4.163 is determined from Theorem
4.47.
Since the terms

$$\ln\left[\frac{NC}{S_0}\right], \quad \exp\left[\frac{2(r - rc - \frac{1}{2}\sigma^2)\ln[NC/S_0]}{\sigma^2}\right], \quad N\left(\frac{-(r - rc - \frac{1}{2}\sigma^2)t - \ln[NC/S_0]}{\sigma\sqrt{t}}\right) \quad \text{and} \quad \frac{4(r - rc)}{\sigma^6 t}$$

are strictly positive, Lemma 4.52 leads to

$$G(\sigma, T_\sigma) > 0 \qquad (4.166)$$

Lemma 4.58. Let t > 0. The sign of F(σ, t) is determined as follows

$$F(\sigma,t) > 0 \ \ \text{if } t < \tau(\sigma), \qquad F(\sigma,t) = 0 \ \ \text{if } t = \tau(\sigma), \qquad F(\sigma,t) < 0 \ \ \text{if } t > \tau(\sigma) \qquad (4.173)$$

where the auxiliary function τ(σ) is

$$\tau(\sigma) = \frac{\frac{1}{2}\sigma^2 + \sqrt{(\frac{1}{2}\sigma^2)^2 + (r - rc + \frac{1}{2}\sigma^2)^2 \ln^2[NC/S_0]}}{(r - rc + \frac{1}{2}\sigma^2)^2} \qquad (4.174)$$

Proof The result follows by a standard study of the following second-order
equation

$$-\left(r - rc + \frac{\sigma^2}{2}\right)^2 t^2 + \sigma^2 t + \ln^2\left[\frac{NC}{S_0}\right] = 0 \qquad (4.175)$$

which has only one positive root, denoted by τ(σ), given by Equation 4.174.


Lemma 4.59. The first derivative of the function τ(σ) can be written
as

$$\tau'(\sigma) = \frac{(r - rc)(1 - \ln^2[NC/S_0]) - \frac{1}{2}\sigma^2(1 + \ln^2[NC/S_0])}{c_3\,c_4}\,\frac{\sigma\ln^2[NC/S_0]}{\sqrt{(\frac{1}{2}\sigma^2)^2 + (r - rc + \frac{1}{2}\sigma^2)^2\ln^2[NC/S_0]}} \qquad (4.176)$$

where

$$c_3 = (r - rc) + \sqrt{\left(\frac{\sigma^2}{2}\right)^2 + \left(r - rc + \frac{\sigma^2}{2}\right)^2\ln^2\left[\frac{NC}{S_0}\right]}, \qquad c_4 = \sqrt{\left(\frac{\sigma^2}{2}\right)^2 + \left(r - rc + \frac{\sigma^2}{2}\right)^2\ln^2\left[\frac{NC}{S_0}\right]} - \frac{\sigma^2}{2} \qquad (4.177)$$

Moreover, it holds that

$$c_3\,c_4 > 0 \qquad (4.178)$$

Proof The proof is left to the reader.

Lemma 4.60. The shape of τ(σ) is described as follows:

if NC/S0 ≥ e, then

$$\tau'(\sigma) \leq 0 \quad \text{for all } \sigma > 0 \qquad (4.179)$$

if NC/S0 < e, then

$$\tau'(\sigma) > 0 \ \ \text{if } \sigma < \sigma^*, \qquad \tau'(\sigma) = 0 \ \ \text{if } \sigma = \sigma^*, \qquad \tau'(\sigma) < 0 \ \ \text{if } \sigma > \sigma^* \qquad (4.180)$$

where

$$\sigma^* = \sqrt{\frac{2(r - rc)(1 - \ln^2[NC/S_0])}{1 + \ln^2[NC/S_0]}}$$

Proof The result follows from Equation 4.176 and Inequality 4.178.


Theorem 4.61. The global maximum T_max of the function of the
minimum times is unique. Moreover,

if NC/S0 < e, then T_max > T_min and

$$T_\sigma \ \text{is strictly increasing for } \sigma < \sigma_{\max}, \qquad T_\sigma \ \text{is strictly decreasing for } \sigma > \sigma_{\max} \qquad (4.181)$$

where σ_max is the volatility value such that T_{σ_max} = T_max;

if NC/S0 ≥ e, then T_max = T_min and the function T_σ is strictly
decreasing.

Proof Combining Formulas 4.162 and 4.166 with Lemma 4.58 yields

$$\frac{\mathrm{d}T_\sigma}{\mathrm{d}\sigma} > 0 \ \ \text{if } T_\sigma < \tau(\sigma), \qquad \frac{\mathrm{d}T_\sigma}{\mathrm{d}\sigma} = 0 \ \ \text{if } T_\sigma = \tau(\sigma), \qquad \frac{\mathrm{d}T_\sigma}{\mathrm{d}\sigma} < 0 \ \ \text{if } T_\sigma > \tau(\sigma) \qquad (4.182)$$

Moreover, it holds that

$$\tau(0) = T_{\min} \qquad (4.183)$$

By carefully comparing the functions τ(σ) and T_σ the result
follows.

The following analysis will be developed with reference to realistic regimes of costs, ie, such that NC/S0 < e.
Corollary 4.62. The maximum time T_max and the volatility σ_max
are the unique values such that the following two equalities are
simultaneously satisfied

$$\left.\frac{\partial P(\sigma,t)}{\partial\sigma}\right|_{\sigma_{\max},\,T_{\max}} = 0 \qquad (4.184)$$

$$\left.\frac{\partial^2 P(\sigma,t)}{\partial\sigma^2}\right|_{\sigma_{\max},\,T_{\max}} = 0 \qquad (4.185)$$

meaning that no other local maximum of T_σ exists.

Proof The proof is left to the reader.

Figure 4.22 shows the function of the minimum times superimposed
on the spectrum chart of the function ∂²P(σ, t)/∂σ² in order
to highlight that the maximum time T_max is the unique point where
both the first- and second-order partial derivatives of P(σ, t) with
respect to the volatility are 0.


Figure 4.22 Plot of the function ∂²P(σ, t)/∂σ² on the space (σ, t) (ic = 2%, r − rc = 3.5%). [Figure: spectrum chart with σ (%) from 10 to 90 on the horizontal axis, t (years) from 0 to 50 on the vertical axis, and the value of ∂²P(σ, t)/∂σ² shown on a colour scale from −0.8 to 0.8.]

4.1.8 Existence and uniqueness of the minimum time horizon for a global correct ordering

In the previous section the maximum time was rigorously defined.
It is now possible to further characterise all the minimum times that
satisfy the requirement of an increasing monotonicity with respect
to the volatility, thus also ensuring a correct ordering of the different
products in a global context.
Theorem 4.63. The function of the minimum times T_σ is strictly
increasing if and only if

$$0 < \sigma < \sigma_{\max} \qquad (4.186)$$

Moreover, the following inequality holds

$$\left.\frac{\partial^2 P(\sigma,t)}{\partial\sigma^2}\right|_{T_\sigma} > 0 \qquad (4.187)$$

Proof From Theorem 4.61 it follows that the function of the minimum
times T_σ is strictly increasing if and only if the volatility belongs to
the open interval ]0, σ_max[, ie, if and only if

$$0 < \sigma < \sigma_{\max} \qquad (4.186)$$

The strict increasing monotonicity of T_σ when the volatility satisfies
Equation 4.186 implies that in the interval ]0, σ_max[ it holds that

$$\frac{\mathrm{d}T_\sigma}{\mathrm{d}\sigma} > 0 \qquad (4.188)$$


By Equation 4.153 the inequality 4.188 can be rewritten as

$$-\frac{\left.\partial^2 P(\sigma,t)/\partial\sigma^2\right|_{T_\sigma}}{\left.\partial^2 P(\sigma,t)/\partial\sigma\,\partial t\right|_{T_\sigma}} > 0 \qquad (4.189)$$

Eventually, by exploiting the inequality 4.152, the inequality 4.189 is
equivalent to

$$\left.\frac{\partial^2 P(\sigma,t)}{\partial\sigma^2}\right|_{T_\sigma} > 0 \qquad (4.187)$$

From the above theorem it is clear that the strict increasing monotonicity
of the function of the minimum times with respect to the
volatility holds only on the finite time interval ]T_min, T_max[, whose
width depends heavily on the regime of costs and on the interest
rates. Hence, in order to implement a practical method of determining
the minimum recommended investment time horizon, it is
necessary to relax the strict monotonicity assumption, enabling a
weaker relationship between volatility and time, namely a weak
monotonicity of the minimum time horizon with respect to σ, as
provided by the following definition.

Definition 4.64. Let (r − rc) > 0. For any fixed volatility level σ > 0,
the minimum recommended investment time horizon is the time
T*_σ ∈ ]0, ∞[ defined as

$$T^{*}_{\sigma} = \begin{cases} T_\sigma & \text{if } 0 < \sigma < \sigma_{\max} \\ T_{\sigma_{\max}} & \text{otherwise} \end{cases} \qquad (4.190)$$

where T_σ satisfies

$$\left.\frac{\partial P(\sigma,t)}{\partial\sigma}\right|_{\sigma,\,T_\sigma} = 0 \qquad (4.135)$$

According to this definition, if σ_i and σ_j are two different volatility
values such that σ_i < σ_j, then their minimum recommended
investment time horizons satisfy the following property of weak
monotonicity with respect to the volatility

$$T^{*}_{\sigma_i} \leq T^{*}_{\sigma_j} \qquad (4.191)$$

This correction presents several practical advantages: it is unambiguously
defined, it allows all the minimum times associated with
extreme volatilities that experience a loss of significance to be discarded,
and it also guarantees the correct ordering of products
characterised by different volatilities from a global perspective.
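Definition 4.64 reduces to a simple clamping rule in code (an illustrative sketch: `minimum_time`, a callable returning T_σ by solving ∂P(σ, t)/∂σ = 0, is an assumed input, and the hump-shaped `toy_T` below is a made-up stand-in used only to exercise the rule):

```python
def recommended_horizon(sigma, sigma_max, minimum_time):
    """Weakly monotone minimum recommended investment time horizon (Equation 4.190):
    return T_sigma below sigma_max, and freeze the horizon at T_max beyond it."""
    if 0.0 < sigma < sigma_max:
        return minimum_time(sigma)
    return minimum_time(sigma_max)

# Hump-shaped stand-in for the curve of minimum times (maximum at sigma_max = 0.4):
toy_T = lambda s: 1.0 + s - 2.0 * max(0.0, s - 0.4) ** 2

horizons = [recommended_horizon(s, 0.4, toy_T) for s in (0.1, 0.3, 0.5, 0.9)]
# Weak monotonicity (Inequality 4.191) now holds even past the maximum:
assert all(a <= b for a, b in zip(horizons, horizons[1:]))
```

Without the clamp, the declining branch of the toy curve would assign shorter horizons to riskier products, which is precisely the ordering violation the definition removes.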


4.1.9 Switching to a discrete volatility setting

This section refines the technical results of the previous sections in
order to properly determine the minimum recommended investment
time horizon in a discrete volatility setting.

Lemma 4.65. Let {σ_k}_{k=1,...,n} be a discrete vector of volatilities corresponding
to products with different levels of risk. In this setting
let Δσ_k = σ_{k+1} − σ_k and also let T^{Δσ_k}_{σ_k} be the time defined by

$$P(\sigma_{k+1},\, T^{\Delta\sigma_k}_{\sigma_k}) = P(\sigma_k,\, T^{\Delta\sigma_k}_{\sigma_k}) \qquad (4.192)$$

Then, the minimum time T_{σ_k} characterised by Theorem 4.42, ie, by

$$\left.\frac{\partial P(\sigma,t)}{\partial\sigma}\right|_{\sigma_k,\,T_{\sigma_k}} = 0 \qquad (4.193)$$

is approximated by the time T^{Δσ_k}_{σ_k}, ie

$$\lim_{\Delta\sigma_k\to 0} T^{\Delta\sigma_k}_{\sigma_k} = T_{\sigma_k} \qquad (4.194)$$

Proof Let h(σ, t) be the function defined as

$$h(\sigma,t) = \begin{cases} \dfrac{P(\sigma,t) - P(\sigma_k,t)}{\sigma - \sigma_k} & \text{for } \sigma \neq \sigma_k \\[2mm] \dfrac{\partial P(\sigma,t)}{\partial\sigma} & \text{for } \sigma = \sigma_k \end{cases} \qquad (4.195)$$

By construction h(σ, t) is continuous; moreover, from Equation
4.193 it follows that

$$h(\sigma_k,\, T_{\sigma_k}) = 0 \qquad (4.196)$$

Lemma 4.52 implies that

$$\left.\frac{\partial h(\sigma,t)}{\partial t}\right|_{\sigma_k,\,T_{\sigma_k}} \neq 0 \qquad (4.197)$$

By using the theorem of implicit functions, there exists a function
ψ(σ) such that

$$h(\sigma, \psi(\sigma)) = 0, \qquad \psi(\sigma_k) = T_{\sigma_k} \qquad (4.198)$$

In particular, for σ = σ_{k+1} there exists a time ψ(σ_{k+1}) such that

$$h(\sigma_{k+1}, \psi(\sigma_{k+1})) = 0 \qquad (4.199)$$


By using Equation 4.195, it follows that

$$\frac{P(\sigma_{k+1}, \psi(\sigma_{k+1})) - P(\sigma_k, \psi(\sigma_{k+1}))}{\sigma_{k+1} - \sigma_k} = 0 \qquad (4.200)$$

which implies

$$P(\sigma_{k+1}, \psi(\sigma_{k+1})) - P(\sigma_k, \psi(\sigma_{k+1})) = 0 \qquad (4.201)$$

By the continuity property of the function ψ(σ)

$$\lim_{\sigma_{k+1}\to\sigma_k} \psi(\sigma_{k+1}) = \psi(\sigma_k) \qquad (4.202)$$

Setting Δσ_k = σ_{k+1} − σ_k and T^{Δσ_k}_{σ_k} = ψ(σ_{k+1}), and recalling
that by Equation 4.198 ψ(σ_k) = T_{σ_k}, it follows that

$$\lim_{\Delta\sigma_k\to 0} T^{\Delta\sigma_k}_{\sigma_k} = T_{\sigma_k} \qquad (4.194)$$

and

$$P(\sigma_{k+1},\, T^{\Delta\sigma_k}_{\sigma_k}) = P(\sigma_k,\, T^{\Delta\sigma_k}_{\sigma_k}) \qquad (4.192)$$

Lemma 4.66. Let {σ_k}_{k=1,...,n} be a discrete vector of volatilities. In
this setting the condition given by

$$\left.\frac{\partial^2 P(\sigma,t)}{\partial\sigma^2}\right|_{T_\sigma} > 0 \qquad (4.187)$$

which holds for 0 < σ < σ_max, can be approximated by the following
inequality

$$T^{\Delta\sigma_{k+1}}_{\sigma_{k+1}} > T^{\Delta\sigma_k}_{\sigma_k} \qquad (4.203)$$

Proof By hypothesis, Inequality 4.187 holds. By using Theorem 4.63,
the function T_σ is strictly increasing with respect to σ, ie

$$T_{\sigma_{k+1}} > T_{\sigma_k} \quad \text{for } \sigma_{\max} > \sigma_{k+1} > \sigma_k \qquad (4.204)$$


By exploiting the limit relationship 4.194, ie

$$\lim_{\Delta\sigma_k\to 0} T^{\Delta\sigma_k}_{\sigma_k} = T_{\sigma_k}$$

and

$$\lim_{\Delta\sigma_{k+1}\to 0} T^{\Delta\sigma_{k+1}}_{\sigma_{k+1}} = T_{\sigma_{k+1}} \qquad (4.205)$$

it follows that

$$T^{\Delta\sigma_{k+1}}_{\sigma_{k+1}} > T^{\Delta\sigma_k}_{\sigma_k} \qquad (4.203)$$

Definition 4.67. Let {σ_k}_{k=1,...,n} be a discrete vector of volatilities
and also let (r − rc) > 0. For any given volatility σ_k ∈ {σ_k}_{k=1,...,n}
the minimum recommended investment time horizon is the time
T*_{σ_k} ∈ ]0, ∞[ defined as

$$T^{*}_{\sigma_k} = \begin{cases} T^{\Delta\sigma_k}_{\sigma_k} & \text{if } T^{\Delta\sigma_{k+1}}_{\sigma_{k+1}} > T^{\Delta\sigma_k}_{\sigma_k} \\[1mm] \max\{T^{\Delta\sigma_k}_{\sigma_k}\}_{k=1,\dots,n} & \text{if } T^{\Delta\sigma_{k+1}}_{\sigma_{k+1}} < T^{\Delta\sigma_k}_{\sigma_k} \end{cases} \qquad (4.206)$$

where T^{Δσ_k}_{σ_k} satisfies

$$P(\sigma_{k+1},\, T^{\Delta\sigma_k}_{\sigma_k}) = P(\sigma_k,\, T^{\Delta\sigma_k}_{\sigma_k}) \qquad (4.192)$$
These results may be effectively used to obtain a satisfactory
approximation of the curve of minimum times exploiting only the
cumulative probability distribution functions. In fact, if the distance
Δσ_k is small enough, the intersection between two probability distribution
functions can be considered a good estimate of the corresponding
minimum time, and the reiteration of this simple procedure
for different σ_k suffices to obtain a reasonable approximation
of the entire curve of minimum times. In this situation, it must
be emphasised that avoiding the calculation of partial derivatives of
different orders can be a practical advantage when a closed formula
is no longer available and there is subsequently a need to rely on
Monte Carlo simulation and numerical approximations of partial
derivatives.
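The intersection procedure can be sketched as follows (illustrative only: the closed formula used for P(σ, t) is the standard first-passage probability of a geometric Brownian motion to a constant upper barrier, assumed here to coincide with the book's P(σ, t); the parameter values mirror the figure captions):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def hitting_cdf(sigma, t, NC=100.0, ic=2.0, r_minus_rc=0.035):
    """First-passage probability of GBM to the cost-recovery barrier NC by time t
    (standard reflection formula for an upper barrier)."""
    S0 = NC - ic
    b = math.log(NC / S0)
    mu = r_minus_rc - 0.5 * sigma**2
    sq = sigma * math.sqrt(t)
    return norm_cdf((mu * t - b) / sq) + math.exp(2.0 * mu * b / sigma**2) * norm_cdf((-mu * t - b) / sq)

def intersection_time(sig_lo, sig_hi, t_max=50.0, steps=100_000):
    """Smallest t where P(sig_hi, t) - P(sig_lo, t) changes sign: the discrete
    approximation of the minimum time via Equation 4.192, found by a grid scan."""
    prev = hitting_cdf(sig_hi, 1e-6) - hitting_cdf(sig_lo, 1e-6)
    for i in range(1, steps + 1):
        t = i * t_max / steps
        diff = hitting_cdf(sig_hi, t) - hitting_cdf(sig_lo, t)
        if diff == 0.0 or diff * prev < 0.0:
            return t
        prev = diff
    return None

# Approximate the minimum time at sigma = 20% through the intersection with sigma = 21%:
t_star = intersection_time(0.20, 0.21)
print(t_star)  # a few years for these parameters
```

At short horizons the higher-volatility product has the larger hitting probability; the first crossing of the two curves is therefore the natural discrete proxy for the time at which ∂P/∂σ changes sign.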

4.1.10 Extensions to more general dynamics for the process {S_t}_{t≥0}

This section investigates the possibility of extending the methodology
for the determination of the minimum recommended investment
time horizon under more general assumptions about the model
This section investigates the possibility of extending the method-
ology for the determination of the minimum recommended invest-
ment time horizon under more general assumptions about the model


describing the stochastic dynamics of the product's value over time.


The aim is to explore how such a methodology could be adapted in
order to fit modelling assumptions that better reflect the reality of
the markets and the specific features of the financial engineering of
non-equity products, such as in the case of benchmark structures.
What happens when the assumption of a constant interest rate
is removed will be analysed by considering first the case of a time-
varying deterministic drift, and then the case of stochastic drift. The
last part of this section is devoted to studying whether or not a
stochastic volatility setting could be problematic, with respect to the
general methodology developed in the previous sections.
Intuitively, in exploring possible modelling extensions regarding
the drift of the stochastic process {S_t}_{t≥0}, it must be taken into
account that the entire methodology only works if a specific condition
on the drift, namely its positivity, is satisfied. Hence, it is
quite natural to look at how this condition may be revised or modified
when working with a more general modelling of the interest
rate term structure which, as is well known, directly affects the trend
of the product's value under the principle of risk neutrality.
The following definition introduces the case of a time-varying
deterministic drift arising from the adoption of a deterministic term
structure of the interest rates.
Definition 4.68. Under the risk-neutral measure Q the dynamics
over time of a non-equity product with issue price equal to NC are
denoted by the stochastic process {S_t}_{t≥0} and are described by the
following stochastic differential equation

$$\mathrm{d}S_t = (r_t - rc)S_t\,\mathrm{d}t + \sigma S_t\,\mathrm{d}W_t \qquad (4.207)$$

whose initial condition S_0 is defined as

$$S_0 = NC - ic \qquad (4.2)$$

and where

ic > 0 are the initial costs charged,
r_t is the time-varying deterministic risk-free rate,
rc denotes the constant running costs taken on a continuous-time basis,
σ > 0 is the volatility of the product,
{W_t}_{t≥0} is a standard Brownian motion under Q.


Remark 4.69. The solution of the stochastic differential equation
4.207 is

$$S_t = S_0 \exp\left(\int_0^t \left(r_s - rc - \tfrac{1}{2}\sigma^2\right)\mathrm{d}s + \sigma W_t\right) \qquad (4.208)$$

In this framework, the positive drift condition that is necessary
to qualify the existence and uniqueness of the minimum recommended
investment time horizon consistent with a correct ordering
of different products has to be evaluated with respect to the integral
of the instantaneous drift, ie

$$\int_0^t (r_s - rc)\,\mathrm{d}s$$

Strictly speaking, apart from the possible dynamics of the interest
rate term structure in a short-medium time interval, a positive
steady state for the curve when t becomes very large is needed in
order to completely satisfy the positive drift condition, ie, in formal
terms

$$\int_0^\infty (r_s - rc)\,\mathrm{d}s \geq 0$$

By analysing what happens at intermediate times, it is straightforward
to restate the definition of technical minimum time under
the hypothesis of a time-varying deterministic interest rate curve.

Proposition 4.70. Let

$$\int_0^\infty (r_s - rc)\,\mathrm{d}s \geq 0$$

The technical minimum time T_min ∈ ]0, ∞[ satisfies the following
condition

$$\int_0^{T_{\min}} (r_s - rc)\,\mathrm{d}s = \ln\left[\frac{NC}{S_0}\right] \qquad (4.209)$$

Proof Over the time interval [0, T_min] Equation 4.208 gives

$$S_{T_{\min}} = S_0 \exp\left(\int_0^{T_{\min}} \left(r_s - rc - \tfrac{1}{2}\sigma^2\right)\mathrm{d}s + \sigma (W_{T_{\min}} - W_0)\right) \qquad (4.210)$$

As W_0 = 0, and recalling from Corollary 4.51 that the technical
minimum time satisfies the equality

$$\mathbb{E}^{\mathbb{Q}}[S_{T_{\min}}] = NC \qquad (4.150)$$


it follows that

$$\mathbb{E}^{\mathbb{Q}}[S_{T_{\min}}] = \mathbb{E}^{\mathbb{Q}}\left[S_0 \exp\left(\int_0^{T_{\min}}\left(r_s - rc - \tfrac{1}{2}\sigma^2\right)\mathrm{d}s + \sigma W_{T_{\min}}\right)\right] = S_0 \exp\left(\int_0^{T_{\min}}\left(r_s - rc - \tfrac{1}{2}\sigma^2\right)\mathrm{d}s\right)\mathbb{E}^{\mathbb{Q}}[\mathrm{e}^{\sigma W_{T_{\min}}}]$$

Recalling that, for a normal random variable X ∼ N(m, s²), the
corresponding moment generating function is

$$\varphi(t) = \mathbb{E}(\mathrm{e}^{tX}) = \exp\left(\tfrac{1}{2} s^2 t^2 + m t\right)$$

and that W_{T_min} ∼ N(0, T_min), it follows that

$$\mathbb{E}^{\mathbb{Q}}[S_{T_{\min}}] = S_0 \exp\left(\int_0^{T_{\min}}(r_s - rc)\,\mathrm{d}s\right)\exp\left(-\tfrac{1}{2}\sigma^2 T_{\min}\right)\exp\left(\tfrac{1}{2}\sigma^2 T_{\min}\right)$$

and hence

$$\mathbb{E}^{\mathbb{Q}}[S_{T_{\min}}] = S_0 \exp\left(\int_0^{T_{\min}}(r_s - rc)\,\mathrm{d}s\right) \qquad (4.211)$$

Substituting Expression 4.211 into Expression 4.150 yields

$$S_0 \exp\left(\int_0^{T_{\min}}(r_s - rc)\,\mathrm{d}s\right) = NC \qquad (4.212)$$

which leads to

$$\int_0^{T_{\min}}(r_s - rc)\,\mathrm{d}s = \ln\left[\frac{NC}{S_0}\right] \qquad (4.209)$$

It is worth noting that if there are no initial costs, so that S_0 = NC,
then Equation 4.209 simplifies to

$$\int_0^{T_{\min}}(r_s - rc)\,\mathrm{d}s = 0 \qquad (4.213)$$

This situation is represented in detail in Figure 4.23.
The availability of a closed formula for the cumulative probability
distribution of the first hitting times strongly depends on the specific
assumptions made about the deterministic behaviour of the interest
rate curve; having defined a function to describe the deterministic
process {r_t}_{t≥0}, it becomes possible to work out the change of
measure needed to take care of the time-varying deterministic drift.
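Proposition 4.70 suggests a direct numerical recipe (a sketch under assumed inputs: the upward-sloping curve r_t = 1% + 0.2%·t and the cost figures are hypothetical): integrate the instantaneous drift and bisect on Equation 4.209.

```python
import math

def cumulated_drift(T, rate, rc, n=2_000):
    """Trapezoidal approximation of the integral of (r_s - rc) over [0, T]."""
    h = T / n
    total = 0.5 * ((rate(0.0) - rc) + (rate(T) - rc))
    total += sum(rate(i * h) - rc for i in range(1, n))
    return total * h

def technical_minimum_time(NC, ic, rate, rc, t_hi=100.0):
    """Solve int_0^T (r_s - rc) ds = ln(NC / S0) for T by bisection (Equation 4.209)."""
    target = math.log(NC / (NC - ic))
    lo, hi = 0.0, t_hi
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if cumulated_drift(mid, rate, rc) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical deterministic curve r_t = 1% + 0.2% * t against running costs rc = 1.5%:
rate = lambda t: 0.01 + 0.002 * t
T_min = technical_minimum_time(NC=100.0, ic=2.0, rate=rate, rc=0.015)
print(round(T_min, 3))  # about 7.643 years: the cumulated drift is negative early on
```

Because the drift is negative for small t and positive later (as in Figure 4.23), the cumulated drift dips below zero before climbing through the cost threshold, which is why T_min is far longer than in the constant-rate case.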


Figure 4.23 The technical minimum time under a time-varying deterministic interest rate curve (ic = 0%). [Figure: the curve r_t (%) is plotted for t from 0 to 15 years against the constant level rc, with the regions (r_t − rc) < 0 and (r_t − rc) > 0 marked and the time T_min indicated on the horizontal axis.]

From a general point of view it is reasonable to assume that, with
the exception of a minority of cases, a closed formula in continuous
time for the cumulative probability distribution of the first hitting
times cannot be obtained. A more feasible alternative is to rely on
Monte Carlo simulations in a discrete-time setting. Analogous considerations
hold when the deterministic variability of the drift arises
from time-dependent running costs {rc_t}_{t≥0}.
Regarding the modelling of the drift, more sophisticated approaches
invariably include a stochastic representation of the term
structure of interest rates. A very common framework that embeds
stochastic interest rates is a direct extension of Equation 4.207 to a
system of stochastic differential equations like the following

$$\mathrm{d}S_t = (r_t - rc)S_t\,\mathrm{d}t + \sigma S_t\,\mathrm{d}W_{1,t} \qquad (4.214)$$

$$\mathrm{d}r_t = a(t, r_t)\,\mathrm{d}t + b(t, r_t)\,\mathrm{d}W_{2,t} \qquad (4.215)$$

where {W_{1,t}}_{t≥0} and {W_{2,t}}_{t≥0} are two standard Brownian motions
linked by some correlation coefficient ρ (ie, dW_{1,t} dW_{2,t} = ρ dt).
Equation 4.215 governs the dynamics of the short rate; different
choices for the functionals a(t, r_t), b(t, r_t) can be made to obtain
the most common models that belong to the Heath-Jarrow-Morton
(Heath et al 1992) family (eg, Vasicek, Ho-Lee, Hull-White, Cox-Ingersoll-Ross).
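The simulative route can be sketched as follows (illustrative only: the Vasicek specification a(t, r) = κ(θ − r), b(t, r) = η with κ = 0.5, θ = 3%, η = 0.5% and zero correlation is an assumed choice, as are all parameter values):

```python
import math
import random

def first_passage_cdf(years=30, steps_per_year=100, n_paths=1_000,
                      NC=100.0, ic=2.0, rc=0.005, sigma=0.20,
                      r0=0.03, kappa=0.5, theta=0.03, eta=0.005, seed=1):
    """Monte Carlo estimate, on a yearly grid, of the probability that the product's
    value hits the cost-recovery barrier NC by time t, under the system 4.214-4.215
    with a Vasicek short rate (Euler scheme, zero correlation assumed)."""
    rng = random.Random(seed)
    dt = 1.0 / steps_per_year
    sq_dt = math.sqrt(dt)
    hits = [0] * (years + 1)
    for _ in range(n_paths):
        s, r = NC - ic, r0
        for i in range(1, years * steps_per_year + 1):
            s *= math.exp((r - rc - 0.5 * sigma**2) * dt + sigma * sq_dt * rng.gauss(0, 1))
            r += kappa * (theta - r) * dt + eta * sq_dt * rng.gauss(0, 1)
            if s >= NC:                        # cost recovery: record the hit year
                for y in range(math.ceil(i * dt), years + 1):
                    hits[y] += 1
                break
    return [h / n_paths for h in hits]

cdf = first_passage_cdf()
assert all(a <= b for a, b in zip(cdf, cdf[1:]))  # the estimated CDF is non-decreasing
print(cdf[1], cdf[-1])  # most paths recover the small initial costs early
```

With low rate volatility the realised drift stays positive on essentially all paths, so the methodology applies; raising η is exactly the regime in which the caveats discussed below start to bite.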


Even without closed formulas, it can be quite easy to
calculate the trajectories of the product and determine their first-passage
times for the barrier of cost recovery. However, there exist
some serious problems that undermine, in specific cases, the validity
of the methodology for obtaining the minimum recommended time
horizon.
These problems are basically related to the condition that requires
a positive drift in order to guarantee the existence and uniqueness of
the minimum time horizon; when the drift is governed by a stochastic
process, each trajectory shows a realised drift that is a random
variable heavily dependent on the initial level of the short rate, on the
interest rate volatility embedded in the model and on the running
costs applied. The minimum recommended time horizon may be
determined only when the assumptions and the parameters behind
the stochastic interest rate model guarantee that a time consistent
with the positive drift condition given in the extended form (see
Proposition 4.70) will always exist, ie

$$\int_0^\infty (r_s - rc)\,\mathrm{d}s \geq 0$$

for all trajectories of the product's value.


In general terms, when interest rate volatility is low, the require-
ment above will most likely be satisfied. On the other hand, when the
inherent volatility of the interest rates is high, the positive drift con-
dition could be satisfied only by excluding the trajectories that are
inconsistent with this condition from the analysis, which is clearly
unacceptable. In fact, any attempt to consider all the trajectories
including those with a negative drift would lead to a violation of the
founding principles of the methodology. Hence, the results of such
an experiment would be completely distorted, leading to the con-
tradictory result that more volatility implies a shorter time horizon.
Therefore, in these cases is not possible to determine the minimum
recommended time horizon in accordance with the methodology
described in previous sections, and the only feasible solution is to
return to the case of a time-varying deterministic drift described in
the extension of Definition 4.68 that eventually represents just the
average of the stochastic solution. In this regard it is also worth not-
ing that, from a substantive point of view, neglecting the random-
ness of the interest rates curve has no adverse consequences. This
is because the entire methodology underlying the calculation of the


minimum time horizon works on the space of the times and not on
the space of the returns, the former being characterised by the need
for a diverse and somewhat poorer information set on the stochastic
behaviour of the product's value. In fact, as will be discussed in Section
4.1.11, on the space of the times each trajectory of the product
is considered only if and until it reaches the barrier corresponding
to the cost recovery (whereas no such break exists when working
on the space of the returns). In this context, it should also be noted
that when interest rates are very volatile, the trajectories with positive
drift disappear very rapidly from the analysis because they tend
to reach the barrier very quickly; therefore, after a certain time the
only surviving trajectories will be those featuring a negative drift,
meaning that these trajectories would be automatically and unfairly
overweighted and they would lead to an average drift significantly
lower than that consistent with the principle of risk neutrality.
The analysis described above may require a further specialisation
in relation to the financial structure of the product to be modelled
and, sometimes, also in relation to the specific asset-management
style adopted. This happens, for instance, in the case of risk-target
products that are committed to a given volatility range but where
the asset manager decides to take a specific skew with respect to
that range, or in the case of benchmark products. In similar situations,
the time evolution of the product's value can be described
by stochastic volatility models calibrated on the product's volatility
term structure, such as the implied volatility of options written on
it and expiring at increasing maturities.4
From this perspective, a possible rearrangement of Equation 4.207
into a system of stochastic differential equations may be the follow-
ing

dSt = (rt − rc)St dt + σt St dW1,t   (4.216)

dσt = α(t, σt) dt + β(t, σt) dW2,t   (4.217)

where {W1,t}t≥0 and {W2,t}t≥0 are two standard Brownian motions linked
by some correlation coefficient ρ (ie, dW1,t dW2,t = ρ dt).
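A system of this kind can be simulated with a minimal Euler–Maruyama scheme along the following lines; the mean-reverting drift α(t, σt) = κ(θ − σt) and diffusion β(t, σt) = ξσt are illustrative placeholders (the functions are left generic above), and the short rate is held constant for simplicity.

```python
import numpy as np

def simulate_stochvol(S0, sigma0, r, rc, kappa, theta, xi, rho,
                      T, n_steps, n_paths, seed=0):
    """Euler-Maruyama scheme for the system (4.216)-(4.217).
    alpha(t, sigma) = kappa*(theta - sigma) and beta(t, sigma) = xi*sigma
    are illustrative choices; the short rate is held constant."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, float(S0))
    sig = np.full(n_paths, float(sigma0))
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        # correlated innovations: dW1 dW2 = rho dt
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        S *= 1.0 + (r - rc) * dt + sig * np.sqrt(dt) * z1
        sig += kappa * (theta - sig) * dt + xi * sig * np.sqrt(dt) * z2
        sig = np.maximum(sig, 0.0)       # keep the volatility non-negative
    return S, sig

S_T, sig_T = simulate_stochvol(S0=100.0, sigma0=0.05, r=0.04, rc=0.005,
                               kappa=2.0, theta=0.05, xi=0.3, rho=-0.5,
                               T=1.0, n_steps=252, n_paths=50_000)
print(S_T.mean())   # close to 100*exp(r - rc), as risk neutrality requires
```

Since the more general assumptions affect only the diffusive term, the average growth of the simulated value stays pinned to the risk-neutral drift r − rc.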
In this framework, since the more general assumptions affect only
the diffusive term of Equation 4.216, all the technical conditions
related to the existence and uniqueness of the minimum time hori-
zon continue to hold. Obviously, a closed formula for the cumulative
probability distribution of the first hitting times may not exist, but a
simulation-based approach in a discrete-time environment can always be
implemented for the trajectories of the product's value in order to
obtain a minimum recommended time horizon consistent with the
principles described in Sections 4.1.6 and 4.1.8 and, hence, with a
correct ordering of products with different volatility structures.

4.1.11 Technical remarks


When more sophisticated hypotheses are made on the dynamics of
the non-equity product, it is likely that a closed analytical formula
for the cumulative probability distribution of the first-passage times
for the cost-recovery barrier cannot be explicitly derived; in these
cases it is necessary to rely on Monte Carlo simulations to obtain the
trajectories of the product and numerically calculate the cumulative
probability distribution.
The choice of the discretisation time step for the Monte Carlo sim-
ulation is relevant, since it directly influences the occurrences of the
cost-recovery event over time. Intuitively, with a rough mesh (eg,
a monthly one), the event of hitting the barrier can be registered at
most every month, losing all the information on the process {St}t≥0
in the meantime, while in continuous time or with a finer discreti-
sation grid, typically daily, the hitting events are registered with a
higher frequency. This implies that, with larger discretisation time
steps, the first-passage times cumulate at a slower pace. Figure 4.24
summarises what happens to the shape of the cumulative distribu-
tion function for a fixed set of parameters by simply changing the
discretisation time step of the Monte Carlo simulation.
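The effect of the monitoring grid can be reproduced with a simple GBM sketch using the parameters of the Figure 4.24 caption; the same random draws are reused for both grids (same seed), so the monthly hit set is by construction a subset of the daily one.

```python
import numpy as np

def hit_fraction(monitor_every, ic=0.02, drift=0.035, sigma=0.0515,
                 T=20.0, spy=252, n_paths=20_000, seed=1):
    """Fraction of GBM paths that have recovered the cost ic by time T
    when the barrier is checked only every `monitor_every` daily steps.
    Parameters follow the caption of Figure 4.24; the plain-GBM set-up
    is a simplified sketch."""
    rng = np.random.default_rng(seed)
    n_steps, dt = int(T * spy), 1.0 / spy
    barrier = -np.log(1.0 - ic)          # log-distance to cost recovery
    x = np.zeros(n_paths)                # log(S_t / S_0)
    hit = np.zeros(n_paths, dtype=bool)
    for step in range(1, n_steps + 1):
        x += (drift - 0.5 * sigma**2) * dt \
            + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        if step % monitor_every == 0:    # barrier checked only on the grid
            hit |= x >= barrier
    return hit.mean()

daily, monthly = hit_fraction(1), hit_fraction(21)
print(daily, monthly)   # the coarser grid registers fewer recoveries
```

With identical underlying paths, every monthly hit is also a daily hit, which makes the slower cumulation of the coarse grid visible directly.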
Clearly, these differences have a significant impact on the deter-
mination of the minimum recommended investment time horizon,
since the discretisation step will also influence the behaviour of
the first partial derivative of the cumulative probability distribu-
tion function with respect to the volatility. To better understand this
phenomenon, in Figure 4.25 the cumulative probability distribution
is displayed on the space (σ, Q) for a given time t = t̄. In this space
the first partial derivative with respect to the volatility has an intu-
itive representation in terms of the slope of the plotted curve; at the
same time, using this representation it is possible to avoid the direct
calculation of the first partial derivative that, with a large discreti-
sation mesh, shows, for numerical reasons, an irregular behaviour
that is not easy to read.


Figure 4.24 Plot of the function Q(τS,NC ≤ t) with respect to the time t
with different discretisation time steps (ic = 2%, r − rc = 3.5%,
σ = 5.15%). [Curves shown for daily, weekly and monthly discretisation
steps; Q plotted against t (years) from 0 to 20.]

Figure 4.25 Plot of the function Q(τS,NC ≤ t̄) with respect to the
volatility σ with different discretisation time steps (ic = 2%,
r − rc = 3.5%, t̄ = 3 years). [Curves shown for daily, weekly and
monthly discretisation steps; Q plotted against σ (%).]

From Figure 4.25 it emerges that increasing the discretisation step
tends to reduce the magnitude of the values taken by the func-
tion Q[τS,NC ≤ t̄] for any σ, and at the same time implies that
∂Q[τS,NC ≤ t̄]/∂σ will be zero in correspondence with progressively
larger values of σ. Reversing the perspective and considering the
time as a free variable and the volatility fixed at a level σ = σ̄, this
means that as the discretisation step increases the minimum time
horizon decreases, ie, the condition

∂Q[τS,NC ≤ t]/∂σ |σ̄,T = 0

occurs for reducing values of T.


The result of this analysis is therefore something unexpected and
counter-intuitive: the use of a coarse time grid (like a monthly dis-
cretisation step) that ignores most of the information on the stochas-
tic process of the product's value, gives minimum recommended
time horizons that are shorter with respect to denser discretisation
steps. In the context of a practical implementation it is suggested that
the time step of the simulations be tuned to a level fine enough to cap-
ture the real features of the product to be studied; for example, if the
value of the product is published weekly on the market, implement-
ing at least a weekly discretisation time step becomes mandatory in
order to properly model the investor's opportunities for an effective
cost recovery. However, in the context of a coherent implementation
of all of the three pillars of the risk-based approach, it appears rea-
sonable to extend the standard assumption of a daily discretisation
grid, used to calculate the probabilistic scenarios and the degree of
risk, to the determination of the minimum recommended investment
time horizon.
The last part of this section offers some practical advice for imple-
menting the procedure for determining the minimum recommended
time horizon in a discrete-time–discrete-volatility environment.
From the results of Section 4.1.9, it is known that a satisfactory
approximation of the condition of finding the time at which the first
partial derivative with respect to the volatility is zero is given by
the intersection point between two cumulative probability distribu-
tions with slightly different volatility levels. If this distance, namely
the Δσ, is small enough and shrinking to 0, it is possible to obtain
with a high degree of accuracy the minimum time as described in
Expression 4.192.
Figure 4.26 gives an intuitive representation of the overall proce-
dure. The red line represents the product with a fixed target volatil-
ity, namely σ̄ = 1.6%, the green line is representative of a fictitious
product with a level of volatility slightly inferior, ie, σ̄ − Δσ = 1%, while
the blue line is representative of a fictitious product with a level of
volatility slightly greater, ie, σ̄ + Δσ = 2.2%. The level of approximation


Figure 4.26 Procedure to determine the minimum time horizon in a
discrete-time–discrete-volatility setting (ic = 2%, r − rc = 3.5%).
[Cumulative distribution curves for σ̄ − Δσ, σ̄ and σ̄ + Δσ plotted
against t (years); the intersections of adjacent curves identify the
minimum times T*. Daily discretisation step.]

is equal to Δσ = 0.6%. The grey circle shows the exact minimum time
calculated when the volatility is supposed continuous, ie, Δσ → 0.
In order to calculate the intersection points accurately, smooth
probability distribution curves are needed. In this approach, the
number of Monte Carlo simulations required to successfully imple-
ment the procedure is quite high, in the order of a million; this may
seem a significant computational burden, but bearing in mind that
the probability density of the first hitting times is often highly asym-
metrical (because the majority of the trajectories hit the barrier quite
early), the overall computational effort is not so cumbersome.
It is important to stress that, since the introduction of the two
additional curves associated with the volatility levels σ̄ − Δσ and
σ̄ + Δσ is related only to a purely technical procedure, there is no
need to simulate a new stand-alone product from scratch, but rather
it suffices to use the same random vector of innovations computed
for the simulated trajectories of the original product. In this way, the
numerical noise is reduced to minimum levels and the calculation
of the intersection points becomes feasible in every condition.
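The overall procedure can be sketched as follows, with σ̄ = 1.6% and Δσ = 0.6% as in Figure 4.26, under a simplified plain-GBM dynamic and with far fewer paths than the million suggested above; the same innovations are shared across the volatility levels (common random numbers), and the crossing of the two flanking distributions approximates the time at which the partial derivative with respect to the volatility vanishes.

```python
import numpy as np

def first_hit_times(sigmas, ic=0.02, drift=0.035, T=3.0, spy=252,
                    n_paths=100_000, seed=7):
    """First times (in years) at which GBM log-paths recover the cost ic
    on a daily grid. The same innovations are reused for every
    volatility level (common random numbers), as suggested above."""
    rng = np.random.default_rng(seed)
    n_steps, dt = int(T * spy), 1.0 / spy
    barrier = -np.log(1.0 - ic)
    sig = np.asarray(sigmas)[:, None]
    x = np.zeros((len(sigmas), n_paths))
    tau = np.full((len(sigmas), n_paths), np.inf)
    for step in range(1, n_steps + 1):
        z = rng.standard_normal(n_paths)          # shared across levels
        x += (drift - 0.5 * sig**2) * dt + sig * np.sqrt(dt) * z
        newly_hit = (x >= barrier) & np.isinf(tau)
        tau[newly_hit] = step * dt
    return tau

sigma_bar, d_sigma = 0.016, 0.006        # levels of Figure 4.26
tau = first_hit_times([sigma_bar - d_sigma, sigma_bar + d_sigma])
grid = np.arange(1, int(3.0 * 252) + 1) / 252
Q_lo = np.searchsorted(np.sort(tau[0]), grid, side="right") / tau.shape[1]
Q_hi = np.searchsorted(np.sort(tau[1]), grid, side="right") / tau.shape[1]
cross = int(np.argmax(Q_lo > Q_hi))      # first swap of the two curves
print(grid[cross])                       # approximate minimum horizon
```

Early on the higher-volatility curve dominates; the first time at which the lower-volatility curve overtakes it is the numerical stand-in for the zero of the partial derivative. (With this toy GBM parametrisation the crossing need not match the value shown in Figure 4.26.)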
A final technical issue is related to the choice of the proper width
for the interval Δσ. Of course, there is not a unique Δσ that can be
considered optimal for any possible product; in fact, on the one hand
a very small Δσ improves the theoretical accuracy of the numerical
approximation, but, on the other hand, with a Δσ that is too small
the results become heavily influenced by the numerical noise inher-
ent in the Monte Carlo approach. As a rule of thumb, a small Δσ
(in the order of 0.5%) may be chosen when the target volatility level
is reasonably low (eg, 5%), and higher values of Δσ may be chosen
for products characterised by higher volatility. Intuitively, when the
volatility is significant, generally the cumulative probability distri-
butions are nearly flat shortly after inception; in these conditions, if
the Δσ is not large enough, it becomes very difficult to have a stable
(ie, not dependent on the random vector of innovations) estimate of
the time that nullifies the first partial derivative with respect to the
volatility.

4.2 THE RECOMMENDED TIME HORIZON FOR
RETURN-TARGET PRODUCTS
Return-target products, as seen in Chapter 2, are in all respects con-
tingent claims whose payoff structures are linked (often in a non-
linear way) to underlying assets or reference values and work over
a specific time horizon, which is therefore implicit in the engineering
of the product. Regardless of the presence of liquidity or liquidabil-
ity provisions (as the secondary markets may be inefficient or simply
non-existent), the fair value of these products can always be calcu-
lated by taking as final time of reference their implicit time horizon
and using mark-to-model procedures to simulate the trajectories of
the product's value until the expiry of this implicit time horizon.
This clearly indicates that all the valuable information related to the
structure of the product is unveiled correctly only at this precise time
horizon, and any other choice, if possible, may induce incorrect or
misleading representations.5
The identification of the implicit time horizon is rather simple
when the financial structure of the return-target product involved is
not overly sophisticated, but it requires a careful analysis when the
building scheme of the product is particularly complex. An imme-
diate identification may be performed for plain-vanilla bonds and
derivative structures featuring European characteristics that allow
the contractual exercise only at a predetermined date in the future:
in these cases the maturity of the product is evidently the implicit
time horizon of the financial investment. The problem connected
with structured bonds that join together the bond-like component
with European derivatives of different weights is solved by looking
at which payoff stochastically dominates the others (typically, the
one of the bond-like component), and then by associating it with the
recommended investment time horizon of the product.
More complex considerations arise when trying to assess callable
structures that contractually allow exercise before maturity. In fact,
the implicit time used for pricing and hedging purposes that is rep-
resented by the expiry date of the contingent claim is flanked by a
possible optimal time connected with the eventual early exercise.
Obviously, this time is a path-dependent random variable that theo-
retically guarantees to the investor the maximisation of their return,
given a particular realisation of the variables underlying the prod-
uct's payoff. Unfortunately, given the stochastic nature of this opti-
mal time, its disclosure to the average investor in understandable
terms would be quite difficult because these structures are also often
embedded in more complex products characterised by components
of different origins.
However, since by construction callable products entail the possi-
bility of an early exit, they can be somewhat assimilated to products
endowed with liquidity features. Due to such liquidity features, a
minimum time horizon can be determined for these products by
applying the logic of the cost recovery shown in the previous sec-
tions, even if suitably modified to take into full account the different
types of financial structure (now return-target).
The next two sections address the issue of determining the rec-
ommended investment time horizon for return-target products in
relation to whether they are either illiquid or liquid, or they present
forms of liquidability enhancements or callable structures.

4.2.1 Illiquid products


The investor is not allowed to exit from illiquid products before the
expiry date, at least not without penalising economic conditions. In
fact, if they want to abandon the financial investment in advance
they will often get a price that is far from a fair value; in these cases
it is reasonable to assume that it is completely impossible to liquidate
the product.
This means that the cost-recovery event may happen only at
the unique time that is allowed contractually. Hence, the cumula-
tive probability distribution function of the first hitting times will
Figure 4.27 Plot of the function Q(τS,NC ≤ t) with respect to the time t
for an illiquid return-target product with implicit time horizon of five years
(ic = 2%). [Q plotted against t (years); the distribution is a single jump
at the five-year maturity.]

assume the degenerate shape of the zero-volatility product shown
in Figure 4.27, which represents the probability that the value of
the product will be above the issue price NC at maturity. In other
words, the illiquidity of the return-target product naturally implies
the coincidence of the implicit time horizon with the minimum one.
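For a GBM placeholder of the product's value, the height of that single jump has a simple closed form; only ic = 2% comes from Figure 4.27, while the drift and volatility below are assumptions for the sketch.

```python
import numpy as np
from math import erf, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def recovery_probability(ic=0.02, drift=0.035, sigma=0.05, T=5.0):
    """Height of the single jump of the degenerate first-hit CDF:
    P(S_T >= NC) under GBM with S_0 = NC*(1 - ic). Only ic = 2% comes
    from Figure 4.27; drift and sigma are illustrative assumptions."""
    nu = drift - 0.5 * sigma**2
    return norm_cdf((nu * T + log(1.0 - ic)) / (sigma * sqrt(T)))

p = recovery_probability()

# Monte Carlo cross-check: terminal log-return against the log-barrier.
rng = np.random.default_rng(3)
nu, sigma, T = 0.035 - 0.5 * 0.05**2, 0.05, 5.0
log_ret = nu * T + sigma * sqrt(T) * rng.standard_normal(200_000)
p_mc = (log_ret >= -np.log(1.0 - 0.02)).mean()
print(p, p_mc)
```

Since exit is possible only at maturity, the entire first-hit distribution collapses onto this one number at t = 5, which is exactly the degenerate shape of Figure 4.27.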

4.2.2 Liquidity and liquidability


The presence of liquid and efficient secondary markets gives the
investor the chance of a fair exit from the investment at a quoted price
driven by the price discovery process. Obviously, liquidating the
investment at the fair price does not necessarily imply the investor
obtains at least the price paid to buy the product, since market condi-
tions may well reflect a value considerably below the cost-recovery
barrier. Accordingly, the minimum recommended investment time
horizon would naturally supplement the information conveyed by
the probabilistic scenarios at the implicit time, having taken care of
the particular features of this wide class of products.
It must also be considered that a return-target product, even if
not traded in a secondary market, can embed in its financial engi-
neering callable components that give it a kind of liquidity feature.
These features should be properly studied in order to assess the
possibility of indicating to the investors at least a minimum time
horizon determined in line with the logic of the cost recovery, since
an implicit time horizon is otherwise hard to disclose, as discussed
in Section 4.2.
Moreover, although illiquid, a return-target product could be
assisted by some services of liquidability enhancement aimed at
making the investment more appealing and increasing the likeli-
hood of an early redemption under secure conditions. These services
(which may exist even in the absence of a secondary market, for
example, when it is replaced by alternative trading venues specif-
ically arranged by the issuer) can be provided either by a direct
intervention into the product's engineering or by introducing spe-
cific micro-structural rules in the trading venue where it is possible
to disinvest. As in the case of the credit spread locking described in
Section 2.2, liquidability enhancements work to mitigate the possi-
bility of negative fluctuations in the price of the product and, there-
fore, the risk of closing the investment at a loss before maturity.
Therefore, they have a close connection with the concept of break-
even cost since they are typically intended to favour the occurrence
of this event. Clearly, in order to adequately assess the impact of liq-
uidability solutions and to determine, in this perspective, the min-
imum time for the cost-recovery event, the specific characteristics
of these solutions must be studied and, at the same time, the other
aspects that qualify the risk profile of the return-target product need
to be properly analysed too.
In general terms, in the attempt to apply the logic of cost recov-
ery to return-target products via the tool of first-passage time
distributions, the following should be carefully considered.
1. Return-target products are usually characterised by a natural
death that defines a clear time limit for the simulated trajecto-
ries of the product; a minimum time horizon, if it exists, must
always be less than or equal to the implicit time horizon.6
2. Each return-target product is built by using a proprietary finan-
cial engineering scheme that uniquely characterises the prod-
uct in terms of risk and return over different implicit time hori-
zons. Consequently, the definition a priori of some standard
properties of a general methodology, such as those stated in
Section 4.1, is not justified in this framework, since the class
of return-target products lacks a homogeneous and shared
modelisation like the risk-target and benchmark classes. This
can be easily argued by observing that the dynamics of the
Figure 4.28 Plot of the function Q(τS,NC ≤ t) with respect to the time t
for a liquid return-target product with implicit time horizon of five years
(ic = 2%). [Q plotted against t (years); the mass not recovered before
maturity cumulates at T = 5.]

value of return-target products over time obviously cannot be
described by simple stochastic differential equations as the one
used in Definition 4.1.
3. Leaving the products structure unchanged, an increase in the
costs charged must always imply an increase in the minimum
recommended time horizon.
The above stylised facts imply that the cumulative probability dis-
tributions of the first-passage times associated with different return-
target product are not directly comparable, since they are representa-
tive of very diverse financial structures and usually refer to different
implicit time horizons.
Figures 4.28 and 4.29 present the cumulative probability distribu-
tions of the first hitting times associated with two different liquid
return-target products expiring in 5 and 10 years, respectively.
Since, after maturity T, both the products cease to exist, the trajec-
tories of the products' values that do not hit the cost-recovery barrier
before the implicit horizon T cumulate at this date. This technical
trick is due to the fact that in this environment it is not possible to
associate the event of not recovering the costs with a hitting time
equal to infinity, since the interval of possible times is closed, mean-
ing that the minimum recommended time horizon must belong to
[0, T ].


Figure 4.29 Plot of the function Q(τS,NC ≤ t) with respect to the time t
for a liquid return-target product with implicit time horizon of 10 years
(ic = 2%). [Q plotted against t (years); the mass not recovered before
maturity cumulates at T = 10.]

Given these premises, the idea "more volatility, more time" that
drives the procedure for the determination of the minimum rec-
ommended time horizon in the case of risk-target and benchmark
products becomes less plausible,7 while the problem of properly
exploiting the information contained in the probability distribution
of the first hitting times according to principles 1 and 3 of Section 4.1
remains open. Again, the choice of a unique confidence level for all
the products belonging to the category of return-target products that
unambiguously identifies the time horizon on the cumulative prob-
ability distribution function is not so appealing, since it introduces
a high degree of arbitrariness, which is hard to justify.
A practical and sound solution, very straightforward to imple-
ment, is to abandon the intuition to connect the minimum recom-
mended time horizon with a specific level of probability of cost
recovery, and to calculate the expected value of the random vari-
able τS,NC, after having properly modelled the stochastic pro-
cess {St}t≥0 of the product's value. In formal terms, this implies the
following definition of minimum recommended time horizon for
return-target products.

Definition 4.71. The minimum recommended investment time
horizon for a return-target product is the time T* ∈ ]0, T] such that

T* = EQ[τS,NC]   (4.218)


where T is the implicit time horizon associated with the specific
return-target product, if any.

This approach shows many valuable advantages, most of which
are related to its ability to ensure the consistency between the min-
imum recommended time horizon and the implicit time horizon of
the return-target product, if this latter horizon is not stochastic.

(a) Since the domain of the probability density function is finite,
the expected value always assumes finite values8 that are
contained in the interval [0, T].
(b) If the value of the product never hits the barrier of cost recovery
within the implicit time T, then the minimum recommended
investment time horizon will coincide with the implicit time,
as happens for non-liquid investments.
(c) The time horizon produced by Expression 4.218 satisfies the
principle of coherence (principle 3) with respect to the costs
stated in Section 4.1.
(d) Calculating the time horizon by using the expected value over-
comes the problem of defining a reasonable criterion for the
selection of the confidence level.

From a general point of view, the expected value conveys to the
investors the core information embedded in the probability distri-
bution of the first hitting times. In fact, a minimum time very close to
the maturity of the product would be indicative of a liquid product
that cannot be considered liquidable by the investor, since the chance
of cost recovery prior to the expiry date would be small. Conversely,
a time horizon close to 0 would be a signal of a financial structure
that is able to offer many chances of liquidating the investment in
favourable conditions well before the implicit time T.
The risk of the product does not show a functional relationship
with Expression 4.218: in fact, a higher level of variability of
the simulated trajectories tends to increase the chance of hitting
the barrier very early, but also the trajectories that do not recover
the costs and cumulate at maturity tend to rise. The resulting effect
is uncertain but somewhat balanced, since it is unlikely that high-
volatility products will be associated with very short and very long
time horizons.
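The expectation rule of Definition 4.71 can be sketched numerically under a GBM placeholder (as noted above, actual return-target payoffs are generally not describable by such a simple SDE, and every parameter here is illustrative): trajectories that never recover the costs are assigned τ = T, so the expectation always lies in ]0, T].

```python
import numpy as np

def expected_recovery_time(ic=0.02, drift=0.035, sigma=0.10, T=5.0,
                           spy=252, n_paths=100_000, seed=11):
    """Sketch of Definition 4.71 under a GBM placeholder: simulate daily
    up to the implicit horizon T; paths that never recover the cost ic
    get tau = T, so tau lies in ]0, T] and T* = E[tau].
    All parameters are illustrative."""
    rng = np.random.default_rng(seed)
    n_steps, dt = int(T * spy), 1.0 / spy
    barrier = -np.log(1.0 - ic)
    x = np.zeros(n_paths)
    tau = np.full(n_paths, T)            # non-recovering paths cumulate at T
    alive = np.ones(n_paths, dtype=bool)
    for step in range(1, n_steps + 1):
        x[alive] += (drift - 0.5 * sigma**2) * dt \
            + sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
        hit = alive & (x >= barrier)
        tau[hit] = step * dt
        alive &= ~hit
    return tau.mean()

t_star = expected_recovery_time()
print(t_star)   # minimum horizon; raising the cost ic lengthens it
```

Note that raising the cost loading pushes the barrier further away and so lengthens T*, in line with the coherence principle with respect to costs stated in Section 4.1.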


4.3 CLOSING REMARKS


The third pillar of the risk-based approach is the recommended
investment time horizon. This acts as an indicator which expresses
a recommendation regarding the holding period of the non-equity
product, and is formulated in relation to the specific financial struc-
ture of the product in question (as categorised into one of the three
types discussed in Chapter 1) and consistent with the information
provided by the first two pillars of the approach.
Return-target products and products backed by a financial guar-
antee are contingent claims whose payoffs depend on the behaviour
of the underlying assets according to specific rules established over a
precise time horizon which is therefore implicit in their engineering.
Inside the first pillar, the price discovery process and the represen-
tation of the performance risk are carried out with reference to the
beginning and end of this horizon through the financial investment
table and the table of probabilistic scenarios, respectively. Moreover,
the degree of risk for these products arises from the study of their
potential returns over exactly the same time horizon. Accordingly,
as argued in Section 4.2, the implicit time horizon unambiguously
identifies the recommended holding period for the investment. If
the product is liquid or is assisted by some liquidity feature (such
as in the presence of callability or other similar provisions) or by
some liquidability enhancement that favours an earlier redemption
under partially safe conditions, then an additional minimum time
horizon can be determined to complete the information given to
investors. In fact, as explained in Section 4.2.2, the possibility of an
early exit justifies the indication of the minimum maturity within
which this possibility is exploited in a prudential way, ie, having at
least recovered, on average, the costs of the product.
The concept of cost recovery is the cornerstone of the method-
ology developed to determine the minimum recommended invest-
ment time horizon for risk-target and benchmark products whose
financial structure lacks an implicit time horizon. In fact, in order to
define a meaningful criterion to give useful advice about the invest-
ment time horizon, the solution proposed by the third pillar is the
minimum time within which the break-even of the costs will be
achieved in a well-specified probabilistic characterisation.
As clarified in Section 4.1, such a probabilistic characterisation
must adhere to some general intuitive principles of coherence,
mainly concerning the increasing monotonicity of the minimum
time horizon with respect to the risk of the product. This "more
volatility, more time" requirement ensures the correct ordering of
products that have equal cost regimes but different volatilities.
A first strong characterisation of the cost-recovery event is ex-
plored in Section 4.1.1: the recommended time horizon is the time
at which the probability of reaching the barrier of the cost recov-
ery takes a predetermined value. However, this characterisation is
unable to allow a correct ordering and it is therefore discarded in
favour of a weak probabilistic characterisation of the cost-recovery
event that, under some well-posed conditions, is consistent with the
above requirement. The weak characterisation is based on the prob-
ability of the cost-recovery event: for any given time t this event
indicates that the product recovers the initial and running costs at
least once in the finite time interval ending at t. This probability
belongs to the cumulative distribution function of the first-passage
times of the products value for the cost-recovery barrier.
In a continuous setting where the stochastic process of the prod-
uct's value is described by a geometric Brownian motion, the closed
formula for the probability of cost recovery can be derived (see
Section 4.1.3) and analysed as a function P(σ, t) of volatility and
time, leading to some interesting findings reported in Sections 4.1.4
and 4.1.5. First, the cost-recovery event can have a maximum attain-
able probability which is less than 1 and specific for any product,
hence implying that any approach aimed at determining the recom-
mended time horizon by fixing a unique target probability must be
discarded. Secondly, when the process of the product's value has
a positive drift, there exists an admissible region of times where
the function P(σ, t) behaves consistently with the principle "more
volatility, more time": it is the region where the partial derivative
with respect to the volatility is negative.
These findings suggest a simple rule to determine the minimum
recommended time horizon as the time at which the aforementioned
partial derivative is equal to zero. As shown in Section 4.1.6, this
intuition is confirmed by a fundamental theorem of existence and
uniqueness of a minimum time horizon (and a corresponding mini-
mum admissible probability of cost recovery) compliant with a cor-
rect ordering of different products, although only locally. By study-
ing the behaviour of the function of these minimum times as the
volatility varies, another fundamental theorem is derived (see Sec-
tion 4.1.8). If the requirement for a strong increasing monotonic-
ity of the minimum time with respect to the volatility is replaced
with a weak monotonicity requirement, then, under suitable con-
ditions, the minimum recommended time horizon can always be
determined consistently with a correct ordering of products with
different volatilities, in a global perspective.
Moving to a discrete volatility setting, these results are easily pre-
served with a few adjustments, as discussed in Section 4.1.9. They
also hold when more general dynamics for the value process of the
product are assumed (see Section 4.1.10), except in the case of a
stochastic drift which is not necessarily compatible with the key
condition of a positive drift. In this case it is therefore preferable to
use a time-varying deterministic drift, which, although neglecting
the randomness of the interest rates, has no adverse consequences.
Section 4.1.11 explains that, in a discrete-time–discrete-volatility
environment, the minimum recommended time horizon consistent
with the results obtained in the continuous setting is determined by
intersecting the cumulative distribution function of the first hitting
times of the product and those of products with a slightly different
volatility.
This chapter completes the part of this book devoted to describing
the quantitative methodologies underlying each of the three pillars
of the risk-based approach and the relationships existing between
these synthetic indicators. The next chapter will present the concrete
application of this approach to assess the risk-reward profile of some
non-equity products quite common in financial markets.

1 For return-target products traded on efficient markets or endowed with liquidability enhance-
ments that allow an early exit from the investment under favoured economic conditions, some
other elementary properties have to be taken into account. See Sections 4.2 and 4.2.2.
2 It is worth recalling that, for r − rc − ½σ² > 0, by Proposition 4.25 it holds that
lim t→∞ P(σ, t) = 1

3 It is worth recalling that, by Proposition 4.25, the asymptote has an explicit formula, ie,
(NC/S0)^(2(r−rc)/σ² − 1).

4 Moreover, when considering actively managed benchmark products, the correct representa-
tion of their random dynamics may require the addition of some noise to the benchmark's
stochastic volatility model in order to make the recommended minimum time horizon sensi-
tive to the effect of possible departures from the benchmark due to specific decisions by the
asset manager.
5 Analogous considerations can easily be extended to non-equity products assisted by financial
guarantees.

minenna 2011/9/5 13:53 page 273 #305


THE THIRD PILLAR: RECOMMENDED INVESTMENT TIME HORIZON

6 This constraint does not apply only if the return-target product does not have an implicit (and
non-stochastic) time horizon.
7 Moreover, the technical difficulty of properly identifying and handling an unambiguous
source of volatility, given the heterogeneity of the financial structures that belong to the category
of return-target products, seems very hard to overcome.
8 This condition is not satisfied in the framework described in Section 4.1. In fact, if
(r − r_c − σ²/2) < 0, then E^Q[τ_{S,C_N}] = ∞.

Some Applications of the Risk-Based Approach

This chapter presents the concrete application of the risk-based
approach to some non-equity financial products. Although the
examples reported do not represent specific real products, they will
show how the three pillars of this approach are effectively able to
disclose in a clear, synthetic and objective way all of the relevant
information about the risks embedded in the financial structure of
every product that can be offered in the market.
Five non-equity structures will be illustrated and analysed in the
sections below, namely:
1. a risk-target product;
2. a benchmark product;
3. a plain-vanilla bond with a significant credit risk exposure;
4. a variable proportion portfolio insurance (VPPI) product;
5. an index-linked certificate.
The above products cover the three main financial structures that
compose the universe of non-equity products according to the clas-
sification outlined in the introduction of this book and taken as
fundamental reference throughout all the chapters.
The first two examples considered regard two open-ended mutual
funds whose financial engineering and asset-management policy
perfectly qualify them as a risk-target product and a benchmark
product, respectively.
The last three examples are representative of some of the many
possible specifications of return-target structures. In these structures
the key role of financial engineering is often more difficult to prop-
erly capture and understand, and it can arise from a variety of risk
factors. The plain-vanilla bond considered is mainly exposed to the
risk of the occurrence of a credit event related to the reference entity
(in this example, the issuing bank) and to the inherent variability
of the term structure of interest rates; the VPPI product is subject
both to interest rate risk and to the risk of failure of the protection
mechanism; the last product is a pure derivative and it is completely
exposed to market risk. In order to handle the greater richness
and complexity of financial solutions that characterise return-target
products, a more detailed information set is provided by the risk-
based approach. In fact, the financial investment table, the degree of
risk and the recommended investment time horizon are flanked by
the table of probabilistic scenarios.
For each of the five products considered, the corresponding prod-
uct information sheet, which contains a brief description of the prod-
uct's characteristics and the synthetic indicators identified by the
three quantitative pillars, will be reported. Useful comments and
technical details will also be provided. In particular, where necessary,
the following are reported:
• a chart of the historical (or simulated) time series of the
variables involved in the analysis of the product;
• a chart of the cumulative probability distribution of the first hit-
ting times of the cost-recovery barrier, necessary to determine
the minimum recommended investment time horizon;
• a chart of the probability densities of the final values of the
product and of the risk-free asset calculated at the implicit
time horizon (only for the three return-target products).
The variables connected with the various risk factors underly-
ing the products have been calibrated using data that refers to the
Eurozone market conditions in December 2010.
In order to complete the illustration of the possible applications
of the risk-based approach, the last section of this chapter gives an
example of a non-equity structure, specifically an interest rate collar,1
that intervenes to modify the cashflows and the risk profile of an
existing fixed-rate liability. The probabilistic comparison of the final
values of the liability before and after the insertion of the collar is
performed according to the methodology described in Section 2.3.4
and it allows the assessment of whether the switch to the new struc-
tured liability (where the collar replaces the fixed-rate) is or is not
suitable.


Figure 5.1 Product information sheet for a risk-target non-equity product

5.1 A RISK-TARGET PRODUCT

The non-equity product considered in this section belongs to the
class of risk-target structures. It is an open-ended mutual fund newly
instituted with a flexible asset-management style. The asset man-
ager can select the assets to be included in the portfolio of the fund
with the maximum freedom (no constraints to pick any specific asset
class), provided that the annualised volatility of the fund's returns
is constantly maintained inside the 2–4% range. Subject to this con-
dition, the asset manager will work to maximise the performances
of the product by adopting the investment policy that they consider
most appropriate on the basis of their skills and knowledge of the
markets.
The fund is charged with an entry fee of 1.5% on the notional
amount and with running costs of 1% on an annual basis. No exit
charges are applied.
Figure 5.1 shows the product information sheet for this fund,
which gives a brief overview of the product, followed by the repre-
sentation of its risk-return profile according to the three pillars
of the risk-based approach.


Figure 5.2 Plot of the cumulative distributions of the first-passage
times used to determine the minimum recommended investment time
horizon of the risk-target non-equity product. [Chart: cumulative
probability from 0.60 to 1.00 over t = 0–6 years, with three curves for
volatilities of 2%, 3% and 4%.]

The degree of risk of the fund is medium; this classification follows
from mapping the target volatility range of the product (ie, 2–4%) to
the optimal grid reported in Figure 3.1.
The minimum recommended investment time horizon is calcu-
lated by applying the methodology described in Section 4.1. Fig-
ure 5.2 shows the cumulative probability distribution of the first hit-
ting times corresponding to the cost-recovery event for the mutual
fund analysed. Similar cumulative probability distributions also
characterise funds with the same costs and a volatility close to
that of the product examined. The minimum time horizon of four
years is easily identified by implementing the practical procedure
described in Section 4.1.11.
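The practical procedure of Section 4.1.11 can be sketched with a few lines of Monte Carlo code. The sketch below assumes, purely for illustration, a geometric Brownian motion for the net value of the fund, a 2% risk-free rate, the fee structure above and a cost-recovery barrier equal to the subscribed notional; the way the three cumulative curves are compared at the end is a simplified stand-in, not the book's exact intersection criterion.

```python
import numpy as np

def first_passage_cdf(sigma, r=0.02, entry_fee=0.015, running_cost=0.01,
                      horizon=6.0, steps_per_year=252, n_paths=5000, seed=0):
    """Empirical CDF of the first time the net value of the fund (a GBM,
    started net of the entry fee and drained by the running cost)
    recovers the subscribed notional, here normalised to 1."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / steps_per_year
    n_steps = int(horizon * steps_per_year)
    log_v = np.full(n_paths, np.log(1.0 - entry_fee))
    hit_time = np.full(n_paths, np.inf)
    drift = (r - running_cost - 0.5 * sigma**2) * dt
    vol = sigma * np.sqrt(dt)
    for k in range(1, n_steps + 1):
        log_v += drift + vol * rng.standard_normal(n_paths)
        newly_hit = np.isinf(hit_time) & (log_v >= 0.0)  # barrier: log(1) = 0
        hit_time[newly_hit] = k * dt
    grid = np.linspace(0.0, horizon, 25)
    return grid, np.array([(hit_time <= t).mean() for t in grid])

# Three curves: the product's volatility and two slightly perturbed ones
grid, cdf_mid = first_passage_cdf(0.03)
_, cdf_lo = first_passage_cdf(0.02)
_, cdf_hi = first_passage_cdf(0.04)
# Where the three curves become nearly indistinguishable, the minimum
# recommended time horizon can be read off the time axis.
gap = np.abs(cdf_hi - cdf_lo)
```

In the book's example this reading gives the four-year minimum horizon; the precise rule for locating the intersection is the one detailed in Section 4.1.11.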
The overall charges are calculated on the minimum recommended
investment time horizon by adding the entry fee to the discounted
expected value of the running costs charged over the period corre-
sponding to such a minimum time horizon in accordance with the
discussion in Section 2.2.
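That calculation can be sketched with a deterministic simplification: assuming, for illustration only, that under the risk-neutral measure the discounted expected value of the fund equals the amount invested net of the costs already charged, each year's running cost in present-value terms is simply the annual fee applied to that amount.

```python
def overall_charges(entry_fee=0.015, running_cost=0.01, horizon_years=4):
    """Entry fee plus the (simplified) discounted expected running
    costs charged over the minimum recommended time horizon."""
    invested = 1.0 - entry_fee
    total = entry_fee
    for _ in range(horizon_years):
        charge = running_cost * invested
        total += charge
        invested -= charge
    return total

print(f"{overall_charges():.2%}")  # about 5.4% of the notional over four years
```

This is only an order-of-magnitude check; the book's overall-charges figure follows the full methodology of Section 2.2.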

5.2 A BENCHMARK PRODUCT


The non-equity product considered in this section belongs to the
class of benchmark structures. It is an open-ended mutual fund
whose investment policy is anchored in the behaviour of an equity


Figure 5.3 Product information sheet for a benchmark non-equity product

Figure 5.4 Historical values and daily returns of the benchmark
non-equity product (October 2009–January 2011): (a) net asset value;
(b) daily returns. [Chart: panel (a) shows the net asset value (roughly
80–115); panel (b) shows the daily returns in per cent.]

index that summarises the trend of the main sectorial leaders in
the Eurozone, namely the EuroStoxx-50. Deviations from the bench-
mark can arise due to the role played by an active management style
aimed at beating the benchmark.


Figure 5.5 Historical annualised daily volatility of the benchmark
non-equity product (October 2010–January 2011). [Chart: volatility
between 22% and 25% over the period.]

The mutual fund is five years old. Consequently, in order to deter-
mine its current degree of risk, the methodology described in Sec-
tion 3.7 must be applied. Specifically, the annualised volatility of the
daily returns of the product over the last year needs to be calculated
for three consecutive months, in order to map the resulting values
with respect to the optimal grid reported in Section 3.5.4.
The costs incurred by the investors are given by an entry fee of
1.8% on the notional amount and an annual ongoing charge of 1.5%.
No exit charges are applied.
Figure 5.3 shows the product information sheet for this fund
which gives a brief overview of the product, followed by the repre-
sentation of its risk-return profile according to the three pillars
of the risk-based approach.
Figure 5.4 illustrates the historical time series of the product's
value together with the corresponding daily returns over the period
October 2009–January 2011. In fact, in line with the contents of Sec-
tion 3.7, a dataset of 15 months of raw data is necessary to determine
the historical time series of the volatility to be used for the calculation
of the degree of risk of the product.
Figure 5.5 illustrates the historical time series of the annualised
volatility over the period October 2010–January 2011. Each volatility
value is calculated using the last 252 daily returns.
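This rolling computation can be sketched as follows; the simulated returns below merely stand in for the fund's actual series, which is not reproduced here.

```python
import numpy as np

def rolling_annualised_vol(daily_returns, window=252):
    """Annualised standard deviation of the last `window` daily returns,
    computed for every day on which a full window is available."""
    r = np.asarray(daily_returns, dtype=float)
    out = []
    for end in range(window, len(r) + 1):
        out.append(r[end - window:end].std(ddof=1) * np.sqrt(252.0))
    return np.array(out)

# Example on simulated returns with a true annualised volatility of 24%;
# ~15 months of raw data (315 trading days) yield ~3 months of values.
rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.24 / np.sqrt(252.0), size=315)
vols = rolling_annualised_vol(returns)
```

Each of the resulting values is then mapped to the optimal volatility grid to obtain the degree of risk.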
During these three months the annualised volatility of the prod-
uct's daily returns was between 23% and 25%. By comparing these


Figure 5.6 Plot of the first-passage times cumulative distributions used
to determine the minimum investment time horizon of the benchmark
product. [Chart: cumulative probability from 0.950 to 0.975 over
t = 5–20 years, with three curves for volatilities of 21%, 23% and 25%.]

values with the seven volatility intervals of the optimal volatility


grid reported in Figure 3.1, it therefore follows that the risk class of
this fund is high.
The minimum recommended investment time horizon is calcu-
lated by applying the methodology described in Section 4.1. Fig-
ure 5.6 shows the cumulative probability distribution of the first
hitting times corresponding to the cost-recovery event for the fund
analysed. Similar cumulative probability distributions also charac-
terise funds with the same costs and a volatility close to that of the
product examined. The minimum time horizon of 13 years is eas-
ily identified by implementing the practical procedure described in
Section 4.1.11.
The overall charges are calculated on the minimum recommended
investment time horizon by adding the entry fee to the discounted
expected value of the running costs charged over the period spanned
by such a minimum time horizon in accordance with Section 2.2.

5.3 RETURN-TARGET PRODUCTS: THE CASE OF A
PLAIN-VANILLA BOND WITH SIGNIFICANT CREDIT RISK
The non-equity product considered in this section belongs to the
class of return-target structures. It is a five-year senior bond with
a mixed coupon structure (fixed and then floating plus a spread)


issued by a bank whose average annual credit spread over the period
spanned by the life of the product is around 125 basis points (bp).
The issuer provides a service of liquidability enhancement, which
consists of locking the credit spread used to determine the fair value
of the bond on the secondary venue at a value fixed at a time
close to the issue date, and of the commitment to buy back the
bond at any early date decided by the investor.
Figure 5.7 shows the product information sheet for this bond,
which gives a brief overview of the product, followed by the repre-
sentation of its risk-return profile according to the three pillars
of the risk-based approach.
The values reported in the financial investment table (including
the fair value and its decomposition in the risky and the risk-free
components) are calculated according to the methodology described
in Sections 2.2 and 2.3.1. The net fair value represents the expected
discounted value of the pure payoffs structure of the product, while
the gross fair value (which includes the fair value of the service
of liquidability enhancement) also depends on the specific micro-
structural conditions of the trading venue available if exiting before
maturity. Investors interested in the possibility of an early redemp-
tion should consider the liquidability enhancement as part of the fair
value of the product, while buy-and-hold investors should consider
the value of this service as a pure cost item.
Figure 5.8 illustrates the graphical comparison between the risk-
neutral densities of the final values of the product and of the risk-free
asset, respectively.
The table of probabilistic scenarios as it appears in the product
information sheet of Figure 5.7 is obtained by applying to the above
densities the superimposition technique described in Section 2.3.3,
Chapter 2.
The bimodal shape exhibited by the probability density of the
bond reflects the default risk of the issuer under the standard hypoth-
esis of a recovery value of 40% for a senior note. The probability of
realising negative returns is around 9.5% and it discloses clearly the
credit risk of the issuer as resulting from an annual average credit
spread of around 125bp over a time horizon of five years. Most of
the remaining probability mass of this bond is placed in the scenario
in line with the final value of the risk-free asset which, in fact,
has a probability of more than 87%. In order to better explain the


Figure 5.7 Product information sheet for a return-target non-equity
product: a five-year senior bond

behaviour of the product with respect to the risk-free asset, the con-
ditional mean of the non-equity product corresponding to this sce-
nario is flanked by the mean value of an investment in the risk-free
asset with the same maturity. This choice follows from the analysis
performed in Section 2.3.3. In the case considered in this example,
this additional information clarifies that the bond performs better
than the risk-free asset, the former having a mean value of 115.6 and
the latter of 114.1.
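The order of magnitude of that 9.5% probability can be checked with the standard credit-triangle approximation; this is an illustration of the consistency between spread and default risk, not the book's pricing model.

```python
import math

spread = 0.0125    # average annual credit spread (125bp)
recovery = 0.40    # standard senior recovery assumption
horizon = 5.0      # years

# Credit triangle: constant hazard rate implied by the spread,
# lambda = s / (1 - R); cumulative default probability over T years
hazard = spread / (1.0 - recovery)
p_default = 1.0 - math.exp(-hazard * horizon)
print(f"implied 5y default probability: {p_default:.1%}")
```

The resulting figure of roughly 9.9% is of the same order as the 9.5% obtained from the full risk-neutral simulation, which also accounts for the coupon stream and the dynamics of the term structure.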
In order to complete the representation of the risk-return profile
of this product, it can be seen that the spread paid by the issuer


Figure 5.8 Partition of the risk-neutral density of the bond with respect
to the point of zero return and to the two fixed positive thresholds
identified with the superimposition technique

[Chart: probability densities of the risk-free asset and of the product
over final values from 20 to 180; the product's density is bimodal.]

(corresponding to an average value of 37bp) is not enough to com-
pensate for the issuer's credit spread of around 125bp, even if the value
of the liquidability enhancement service is included in the assessment.
This point can be appreciated by looking at the fair value reported in
the financial investment table, which is strictly less than 100 even when
the liquidability service is taken into account (gross fair value equal to
97.9%), with the impact of the costs quantified in percentage terms
as 2.1%.
Figure 5.9 illustrates some simulated trajectories of the bond over
the implicit time horizon of five years.
The degree of risk is medium, corresponding to an average
annualised volatility of around 3.8%. As explained in Section 3.6,
the risk classification of this product requires the preliminary assess-
ment of all the risk factors of the investment including the credit risk.
It is worth recalling that the effect of the latter source of risk on the
value of the product over time can be captured by looking at the
jumps experienced by some trajectories which default.
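The Section 3.6 statistic can be sketched directly: given a matrix of simulated daily product values, compute the annualised volatility of each trajectory's daily log-returns and average across trajectories. The GBM paths below are only a toy input with a known 3.8% volatility, used to check the estimator.

```python
import numpy as np

def average_annualised_vol(paths):
    """paths: (n_paths, n_days + 1) matrix of simulated product values.
    Returns the mean across paths of the annualised volatility of each
    path's daily log-returns (the Section 3.6 statistic)."""
    log_ret = np.diff(np.log(paths), axis=1)
    per_path_vol = log_ret.std(axis=1, ddof=1) * np.sqrt(252.0)
    return per_path_vol.mean()

# Toy check on GBM paths simulated at a known 3.8% annualised volatility
rng = np.random.default_rng(2)
n_paths, n_days, sigma = 500, 252 * 5, 0.038
shocks = rng.normal(-0.5 * sigma**2 / 252, sigma / np.sqrt(252),
                    (n_paths, n_days))
paths = 100.0 * np.exp(np.cumsum(shocks, axis=1))
paths = np.hstack([np.full((n_paths, 1), 100.0), paths])
```

The number obtained this way is then mapped to the optimal grid of Figure 3.1 to read off the risk class.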
The recommended investment time horizon corresponds to the
maturity of the bond and it is therefore equal to five years. More-
over, the presence of a form of liquidability enhancement aimed
at increasing the likelihood of an early redemption under secure
conditions can be disclosed to the investor by supplementing the


Figure 5.9 Trajectories of a five-year senior coupon bond. [Chart:
simulated bond values between 40 and 140 over t = 0–5 years.]

Figure 5.10 Plot of the cumulative probability distribution of the
first-passage times used to determine the minimum time horizon of the
bond (blue line). [Chart: cumulative probability from 0 to 1 over
t = 0–6 years.] The red line represents the unique possible holding
period for this product under the hypothesis of absence of the
liquidability enhancement.

information on the recommended time horizon with the indication
of the minimum time horizon obtained according to the criterion
of the cost recovery, as explained in Section 4.2. Figure 5.10 shows
the cumulative probability distribution (the blue line) of the first
times the value of the bond hits the barrier of the issue price; the
expected value of the corresponding density function is around 1.5


years. It is worth recalling that this information is valuable only for
those investors who are interested in the possibility of liquidating
the bond before maturity, while for buy-and-hold investors the
minimum time horizon collapses into the one which is implicit in
the financial structure of the bond as represented by the red line, ie,
the probability distribution of the first hitting times when no early
exit is allowed.
Figure 5.10 demonstrates a final technical detail in this regard,
namely, the difference in the number of trajectories that do not
recover the costs until maturity. This number is higher in the case
of complete illiquidity of the bond, since the investors could not
benefit from an early exit from the investment and would bear the
complete exposure to the credit risk of the issuer for the entire life
of the product.

5.4 RETURN-TARGET PRODUCTS: THE CASE OF A VPPI PRODUCT
The non-equity product considered in this section belongs to the
class of return-target structures. It is a typical protected fund whose
investment policy over time is driven by a VPPI technique; in partic-
ular, its financial engineering is aimed at protecting the initial value
of the financial investment over a time horizon of five years and,
at the same time, obtaining possible gains by limited exposure to
the equity markets. For this purpose, the product involves a low-
risk component, which is mainly invested in monetary assets, and a
high-risk component, which is allocated in risky assets (like equity
indexes) with a volatility of around 30%. At any given time t, the
percentage allocation of the total value of the product between these
two asset classes is determined by a specific algorithm that depends
on the value of a cushion defined as follows

Cushion_t = NAV_t − Floor_t

where NAVt denotes the net asset value of the portfolio at time t and
Floort denotes the value at time t of the capital to be protected at the
five-year maturity.
The exposure in the risky asset is variable over time according to
the following relationship

E_t = M × Cushion_t


Figure 5.11 Product information sheet for a return-target non-equity
product: a five-year VPPI structure

where M is a multiplicative factor that depends on the value of the
cushion at time t − 1 according to the following scheme

          1 if Cushion_{t−1} < 5
    M =   2 if 5 ≤ Cushion_{t−1} < 20
          3 if Cushion_{t−1} ≥ 20
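The allocation rule can be sketched in a few lines. Two details are assumptions of this sketch rather than specifications from the text: the cushion is measured per 100 of notional, and the floor is taken as the protected capital discounted at the risk-free rate.

```python
import math

def multiplier(prev_cushion):
    """The M scheme from the text, driven by the previous cushion
    (measured per 100 of notional: an assumption of this sketch)."""
    if prev_cushion < 5:
        return 1
    if prev_cushion < 20:
        return 2
    return 3

def risky_exposure(nav, t, prev_cushion, protected=100.0,
                   maturity=5.0, r=0.02):
    """E_t = M x Cushion_t, with Floor_t taken as the protected capital
    discounted at the risk-free rate (an assumption of this sketch)."""
    floor = protected * math.exp(-r * (maturity - t))
    cushion = nav - floor
    exposure = multiplier(prev_cushion) * max(cushion, 0.0)
    return min(exposure, nav), cushion  # exposure cannot exceed the NAV
```

For instance, with a NAV of 102 one year into the product and a previous cushion of about 9.7, the floor is about 92.3, the cushion about 9.7 and the risky exposure about 19.4 (M = 2).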

The product is charged with an entry fee of 1.8% on the notional
amount and with an annual ongoing charge of 1% that is taken daily
from its value. No exit charges are applied.
Figure 5.11 illustrates the product information sheet for this VPPI
structure, which gives a brief overview of the product followed by
the representation of its risk-return profile according to the three
pillars of the risk-based approach.


Figure 5.12 Partition of the risk-neutral density of the VPPI product
with respect to the point of zero return and to the two fixed positive
thresholds identified with the superimposition technique. [Chart:
densities of the product and of the risk-free asset over final values
from 0 to 300.]

The values reported in the financial investment table (including
the fair value and its decomposition in the risky and the risk-free
components) are calculated according to the methodology described
in Sections 2.2 and 2.3.1 of Chapter 2.
Figure 5.12 compares the risk-neutral densities of the final values
of the product and the risk-free asset.
The table of probabilistic scenarios as it appears in the product
information sheet of Figure 5.11 is obtained by applying to the above
densities the superimposition technique described in Section 2.3.3.
The significant probability of experiencing a negative return at matu-
rity (almost 37%) must be read together with the high conditional
mean value of this scenario; in fact, this mean value is close to
97% of the initial investment, showing that, overall, the
protection mechanism is working in a reasonable way and that the
conditional expected loss at expiry is mainly due to the cost regime
of the product. Moreover, it should be noted that, despite the
high probability of the product being in line with the final value
of the risk-free asset, the performances attainable in this scenario
are not so appealing and are likely to be close to the lower threshold
used to define the scenario itself. This consideration arises naturally
by observing that, conditional on this macro-event, the final value of the
investment is on average around 107, a performance quite inferior
to that offered (always on average) by the risk-free asset (around


Figure 5.13 Trajectories of the five-year VPPI product. [Chart:
simulated VPPI values between 80 and 180 over t = 0–5 years.]

114). The probability table also offers a clear hint about the chance
of the product performing very well: the scenario higher than the
final value of the risk-free asset shows a non-negligible probability,
complemented by a very high mean value. In these cases the prod-
uct is exploiting the favourable conditions on the equity markets,
increasing the portfolio exposure to the risky component.
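Schematically, the entries of the probability table come from partitioning the simulated final values of the product into the four macro-scenarios. The sketch below uses fixed illustrative thresholds (110 and 120) in place of those actually produced by the superimposition technique of Section 2.3.3.

```python
import numpy as np

def scenario_table(final_values, issue_price=100.0, lower=110.0, upper=120.0):
    """Probability and conditional mean of each macro-scenario.
    `lower` and `upper` stand in for the thresholds identified with
    the superimposition technique."""
    v = np.asarray(final_values, dtype=float)
    bands = {
        "negative return": v < issue_price,
        "positive but below risk-free": (v >= issue_price) & (v < lower),
        "in line with risk-free": (v >= lower) & (v < upper),
        "above risk-free": v >= upper,
    }
    return {name: (mask.mean(), v[mask].mean() if mask.any() else float("nan"))
            for name, mask in bands.items()}
```

Applied to the full set of risk-neutral simulations of the product, this partition yields the probabilities and conditional means reported in the table of probabilistic scenarios.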
Figure 5.13 illustrates some simulated trajectories of the VPPI
product over the implicit time horizon of five years.
This figure presents two different behaviours for the possible tra-
jectories: a first set of trajectories is characterised by low values for
the product and low volatilities; in these occurrences, the equity
markets perform poorly and so the managing algorithm switches to
relatively safe solutions of asset allocation. The second set of trajec-
tories shows high values for the portfolio and high volatilities that
correspond to a switch to risky positions performed by the managing
algorithm.
The degree of risk is determined according to the methodology
described in Section 3.6, that is, by taking the expected value of the
annualised volatilities of the daily returns of each simulated tra-
jectory and then comparing this number with the optimal grid of
Figure 3.1. The average annualised volatility obtained in this way
is around 5.6% and it therefore indicates that the risk class of the
product is mediumhigh. The recommended investment time hori-
zon is that implicit in the protection mechanism and it is therefore
represented by the contractual maturity of five years.


5.5 RETURN-TARGET PRODUCTS: THE CASE OF
AN INDEX-LINKED CERTIFICATE
The non-equity product considered in this section belongs to the
class of return-target structures. It is a five-year index-linked cer-
tificate characterised by a complex financial engineering that makes
intensive use of diverse derivatives components. The product pays
a stream of semiannual coupons whose size depends on the per-
formances of an equity index, namely the EuroStoxx-600. At any
payment date before maturity the performance of the index is mon-
itored: if the value of the index is greater than 50% of its value at the
issue date, the investor receives a fixed coupon equal to 4% of the
notional amount; otherwise the investor receives nothing. Moreover,
at maturity the investor may receive

1. a percentage of the notional amount subscribed that is equal
to the total performance of the index, if this latter performance
is below 50% of its value at the issue date,
2. the entire notional amount subscribed, if the value of the index
is above 50% of its value at the issue date, but less than twice
this value,
3. the entire notional amount subscribed, plus a bonus coupon
of 10%, if the value of the index is more than twice its value at
the issue date.

No early redemption is admitted.
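The coupon and redemption rules can be written down directly; the treatment of the exact boundary cases (an index exactly at 50% of, or at twice, its issue-date value) is an assumption of this sketch.

```python
def coupon(index_ratio, notional=100.0):
    """Semiannual coupon: 4% of the notional if the index is above 50%
    of its issue-date value, nothing otherwise."""
    return 0.04 * notional if index_ratio > 0.5 else 0.0

def redemption(index_ratio, notional=100.0):
    """Redemption at maturity following points 1-3 in the text, as a
    function of the ratio of the final to the issue-date index value."""
    if index_ratio < 0.5:           # point 1: partial redemption
        return notional * index_ratio
    if index_ratio <= 2.0:          # point 2: full notional
        return notional
    return notional * 1.10          # point 3: notional plus 10% bonus
```

For example, a final index at 40% of its initial value redeems 40 per 100 of notional, while a final index at 2.5 times its initial value redeems 110.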


Figure 5.14 illustrates the product information sheet for this cer-
tificate, which gives a brief overview of the product followed by the
representation of its risk-return profile according to the three
pillars of the risk-based approach.
The values reported in the financial investment table (including
the fair value and its decomposition into risky and risk-free com-
ponents) are calculated according to the methodology described in
Sections 2.2 and 2.3.1.
Figure 5.15 compares the risk-neutral densities of the final values
of the product and the risk-free asset.
The table of probabilistic scenarios as it appears in the prod-
uct information sheet in Figure 5.14 is obtained by applying the
superimposition technique described in Section 2.3.3 to the above
densities. The probability of a negative return, supplemented with


Figure 5.14 Product information sheet for a return-target non-equity
product: a five-year index-linked certificate

information of a mean value of 49.1, quantifies the risk of a capi-
tal loss connected with the unfavourable evolution of the value of
the EuroStoxx-600 as described in point 1, above. The scenario in
line with the final value of the risk-free asset is mainly related to the
realisation of the event described in point 2 and depicts a situation in
which the investor completely recovers the issue price paid and also
receives some additional coupons; the conditional mean value of
this scenario is higher than the average performance of the risk-free
asset (120.9 versus 114.1), providing an interesting insight into the
inner workings of the financial engineering that cannot otherwise
be appreciated. The probability of being above the upper threshold


Figure 5.15 Partition of the risk-neutral density of the index-linked
certificate product with respect to the point of zero return and to the two
fixed positive thresholds identified with the superimposition technique.
[Chart: densities of the product and of the risk-free asset over final
values from 0 to 160.]

Figure 5.16 Trajectories of a five-year index-linked certificate. [Chart:
simulated certificate values between 0 and 140 over t = 0–5 years.]

of the risk-free asset and the associated mean value mainly reflects
the potential performances of the product when the event described
in point 3 occurs.
Figure 5.16 illustrates some simulated trajectories of the index-
linked certificate over the implicit time horizon of five years.
From a first-level analysis, it is easy to observe that the volatility
of the simulated trajectories tends to decrease over time since the
uncertainty about the possible payoffs naturally diminishes as time


Figure 5.17 Evolution of the interest rates over the period January
2008–April 2011. [Chart: time series of the 6-month Euribor, the
20-year IRS rate and the fixed rate of the liability.] Source: Datastream.

passes. In particular, it is possible to identify three different patterns


for the simulated trajectories, corresponding to, respectively, a full
capital redemption, the capital redemption plus an additional 10%
return and a last regime characterised by low values of the certificate
and high volatilities, which is connected with the partial redemption
of the notional amount.
The degree of risk is determined according to the methodology
described in Section 3.6, that is, by taking the expected value of the
annualised volatilities of the daily returns of each of the simulated
trajectories and then comparing the number thus calculated with
the optimal grid in Figure 3.1. The average annualised volatility
obtained in this way is around 13.04% and it therefore indicates that
the risk class of the product is high. The recommended investment
time horizon is the one implicit in the financial structure of the product
and is therefore represented by the contractual maturity of five years.

5.6 NON-EQUITY EXCHANGE STRUCTURES: THE CASE OF
A COLLAR REPLACING A FIXED-RATE LIABILITY
The non-equity exchange structure considered in this section is rep-
resented by an interest rate derivative, specifically an interest rate
collar, that modifies the cashflow profile of a pre-existing fixed-rate
liability. The original liability was signed in April 2008 with a 20-year


Figure 5.18 Summary of the characteristics of the old fixed-rate liability
and the new structured liability embedding the collar

maturity, ending in April 2028. The initial notional amount is equal
to €1,000,000, which is repaid according to a French amortisation
plan with constant annual instalments of €87,893. The fixed interest
rate to be paid annually on the outstanding debt is equal to 6.1%,
which corresponds to the 20-year swap rate (ie, 4.8%) observed at
the date of signing, plus a spread of 130bp.
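The constant instalment of a French amortisation plan is the standard annuity payment; a quick check in Python reproduces the figure quoted above.

```python
def french_instalment(principal, rate, years):
    """Constant annual instalment that fully amortises `principal`
    over `years` annual payments at the fixed interest `rate`."""
    return principal * rate / (1.0 - (1.0 + rate) ** -years)

# 1,000,000 notional at 6.1% over 20 years: roughly 87,893 per year
instalment = french_instalment(1_000_000, 0.061, 20)
```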
In September 2008, interest rates begin to decline, as shown in
Figure 5.17, thereby increasing the opportunity cost of this financial
liability.
Consequently, the debtor considers the possibility of a restructur-
ing of their liabilities in order to reduce the associated costs.
In April 2011, the debtor receives a proposal to exchange the
outstanding debt from a fixed to a floating rate, the latter being
confined within a well-specified range. In particular, the proposal
consists of replacing the fixed-rate cashflows with those arising from
an interest rate collar indexed to the six-month Euribor plus a spread of
240bp: if the resulting rate is below 4.5%, the debtor will have to make
a payment indexed to this minimum rate, while if the resulting rate
SOME APPLICATIONS OF THE RISK-BASED APPROACH

Figure 5.19 Density of the percentage variations in the funding costs
associated with the switch from the old liability to the new structured
liability embedding the collar

is above 8.5%, the debtor will have to pay a cashflow indexed only
to this maximum rate.
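The rate paid by the debtor under this proposal is therefore the spread-adjusted Euribor collared between the two thresholds; a minimal sketch:

```python
def collared_rate(euribor_6m, spread=0.024, floor=0.045, cap=0.085):
    """Annual rate paid by the debtor under the proposed collar:
    six-month Euribor plus the 240bp spread, bounded below by the
    4.5% minimum rate and above by the 8.5% maximum rate."""
    return min(max(euribor_6m + spread, floor), cap)
```

For example, with the six-month Euribor at 1% the debtor pays the 4.5% floor, while at 8% the payment is capped at the 8.5% maximum rate.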
The characteristics of the original liability (ie, the old structure
according to the logic of Section 2.3.4) and of the restructured liabil-
ity with the collar (ie, the new structure according to the logic just
recalled) are summarised in Figure 5.18.
To understand whether the transition from the original structure
of fixed-rate payments to the new structure provided by the collar is
effectively able to improve the funding conditions of the debtor, the
two alternative financial positions are compared from a probabilistic
point of view. In particular, the trajectory-by-trajectory technique
described in Section 2.3.4 is used to obtain the risk-neutral density
of the percentage variations in the funding costs associated with
the switch from the old liability to the new one of this example.
Figure 5.19 shows the shape of this density.
The sign of these variations is crucial in order to divide the density
into two complementary events, which can be defined as follows.

1. The funding costs associated with the new liability are lower
than those associated with the old liability.

2. The funding costs associated with the new liability are higher
than those associated with the old liability.
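Given pathwise discounted funding costs of the two liabilities, computed on the same simulated trajectories, the probabilities and conditional means of these two events can be estimated as in the following sketch (function and variable names are illustrative):

```python
import numpy as np

def scenario_table(costs_old, costs_new):
    """Probabilities and conditional mean percentage variations of the
    funding costs for the two complementary events.

    costs_old, costs_new: per-trajectory discounted funding costs of
    the old and the new liability on the same simulations.
    """
    var = 100.0 * (costs_new - costs_old) / costs_old   # % variation, per path
    lower = var < 0          # event 1: the new liability is cheaper
    higher = ~lower          # event 2: the new liability is more expensive
    p1, p2 = lower.mean(), higher.mean()
    m1 = var[lower].mean() if lower.any() else 0.0
    m2 = var[higher].mean() if higher.any() else 0.0
    return (p1, m1), (p2, m2)
```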


Table 5.1 Table of probabilistic scenarios with conditional means for the
exchange of the old fixed-rate liability with the new structured liability
embedding the collar

                                                        Average variation
                                        Probabilities   of the funding
Scenarios                               (%)             costs (%)

The funding costs associated with       36.4            −3.03
the new liability are lower than
those associated with the old
liability

The funding costs associated with       63.6            +3.76
the new liability are higher than
those associated with the old
liability

The probability table that indicates the probabilities and the con-
ditional mean values associated with the two listed scenarios is
reported in Table 5.1.
This table shows that surrendering the original fixed-rate liability
in favour of the new liability (restructured according to the interest
rate collar) improves the financial position of the debtor with a
probability of 36.4%; in these cases it leads to an average reduction
of the funding costs of around 3.03%. In the remaining 63.6% of
cases it would be better to remain in the old liability, since entering
the collar would increase the funding costs, on average, by around
3.76% with respect to the payment of cashflows indexed to the fixed
rate of 6.1%.
Overall, the probabilistic comparison therefore signals that the
proposed collar is not sufficiently likely to reduce the funding costs
of the debtor to justify the transition to the new liability.
Technically, considering the interest rate curve and the correspond-
ing volatility surface observed on the market in April 2011, the num-
bers in Table 5.1 indicate that, given the residual life of the debt
and the amortisation plan used, the proposed collar has a pair of min-
imum and maximum rates (ie, 4.5% and 8.5%, respectively) which,
due to the spread of 240bp, do not sufficiently protect the debtor from
unfavourable movements of the floating rate applied (the six-month
Euribor).
The message conveyed by the table of probabilistic scenarios is
supplemented by the additional indication of the initial theoretical
value of the collar, which is equal to €3,943. This positive theoretical


Table 5.2 Conditional values on the tails for the exchange of the old
fixed-rate liability with the new structured liability embedding the collar

                                                      Conditional
                                                      values (%)

Maximum reduction of the funding costs                −8.9
associated with the exchange
under extreme favourable conditions

Maximum increase of the funding costs                 +9.5
associated with the exchange
under extreme unfavourable conditions

value represents the amount that, on average and on an outstanding
debt of €914,299, the debtor should receive to be compensated for
the inefficient structure of the new liability with respect to the old
one. Moreover, in order to give an idea of the order of magnitude
of the differential costs associated with the collar under extremely
unfavourable and favourable conditions, the table of the conditional
values on the tails provides useful additional information, as shown
in Table 5.2.
This table gives the debtor a clear indication of the maximum ben-
efits and disadvantages which would arise from replacing their orig-
inal liability with the one modified by the proposed collar. Specifi-
cally, if things go very well, the collar will allow the debtor to reduce,
on average, the costs of their liability by 8.9%, while in the case of very
unfavourable dynamics of the interest rates it would increase the
costs for the debtor by 9.5% on average with respect to the fixed-rate
liability. It also follows that the information obtained from studying
the tails of the risk-neutral density of the percentage variations in
the funding costs associated with the exchange proposed confirms
that, compared with the fixed-rate debt, the collar features an overall
increase in the risks and, hence, also in the potential costs incurred
by the debtor.
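The conditional values on the tails can be estimated from the same simulated cost variations; in the sketch below the tails are taken, purely for illustration, as the observations beyond the alpha and 1 − alpha quantiles, which need not coincide with the tail definition used in the book.

```python
import numpy as np

def tail_conditional_values(variations, alpha=0.01):
    """Conditional means on the two tails of the density of the
    percentage cost variations (illustrative tail definition)."""
    lo_q, hi_q = np.quantile(variations, [alpha, 1.0 - alpha])
    max_reduction = variations[variations <= lo_q].mean()
    max_increase = variations[variations >= hi_q].mean()
    return max_reduction, max_increase
```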

1 A (long) interest rate collar is an interest rate derivative where the holder makes payments
indexed to a floating interest rate, but if the reference floating rate is below (above) a deter-
mined minimum (maximum) rate they cannot pay less (more) than the agreed minimum
(maximum). Therefore, it is a derivative that offers protection against an increase in the float-
ing interest rate above a certain upper threshold and, at the same time, if the floating rate
falls below a specific lower threshold, it requires the holder to pay a cashflow calculated on
this lower threshold. Technically, it is obtained (Hull 2009) from the combination of a long
position in an interest rate cap (whose strike rate determines the upper threshold) and a short
position in an interest rate floor (whose strike rate determines the lower threshold).

Conclusions

The literature is full of important contributions that apply the


quantitative methods of probability theory and stochastic calcu-
lus to the pricing of financial products and the measurement and
management of their risks by professional operators.
Using similar quantitative methods, this book defines a risk-
based approach that constitutes an objective and internally con-
sistent framework for addressing these issues and identifying an
information set that might be useful to non-professional operators,
primarily retail investors who buy non-equity products.
Retail investors make their selection from the different products
available on the market using a decisional paradigm that is based on
their liquidity attitude, their risk appetite, their budget constraints
and their performance objectives.
In light of this paradigm, the information needs of the average
investor relate to the knowledge of the time horizon of the product,
its riskiness over this horizon and, finally, its initial value and
potential performance.
This book aims to meet these information needs by adopting three
synthetic indicators (the "three pillars") that are complementary and
interdependent.
The first pillar, with the financial investment table, shows the
fair value of the product and, through comparison with the issue
price, quantifies the total costs incurred over the time horizon of the
investment.
The numbers shown in this table are determined in accordance
with standard pricing techniques, quite similar to those used in
market transactions between institutional operators. The main inno-
vation of this table is the recognition that the same information is
equally valuable for retail investors, especially in order to usefully
compare different products at their disposal. In fact, without the dis-
tinction between costs and fair value, the investor would make this


comparison on the basis of minimal (and sometimes even mislead-
ing) information, because products that have the same issue price are
not necessarily equivalent and, among products with different prices,
the cheapest is not necessarily the one with the lowest issue price.
For simpler and less risky products, the indication of the fair value,
flanked by deterministic return indicators, is sufficient to meet
the information needs of investors. However, in the case of non-
elementary return-target products, knowledge of the fair value alone
is too synthetic and omits important aspects related to the specific
choices of financial structuring and, above all, to the variability of
the potential performances.
For this reason, in the case of non-elementary products, the finan-
cial investment table is further detailed by disaggregating the fair
value into its risky and risk-free components. This decomposition
does not simply list the various components wrapped together in
the product (the bond-like component and one or more derivative
components), but isolates the part of the fair value that comes from the
bet inherent in the risks of the investment from the part that is instead
similar to a risk-free investment (that is, the part which, in line
with the principle of risk-neutrality, is only exposed to movements
in the interest rate curve).
To complete the financial investment table for non-elementary
products, the first pillar includes the table of probabilistic scenarios,
which gives a synthetic quantification of the performance risk of the
product with reference to the final moment of the investment time
horizon. The probability table is determined, exactly like the fair
value, starting from the risk-neutral density of the final values of the
product but, through appropriate techniques of reduction in granu-
larity that mitigate the model risk, it extracts more information from
the risk-neutral density. In fact, it indicates both the likelihood of
experiencing a loss and its average size, and, thanks to the superim-
position with a suitable numéraire (the risk-free asset), it also shows
the extent to which the product performs in a range that is compatible
with, lower than or greater than the risk-free asset. In this way, investors
can easily verify whether the level of the performances offered by
the product and their variability meet their personal performance
objectives.
In a risk-neutral environment, the risk-free asset is the natural
numéraire against which to compare all the products being offered


singularly, ie, outside any hypothesis of exchange with another
product which is already in the investor's portfolio. Conversely, if
the product is proposed for a possible exchange with a product
already held by the investor, the latter is clearly the only relevant
investment alternative. In these cases, the probability table com-
pares the densities of the two competing products, showing when
and how much, on average, one performs better than the other. A
similar methodology can highlight the opportunity to replace an
existing financial liability with a new debt which transforms the
structure of the original cashflows through the insertion of derivative
components.
The second pillar of the risk-based approach supplements the
information offered by the financial investment table and by the
table of probabilistic scenarios by explicitly displaying the degree of
risk that characterises the product throughout the entire investment
time horizon.
In fact, while the two tables of the first pillar refer to the first
and the last moments of the product's life, the degree of risk sum-
marises the temporal evolution of the risk and, hence, of the variabil-
ity of the product's returns over the entire period between these two
reference dates. By looking at this indicator, investors can immedi-
ately see whether the riskiness of the product is consistent with their
risk appetite.
The metric used to measure the riskiness of the product is the
annualised volatility of its daily returns. By comparing the value
assumed by this metric with a grid of seven volatility intervals,
each mapped to a qualitative risk class through clear and unequiv-
ocal adjectives, the degree of risk of the product can be identified
immediately.
The most interesting aspect of the methodology developed inside
the second pillar is the use of Garch diffusions for the calibra-
tion of the optimal grid of volatility intervals. In fact, by exploit-
ing the inherent flexibility of estimates relying on the continuous
limit of discrete Garch models, it is ensured that all intervals in the
grid share the same high statistical significance, since ex ante they
have equal (and small) chances to be violated. So if the product
is claimed to belong to a specific interval of the grid, its volatil-
ity must spend most of the time between the bounds of the inter-
val itself, otherwise some change in its risk profile has clearly


occurred. In this way, by setting proper triggering rules, it becomes


easy to immediately capture the alternation of different volatility
regimes and also to promptly detect migrations between different
risk classes, hence ensuring the continuing meaningfulness of the
information about the degree of risk along the entire time horizon
of the product.
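As an illustration of the dynamics involved, the sketch below simulates the diffusion limit of a GARCH(1,1) model with a simple Euler scheme; the parameters are purely illustrative and this is not the calibration procedure of the optimal grid.

```python
import numpy as np

def simulate_garch_diffusion(v0, omega, theta, xi, days, n_paths, seed=0):
    """Euler simulation of the GARCH(1,1) diffusion limit (Nelson, 1990):

        dV_t = (omega - theta * V_t) dt + xi * V_t dW_t,

    with V_t the instantaneous variance. Returns annualised volatility
    paths of shape (n_paths, days + 1)."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / 252.0
    v = np.full(n_paths, float(v0))
    out = [np.sqrt(v)]
    for _ in range(days):
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        v = v + (omega - theta * v) * dt + xi * v * dw
        v = np.maximum(v, 1e-10)   # keep the discretised variance positive
        out.append(np.sqrt(v))
    return np.array(out).T
```

The fraction of time such volatility paths spend inside the bounds of a grid interval is exactly the kind of quantity the triggering rules monitor.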
The third pillar of the risk-based approach is the recommended
investment time horizon. By expressing a recommendation on the
holding period of the product, this indicator helps investors to verify
whether this period is compatible with their liquidity attitude.
The determination of the recommended time horizon is affected
by the financial structure of the product. Return-target products or
products assisted by financial guarantees have an implicit time hori-
zon which unambiguously identifies the period for the assessment
of the degree of risk and the two reference dates to obtain the risk-
neutral density behind the financial investment table and the prob-
ability table. Risk-target and benchmark products have non-closed
structures; in the absence of an implicit time horizon, the third pil-
lar is therefore determined by choosing a prudential criterion of the
minimum time for the full amortisation of both initial and running
costs and by defining a robust methodology to give a suitable prob-
abilistic characterisation to such a criterion. This methodology relies
on the properties of the cumulative distribution function of the first
times that the value of the product hits the cost-recovery barrier.
In fact, detailed analysis of this function reveals that, under some
well-posed conditions, it is always possible to find a minimum rec-
ommended time horizon that ensures, both locally and globally, a
correct ordering of products with different riskiness measured, as
for the second pillar, in terms of volatility. With some simplification,
this methodology can also be usefully extended in order to deter-
mine a minimum time horizon that could supplement the implicit
one for return-target products endowed with liquidity features or
liquidability enhancements.
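A Monte Carlo sketch under geometric Brownian motion dynamics illustrates the object involved; the book works with a closed formula for this distribution, and the discrete monitoring used below slightly underestimates the passage probabilities.

```python
import numpy as np

def first_passage_cdf(s0, barrier, mu, sigma, years,
                      n_paths=20_000, steps_per_year=252, seed=0):
    """Monte Carlo estimate of the cumulative distribution of the first
    time a GBM-driven product value, starting below the cost-recovery
    barrier, touches it. Returns (time_grid, cdf)."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / steps_per_year
    n_steps = int(years * steps_per_year)
    s = np.full(n_paths, float(s0))
    hit = np.zeros(n_paths, dtype=bool)
    cdf = np.empty(n_steps)
    for i in range(n_steps):
        z = rng.standard_normal(n_paths)
        s *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        hit |= s >= barrier          # barrier touched by this date
        cdf[i] = hit.mean()
    return dt * np.arange(1, n_steps + 1), cdf
```

A minimum recommended horizon could then be read, for instance, as the first date at which this cumulative probability exceeds a chosen confidence level.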
The risk-based approach gives a rigorous and objective founda-
tion to the representation of the risk-reward profile of any non-equity
product. Even in the case of very complex structures, it looks into
the technicalities of the product's design, captures the embedded
risks and translates these risks into a clear grammar that, thanks
to the common denominator of the three pillars, allows investors


to understand and compare different products and, hence, to make
more enlightened investment decisions.
For these reasons, this book represents an original and valu-
able opportunity to rethink the entire field of information given
to investors with regard to non-equity products. As mentioned in
Chapter 1, one possibility would be to use the three pillars, as deter-
mined by the issuer in its capacity as the product's engineer, to prepare
a short product information sheet to be delivered as a guideline on
the proposed investment.
In this way, the relevant part of the information asymmetry that
penalises retail investors would definitely be removed. Moreover, by
acquiring the contents of the product information sheet, supervisors
could improve the quality of their interaction with issuers regarding
the coherence and the comprehensibility of the product's represen-
tation, and with distributors for the determination of more efficient
procedures to map the products and, consequently, to better meet
the needs of their clients.


REFERENCES

Abramowitz, M., and I. Stegun, 1964, Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables, National Bureau of Standards Applied Mathematics Series,
Number 55 (Washington, DC: US Department of Commerce).

Albanese, C., and G. Campolieti, 2006, Advanced Derivatives Pricing and Risk Management
(Elsevier).

Björk, T., 2009, Arbitrage Theory in Continuous Time (Oxford University Press).

Cox, J. C., J. E. Ingersoll and S. A. Ross, 1985, "A Theory of the Term Structure of Interest
Rates", Econometrica 53, pp. 385–407.

Duan, J., 1997, "Augmented GARCH(p, q) Process and Its Diffusion Limit", Journal of
Econometrics 79, pp. 97–127.

Geweke, J., 1986, "Modelling the Persistence of Conditional Variances: A Comment",
Econometric Reviews 5, pp. 57–61.

Glasserman, P., 2004, Monte Carlo Methods in Financial Engineering (Springer).

Harrison, J. M., and S. R. Pliska, 1981, "Martingales and Stochastic Integrals in the Theory
of Continuous Trading", Stochastic Processes and Their Applications 11(3), pp. 215–260.

Heath, D., R. A. Jarrow and A. Morton, 1992, "Bond Pricing and the Term Structure of
Interest Rates: A New Methodology for Contingent Claims Valuation", Econometrica 60(1),
pp. 77–105.

Heston, S. L., 1993, "A Closed-Form Solution for Options with Stochastic Volatility with
Applications to Bond and Currency Options", The Review of Financial Studies 6(2),
pp. 327–343.

Hull, J., 2009, Options, Futures and Other Derivatives, Seventh Edition (Englewood Cliffs,
NJ: Prentice Hall).

Hull, J., and A. White, 1990, "Pricing Interest-Rate Derivative Securities", The Review of
Financial Studies 3(4), pp. 573–592.

Karatzas, I., and S. E. Shreve, 2005, Brownian Motion and Stochastic Calculus, Second Edition
(Springer).

Markowitz, H. M., 1959, Portfolio Selection: Efficient Diversification of Investments (New York:
John Wiley & Sons).

Merton, R. C., 1974, "On the Pricing of Corporate Debt: The Risk Structure of Interest
Rates", Journal of Finance 29(2), pp. 449–470.

Milhøj, A., 1987, "A Multiplicative Parameterisation of ARCH Models", Working Paper,
Department of Statistics, University of Copenhagen.

Minenna, M., 2003, The Detection of Market Abuse on Financial Markets: A Quantitative
Approach, Quaderno di Finanza, Volume 54 (Rome: Commissione Nazionale per le Società
e la Borsa).

Minenna, M., 2006, A Guide to Quantitative Finance (London: Risk Books).

Minenna, M., G. M. Boi, A. Russo, P. Verzella and A. Oliva, 2009, A Quantitative Risk-
Based Approach to the Transparency of Non-Equity Investment Products, Quaderno di Finanza,
Volume 63 (Rome: Commissione Nazionale per le Società e la Borsa).

Nelson, D., 1990, "ARCH Models as Diffusion Approximations", Journal of Econometrics
45, pp. 7–38.

Øksendal, B., 2010, Stochastic Differential Equations: An Introduction with Applications
(Springer).

Pantula, S., 1986, "Modelling the Persistence of Conditional Variances: A Comment",
Econometric Reviews 5, pp. 71–74.

Revuz, D., and M. Yor, 1999, Continuous Martingales and Brownian Motion, Third Edition
(Springer).

Stroock, D. W., and S. R. S. Varadhan, 1979, Multidimensional Diffusion Processes (Springer).


Index

(page numbers in italic type relate to tables or figures)

A
adaptivity, 91, 122, 151
American options, 19
automatic asset manager, 81–2
  model for, 83–6, 86

B
benchmark products, 6, 10, 18, 148, 162, 278–81, 279, 280, 281
  minimum time horizon for, 163–263
  see also risk-target products
Bermudan options, 19
Brownian motion, 84, 87, 90, 98, 163
  geometric, 18, 163–4, 178, 185, 220, 271
  product-specific, 187
  standard, 178–87

C
callable bonds, 19
conditional values on the tails, 64–5, 66, 297
constant proportion portfolio insurance (CPPI), 20, 56
contingent-claim evaluation theory, 20, 32
correct sorting, 165, 233
  global, 237
  local, 233
cost-recovery event, 38
  strong characterisation of, 166–73
  weak characterisation of, 173–8
cumulative probability of first-passage times, closed formula for, 178–87

D
degree of risk, 4, 7, 10, 17, 75, 77–160, 261
  and migrations, 150–6, 154
  and optimal grid, methodology to calibrate, 80–3
  and risk classification, 147–50
  and volatility, 78
    model to simulate, 87–9, 89
    and parameter estimation, 107–24
    and prediction intervals, 4, 103–7, 157
    predictive model for, 89–124, 123
diffusive Garch, xvi, 123
Dirac delta effect, 68–9

E
elementary products, 53
  elementary bonds, 20
  first pillar and, 67–73
Euler scheme, 134, 220

F
financial investment table, 2–3, 13–14, 15, 20, 25, 38, 49, 67, 73, 161, 270, 276, 282, 284, 299–302 passim
  increasing the detail of, 26–30
  risky component, 3, 14, 25, 28, 30, 68, 289
  risk-free component, 6, 14, 25, 28, 30, 73–4, 282, 288, 290
  price unbundling via, 20–4
first-passage times, 5, 165, 174
  cumulative probability distribution of, 163
    closed formula for, 178–87
  and drift effect, 200
  and volatility effect, 201
fixed-coupon bonds, 26, 50, 70–1, 149, 150
floating-coupon bonds, 27, 30, 31, 31, 33, 50, 70, 71

G
Girsanov's Theorem, 182

I
illiquid products, 22, 264–5
interest-rate collar, 276, 293–7, 294, 295, 296, 297
internal rate of return, 15–16, 69–70

L
liquidability, 5, 163, 263
  enhancement, 22–3, 264, 266, 270, 282, 284, 302
liquidity, 5, 264
  attitude, 1–2, 7, 162, 299, 302
  features, 265, 270, 302

M
management failures, 4, 82, 83, 90–1, 151, 153, 157–8
  definition of, 126–31
  and relative widths, 131–9
market feasibility, 81–2, 83, 86, 90, 125, 128, 129, 157
Markets in Financial Instruments Directive (MiFID), xix, xx
Markov chain, 92, 93, 94, 97
  discrete, holding time of, 96
  discrete, jump time of, 96
M-Garch(1,1), 92
  diffusion limit of, 110–124
  distributive properties of, 103–7
  parameter estimation, 107
migration, 77–80 passim, 83
  detecting, 150–6, 302
minimum times, 177, 237–47
model risk, 33, 300
Monte Carlo simulation, 17, 19, 22, 177, 252, 256, 259

N
non-elementary products, 14, 24
  and first pillar, 24–67
non-equity exchange structures: case of a collar replacing a fixed-rate liability, 293–7, 294, 295, 296, 297
  probabilistic scenarios for, 53–67, 55, 57, 58, 60, 61, 62, 63, 64, 65
non-equity products, xiv, xvii, 1, 13, 77, 161, 275
  risk-neutral density of, 16–20, 31, 31, 33, 39, 40, 42, 46, 51

O
optimal grid, 4, 17, 79, 80, 147, 154, 155
  and full space of volatilities, 144–7
  and heterogeneity, 156
  and homogeneity, 130, 146, 156
  and management failures, 124–47
  methodology to calibrate, 80–3
  and non-abnormality, 130, 145, 150, 157
  and reduced space of volatilities, 139–44
option-based portfolio insurance (OBPI), 20, 24

P
packaged retail investment products (PRIPs), xx–xxi
payoff diagrams, 15–16
performance risk, 3, 7, 20, 35, 45, 53, 61, 70, 74–5, 158, 300
portfolio replication principle, 3, 14, 27, 74
price unbundling, 3, 9
  and elementary products, 67–73
  via financial investment table, 20–4
  and non-elementary products, 24–67
  table of, 30–53
  and probabilistic scenarios, 13–76
product information sheet, 8–9, 10, 276, 277, 279, 280, 282, 283, 287, 288, 290, 291, 303
puttable bonds, 19

R
Radon–Nikodým, 178, 182
recommended investment time horizon, 4–5, 7, 73, 77, 161–273
  and asymptotic analysis, 187–96
  and cost-recovery event, 38
    cumulative probability distribution of, 163
    closed formula for, 178–87
  and extensions to general dynamics for the process {St}t≥0, 252–9
  and first-passage times, 5, 165, 174
  and function of minimum times, 177
  and global correct ordering, 248–9
  implicit, 161, 263, 266–7, 269, 270, 284, 302
  and local correct ordering, 233–7
  minimum, 163–263
    for return-target products, 263–9
    for risk-target and benchmark products, 163–263
  and sensitivity analysis, 196; see also sensitivity analysis
  strong characterisation of, 166–73
  technical remarks concerning, 259
  weak characterisation of, 173–8
reduction in granularity, 3, 34, 37, 46, 61, 300
regulation of retail investment products, evolution in, xix–xxi
  first phase of, xix
  second phase of, xx
  third phase of, xx
replicating portfolio, 21
return-target products, 6, 15, 18, 22, 27, 28, 67, 149, 161, 276, 281–90, 302
  elementary, 74
  and index-linked certificate, 290–3
  information sheets for, 283, 287, 291
  non-elementary, 27, 28, 30, 34, 35, 53, 67
  non-equity, 32
  and plain-vanilla bond with significant credit risk, 281–6
  recommended time horizon for, 263–9
  and variable-proportion portfolio insurance (VPPI), 286–9, 288, 289
risk-based approach, 1, 2–3, 4, 8, 9, 10
  applications of, 275–97
    benchmark product, 278–81, 279, 280, 281
    non-equity exchange structures: the case of a collar replacing a fixed-rate liability, 293–7, 294, 295, 296, 297
    plain-vanilla bond with significant credit risk, 281–6
    return-target products: case of an index-linked certificate, 290–3, 291, 292
    risk-target product, 277–8, 278
    variable-proportion portfolio insurance (VPPI), 286–90, 287, 288, 289
  first pillar of, 13–76
  second pillar of, 77–160
  third pillar of, 161–273
risk-free asset, 15, 21, 22, 26, 32, 35–7 passim, 39–45 passim, 47–8, 49–54, 276, 282–3, 288–9, 300
  density of, 41, 42
risk-free component, 6, 14, 25, 28, 30, 73–4, 282, 288, 290
risk-neutral numéraire, 21, 35, 40–1, 44, 54, 74, 300
risk-neutrality, principle of, 153, 258, 300
  and no-arbitrage principle, 16
  and risk-neutral measure, 13, 16, 20, 21–2, 26
risk-target products, 10, 18, 83, 258, 275, 277–8, 278
  minimum time horizon for, 163–263, 279
  see also benchmark products
risky component, 3, 14, 25, 28, 30, 68, 289

S
Second Fundamental Theorem of Asset Pricing, 21
sensitivity analysis, 196–232, 197, 198, 199, 200, 202
  and first-order partial derivatives, 204–13
  concerning volatility, limit representations of, 213–26
  and second-order partial derivatives, 226–32
sequential filtering, 1, 7
Skorokhod space, 95
strong Markov property, 179
structured liability, 66, 276, 294, 295, 296, 297
superimposition technique, 35, 37, 42, 44, 49, 51, 66, 282, 284, 288, 288, 292

T
table of probabilistic scenarios, 3, 7, 15, 16, 25, 30–53, 44–53, 57, 60, 61, 67, 72, 74, 161, 270, 282, 288, 290, 296–7, 300, 301
  macro-events considered by, 43
  methodology to build, 38–53
trajectory-by-trajectory technique, 29, 37, 54, 66, 74

V
variable-proportion portfolio insurance (VPPI), 275, 276
  and return-target products, 286–90
volatility/volatilities, 78
  full space of, and optimal grid, 144–7
  model to simulate, 87–9, 89
  predictive model for, 89–124
    and parameter estimation, 107
    and prediction intervals, 4, 103–7, 157
  reduced space of, and optimal grid, 139–44

W
what-if scenario, xx, 15–16

i i

i i

Anda mungkin juga menyukai