
The PRM Handbook – III.A.1 Market Risk Management

III.A.1 Market Risk Management


Jacques Pézier1

III.A.1.1 Introduction
What is market risk and whom does it concern? What do we mean by market risk management,
and what does a market risk manager do in a day at the office? These are not theoretical
questions with only right or wrong answers; they are practical questions that every financial as
well as non-financial firm must grapple with; what appears to be a reasonable answer depends very
much on the activity, environment, culture, objectives and organisation of each firm.

In this chapter students are introduced to the four major tasks of risk management applied to
market risks, namely, the identification, assessment, monitoring and control/mitigation of market
risks. The difficulties faced in carrying out these tasks vary from business to business. We
therefore examine three typical activities – fund management, banking and manufacturing – to
illustrate a broad spectrum of problems and state-of-the-art approaches. We aim to develop a
conceptual and largely qualitative understanding of the topic. More detailed quantitative analyses
are given in subsequent chapters.

III.A.1.2 Market Risk


To facilitate the analysis and understanding of risks faced by financial firms it is common practice
to classify them into major types according to their main causes. Thus, banking risks are typically
classified as being either market, credit or operational in origin. Broadly speaking, market risk
refers to changes in the value of financial instruments or contracts held by a firm due to
unpredictable fluctuations in prices of traded assets and commodities as well as fluctuations in
interest and exchange rates and other market indices.2 It is not clear to what extent market risks
should be or can be considered for less liquid assets such as real estate or banking loans.
Accountants usually shy away from attributing ‘fair values’ to such assets in the absence of
reliable and objective market values, and consequently market risks are difficult to assess on
illiquid assets. However, when the core activity of a business is to hold portfolios of illiquid
assets, it would be dangerous to ignore their potential change in value.

1 Visiting Professor, ISMA Centre, University of Reading, UK.
2 By contrast, credit risk refers to changes in value of assets due to changes in the creditworthiness of an obligor and, at
the limit, losses due to an obligor failing to meet its commitments; operational risk is defined as the risk of loss resulting
from inadequate or failed internal processes, people and systems or from external events.


Banking supervisors have taken a special interest in codifying risks and in setting standards for
their assessment. Their purpose is essentially prudential: to strengthen the soundness and
stability of the international banking system whilst preserving fair competition among banks. To
this end they have designed a set of minimum regulatory capital requirements for all types of
traded assets, including some illiquid ones, based on detailed definitions and assessments of risk.3
By and large, banking supervisors have erred on the side of objectivity rather than
comprehensiveness when defining market risks. Banks must allocate their assets to either a
banking book or a trading book; market risks, bar exceptional circumstances, are recognised only
in the trading book. A trading book consists of positions in financial instruments and
commodities held either with a trading intent or to hedge other elements of the trading book; all
other assets (e.g. loans) must, by default, be placed in the banking book. To be able to receive
trading book capital treatment for eligible positions, some further basic requirements must be
met such as clearly documented trading policies, daily mark to market (or mark to model) of
positions and daily monitoring of position limits. To remain prudent, banking regulators have
always inflated the capital treatment of credit risks in the banking book to cover for hidden
market risks.

III.A.1.2.1 Why Is Market Risk Management Important?


Banking supervisors also hope that regulations will promote the adoption of stronger risk
management practices, which they view as a worthwhile goal.4 Working in collaboration with the
industry, they have certainly put these issues in the limelight and progress has been made.
However, the development and adoption of better risk management practices in banks remains
an objective ultimately beyond the reach of banking supervisors. It requires enlarging the
purpose of risk management from a purely prudential objective (setting a limit on insolvency
risks) to a broader economic objective (balancing risks and returns); it also requires enlarging the
scope of market risk assessment to those areas that have been largely ignored by regulators
because accrual accounting practices hide the risks and/or the risks are difficult to quantify
objectively.

Outside financial services there are no prudential regulations offering guidelines for the
management of market risk, but market risk nonetheless remains a major determinant in the
success or failure of most economic activities and the welfare of people in free market
economies. Suffice to observe how variations in the price of energy affect manufacturing and
transportation, changes in interest rates affect the cost of mortgages and thereby property prices,
and the performance of securities markets affects pensions. The absence of prudential
regulations for non-financial firms gives an opportunity to reconsider the best way to recognise
and tackle market risks. And what should become clear is that, satisfying as it may be to
categorise risks according to causes, there is no classification system by which every risk would
fall into one causal category and one category only, nor any management system that could
control one type of risk without affecting others.

3 See Basel Committee on Banking Supervision (BCBS, 1996, 2004a). Insurance companies are subject to different
solvency tests; harmonisation of insurance company solvency tests with minimum capital requirements for banks is a
long-term aim for regulators. Pension funds and other funds designed to meet strict liabilities are also subject to
solvency tests by the relevant regulatory authorities.
4 See BCBS (2004a, paragraphs 4 and 720).

III.A.1.2.2 Distinguishing Market Risk from Other Risks


Some examples will illustrate how market, credit and operational risks are interrelated. On a
macro-economic scale, consider the technology bubble that burst at the turn of the millennium.
In 1996 Alan Greenspan, the Fed Chairman, described as ‘irrational exuberance’ the expectations
placed on new technologies; indeed, they did not perform as well or as rapidly as predicted – a
business or operational risk problem. A wave of corporate failures followed – a credit risk – and
the Fed as well as many other monetary authorities across the world reacted by lowering interest
rates – a market risk for bond portfolio holders. For an example on a micro-economic scale,
consider a firm exposed to exchange-rate fluctuations – a market risk. It may seek cover by
entering into a forward exchange-rate agreement with a bank, but it thereby takes a credit
exposure on the bank if the bank has to pay the firm under the contract. Or consider a bank that
makes a floating rate loan to a firm, thus taking primarily a credit risk on the firm; the bank may
seek some degree of credit cover by asking for securities or property to be placed as collateral,
but the value of the collateral will be subject to market risk.

Distinguishing market risks from other risks and managing them separately from and
independently of other risks and profit considerations is therefore only valid up to a point. In
any organisation, a balance must be struck between the degree of specialisation of risk
management functions, so that market risks can be distinguished from other risks and managed
separately, and the degree of interaction and coordination between functions so that they can
operate coherently.

III.A.1.3 Market Risk Management Tasks


The Basel Committee on Banking Supervision has given a great deal of thought to the role and
organisation of a risk management function. It distinguishes four main tasks that it defines as
identification, assessment, monitoring and control/mitigation of risks.5 These tasks are relevant
for market risks, as they are for most other risk types and for non-financial institutions as well as
for banks.

5 See publications on the New Basel Accord, in particular BCBS (2004a, paragraphs 725–745 and 663(a)).


Identification is the necessary first step, but it may be less obvious than first thought. Real-world
problems do not come neatly defined as in textbooks; they have to be recognised and delineated.
Exposures to market risks can easily be overlooked because of either over-familiarity (risks we
have always lived with without doing anything about them) or, at the other extreme, ignorance of
new risks. The first case is all too common; for example, many corporates do not hedge currency
exposures because they are not sure how to assess them or how to hedge them. In the end, it
may be easier for a corporate treasurer to remain passive and blame the currency markets for a
loss than to be active and have to explain why a loss was made on a hedge, the two circumstances
having about equal probabilities. On the other hand, lack of familiarity may lead to over-cautious
reactions. For example, investing in foreign equity markets, even when they are denominated in
the same currency as the domestic market, is generally considered more risky than investing in
the domestic equity market. But not recognising a new risk or combination of risks may be the
greatest danger.6 The managers of the hedge fund LTCM, including two Nobel prize laureates in
economics and finance, knew almost everything that could be known about market risks, but
they were caught unawares by an unusual combination of events: the repercussions of the
Russian bond crisis of August 1998, diminished liquidity in major bond markets because of the
withdrawal of a large market maker, and difficulty in raising new funds having just returned some
capital to shareholders.

Assessment is the second step. The word initially chosen by the BCBS was ‘measurement’, but it
has wisely been replaced by ‘assessment’ in more recent publications. Indeed, risks are not like
objects that can be measured objectively and accurately with a simple measuring tape. Risks are
about future unexpected gains or losses. The term ‘assessment’ reflects the need for a statistical
model, that is, a coherent and relevant set of assumptions and parameters, some being supported
by past evidence (e.g. former loss events) and others being chosen for the purpose of the exercise
(time horizon, trading strategy, etc.). Banking supervisors have set qualitative and quantitative
standards for the assessment of market risks to suit their aim, namely, the determination of
prudent minimum capital requirements for banks. But internally, banks and other firms should
take a wider view to choose standards that suit their own situation and objectives. In the next
sections we shall illustrate how different assessment standards may be suitable for different
businesses lines. But we shall not go into detailed quantitative techniques; these are covered in
the following chapters.

6 Readers may remember how the US Secretary of Defence, Donald Rumsfeld, was once derided for trying to explain
at a press conference that there are knowns and unknowns and that among the unknowns there are known unknowns and
unknown unknowns, the latter being the most dangerous. This is in fact old military lore: in the US Navy unknown
unknowns are colloquially called ‘unk-unks’ and said to rhyme with sunk-sunk. Of course, Socrates had put this point
across more elegantly when he said: ‘Wisest is the man who knows what he does not know.’


Monitoring refers to the updating and reporting of relevant information. Exposures and results can
be monitored. Risks themselves cannot be monitored, but they should be frequently reassessed.
Firms, strategies, markets, competition, technology and regulation all evolve, and therefore
market risks also change over time. Even in a steady-state situation more information can be
collected over time to develop a better understanding of market risks. Monitoring is particularly
important when hedging strategies are in place so that one can verify the efficiency of these
strategies and update the corresponding risk models.

Finally, control has been replaced by control/mitigation in the latest publications of the BCBS.
Control conveys too strongly the impression that market risks are intrinsically bad and therefore must
be subject to limits, and a key responsibility of the risk manager is to verify that limits are not
exceeded or, if they are, to blow the whistle. Mitigation has a wider meaning than control; it
indicates that (i) there is a trade-off between risk and return and an optimal balance should be
sought, and (ii) there are ways to manage market risks actively that need to be investigated.

How each of these tasks should be carried out by market risk managers and what specific
problems they may encounter depend upon the business at hand. It would be ineffectual to give
general answers to these questions. Rather, we shall explore three business types and show how
the three tasks of market risk identification, assessment and control/mitigation vary between
them.7 But first, a few comments about how the risk management function should be organised.

III.A.1.4 The Organisation of Market Risk Management


Banking regulators have put forward a few general recommendations for the organisation of the
risk management function. They reflect a general consensus in the banking industry and are
probably valid as well for many other businesses.8

(i) The risk management function should be part of a risk management framework and
policies agreed by the board of directors. The board and senior managers should be
actively involved in its oversight.
(ii) The risk management function should operate independently of the risk/profit-
generating units; in particular, it should have its own independent sources of
information and means of analysis. The risk management function should be given
sufficient resources to carry out its tasks with integrity.
(iii) The risk management function should produce regular reports of exposures and
risks to line management, senior management and the board of directors. Non-
compliance with the risk management policies should be communicated immediately.
(iv) The risk management process should be well documented and audited at regular
time intervals by both internal and external auditors.

7 We leave aside in this chapter the more routine and easily understood ‘monitoring’ task.
8 The following four bullet points are not direct quotes but a summary by the author of recommendations that have
appeared in various BCBS publications.

It is clear from these recommendations that the risk management function should be separate
from and independent of risk-taking line management functions in the front office and support
functions in the back office but should be in close communication with them. This is why most
banks locate the risk management function in a separate ‘middle office’. The middle office must
receive information on exposures from the front office in a timely fashion, but it should use its
own independent information sources for prices and derived parameters such as volatilities and
correlations, and it should use its own models to assess and forecast market risks. The middle
office should produce regular (at least daily for banks) market risk reports for the front office and
for senior management, containing detailed and aggregate risk estimates and comparisons against
limits. These market risk reports are usually combined with credit exposure reports and profit
attribution analyses where exceptional gains or losses as well as potential risks are explained.
These reports must be immediately verified and approved by designated front office and senior
managers. The middle office is also often responsible for producing statutory risk reports for
banking supervisors.

The risk management function in financial firms is also normally in charge of preparing market
risk management policies – to be submitted for the approval of the board – and of designing,
assessing and recommending hedging strategies. It may also be required to calculate provisions
and deferred earnings, to design and implement stress test scenarios, to establish controls and
procedures for new products and to verify valuation methodologies and models used by traders.
In a few instances, market risk managers may be asked to implement global hedging policies that
would not sit naturally within any existing department. More recently, market risk managers have
been asked to contribute to the analysis of the optimal level and structure of their firm’s capital
(as compared to the evaluation of minimum regulatory capital) and to risk budgeting (also called
‘economic capital allocation’) with the aim of improving risk-adjusted return on capital.

Proper resources, independence and lines of communication to senior management and
ultimately to an executive director on the main board are crucial for the integrity, credibility and
efficiency of the risk management function in financial institutions that aim to derive a profit
from taking market risks. Supporting and empowering the risk management function are lesser
problems in firms that do not seek market risks but would rather avoid them. However, unless
there is a clearly defined and adequately supported market risk management function in these
firms, market risks may not be properly appreciated and managed.

III.A.1.5 Market Risk Management in Fund Management


III.A.1.5.1 Market Risk in Fund Management
The core activity of fund management is to take market risks9 with the expectation of generating
adequate returns.10 In most countries traditional funds are subject to strict regulations (about the
type of securities in which they can invest, disclosure requirements, etc.) that are aimed at
protecting investors. That is particularly so for pension funds that should provide long-term
security to their members and, in return, enjoy certain tax advantages. Fund managers also often
choose to limit and specialise themselves further according to market sectors or investment
strategies, leaving to investors the choice to allocate their savings among funds and manage their
own portfolio diversification; for example, specialist funds may describe themselves as ‘UK
equity’, ‘capital guarantee’ or ‘index tracker’ funds.

But specialisation and constraints can only limit potential returns so, starting in the 1970s, private
investment pools were created to offer to wealthy individuals, who might be more willing to take
risks, the promise of greater returns by avoiding traditional fund constraints and regulations – for
example, they can use short sales, derivative products and leverage. Interestingly, these new,
unfettered funds became known as ‘hedge funds’ because many of them used trading strategies
based on spreads between long and short positions rather than pure directional bets. Their
popularity grew in the mid-1990s and even more so when the technology bubble burst and
traditional funds’ returns inevitably tumbled. Total assets under hedge fund management may
soon pass the trillion dollar mark.11

Funds take market risks for the potential benefit of their investors. Fund managers themselves
are only indirectly affected by market losses. Their income is usually a set percentage of the value
of assets under management (plus some participation in profits in the case of hedge funds); it is
not directly affected by losses. The market risks are borne by the investors. But the reputation of
fund managers and therefore their ability to retain existing investors and to attract new ones
depends on their ability to manage their risks (and their clients). It is therefore crucial that (i)
fund managers explain to their clients the risks they are taking, (ii) clients agree formally the terms
and conditions of their investment, and (iii) fund managers keep to the terms that have been
agreed.

9 Funds also take credit and other risks but, bar special cases (e.g. a fund investing in a few high-yield corporate bonds
or a certain emerging market), most funds invest in a large number of liquid, good-quality securities and therefore
credit risks are less important than market risks.
10 In some cases returns must be sufficient to meet certain liabilities (e.g. pensions); in others, the fund managers’
objective will be to maximise some risk-adjusted performance measure.
11 By comparison, in mid-2004, total worldwide bond and equity markets were valued at about $70 trillion; more than
half these assets were managed by institutions. In the USA, mutual funds alone managed about $7 trillion of assets.

III.A.1.5.2 Identification
It should be an easy task for fund managers to identify market risks because they normally have
chosen deliberately to take those risks. Nonetheless some risks may be overlooked, especially in
funds following sophisticated strategies. In general, funds following spread or arbitrage type
strategies will have reduced primary directional risks but will have increased exposures to
secondary risks. For example, if a fund is allowed to short securities, the uncertainty in the repo
cost incurred over the long term may be quite considerable, as well as difficult to estimate.
Likewise, the spread between two similar securities, say A and B shares of a company, may be
drastically affected by legal, tax or regulatory changes. It would not be much consolation to
decide that such events are operational rather than market risks if they have not been foreseen.

Liquidity risk is another relevant concern, particularly for hedge funds.12 Some assets may not be
bought or sold at the anticipated price because the transaction is too large compared to the
market appetite. Traditional funds are bound by regulations to hold only highly liquid positions.
Hedge funds, on the other hand, do not have such constraints and may end up holding relatively
large positions in specific securities. If in addition they are highly leveraged, they can easily be
thrown into a momentary cash-flow squeeze or even a terminal problem by a liquidity crisis. We
have already referred to LTCM as an example.

When funds have to meet specific liabilities – and many do13 – managers seek to maintain a stable
surplus of assets over liabilities and should therefore be concerned by possible market risks on
the liability as well as on the asset side. Actuarial practices and accounting standards have
generally overlooked or hidden these risks in the past but new rules are now coming into effect
that bring them to the fore. For example, in the United Kingdom, Financial Reporting Standard
17 (FRS 17) prescribes14 that assets and liabilities in company pension schemes be immediately
recognised on the company balance sheet at their market value for assets or present value based
on relevant gilt rates for liabilities.

12 Liquidity risks deserve to be analysed separately from market risks, but the two are closely related. The liquidity of a
security can be characterised by its average daily trading volume. Usually, the bid–offer spread increases rapidly with
the size of a transaction relative to the average daily trading volume when that fraction is significant. Exceptionally,
there are securities that do not trade regularly and yet can be traded in large single blocks without putting undue
pressure on their price. This characteristic is commonly referred to as market ‘depth’.
13 For example, funds supporting defined benefit pension schemes, insurance policies (life or property and casualty) or
backing the issuance of guaranteed investment contracts (GICs).
14 FRS 17 becomes mandatory in the U.K. at the same time as the new International Accounting Standards (IAS)
become mandatory for companies listed on European Union stock exchanges, that is, for accounting periods ending on
or after January 1, 2005. FRS 17 stipulates immediate recognition of gains and losses on a company pension scheme
but in a secondary statement of gains and losses called ‘Total Recognised Gains and Losses’ rather than in the Profit
and Loss account. IAS 19, the international standard relating to employee benefits, has moved in the direction of FRS
17. Financial Accounting Statement No. 87 (FAS 87), the equivalent statement under U.S. generally accepted
accounting principles (U.S. GAAP) issued in 1985, lags behind IAS 19 and FRS 17 both in the application of fair
valuation and the rapid recognition of gains and losses.

III.A.1.5.3 Assessment
The assessment of market risk is now a very well-developed activity in the fund management
industry. It has become part and parcel of performance assessment and, if not always done
thoroughly ex ante by fund managers, it is certainly done ex post by a number of analysts in order
to compare the so-called risk-adjusted performance of funds.

Ex-post analyses are usually pure statistical analyses of time series of returns. They produce
estimates of return distributions. These estimates are then fed into risk-adjusted performance
measures (RAPMs), the most common RAPM being the Sharpe ratio, that is, the expected excess
return over the risk-free interest rate divided by the standard deviation of return, both on an
annualised basis. There are some simple arguments why investors should prefer funds with the
highest Sharpe ratios. However, Sharpe ratios may lead to unwarranted conclusions if applied to
the comparison of funds with significantly different return distributions. For example, whereas a
well-diversified traditional fund holding long security positions only may be expected to exhibit
approximately log-normally distributed returns, a fund selling out-of-the-money options or
implementing a dynamic strategy with similar consequences should exhibit a significant
downward skewness and excess kurtosis of long-term returns. The Sharpe ratio would be
inadequate to compare the performance of these two funds, but a generalised Sharpe ratio or
some other RAPM accounting for skewness and kurtosis might do. See Chapter I.A.1 for a full
discussion of RAPMs.
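As a minimal illustration (not part of the original Handbook text), the annualised Sharpe ratio can be estimated from a sample of periodic returns as in the following Python sketch; the function name, the synthetic return series and the 3% per annum risk-free rate are purely illustrative assumptions.

import numpy as np

def annualised_sharpe(returns, risk_free_rate=0.0, periods_per_year=12):
    # Sharpe ratio: mean excess return over the risk-free rate divided by the
    # standard deviation of returns, both expressed on an annualised basis.
    returns = np.asarray(returns, dtype=float)
    excess = returns - risk_free_rate              # per-period excess returns
    mean_excess = excess.mean() * periods_per_year
    volatility = excess.std(ddof=1) * np.sqrt(periods_per_year)
    return mean_excess / volatility

# Illustrative use: 36 synthetic monthly returns and a 3% p.a. risk-free rate
rng = np.random.default_rng(1)
monthly_returns = rng.normal(0.008, 0.04, size=36)
print(annualised_sharpe(monthly_returns, risk_free_rate=0.03 / 12))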

Because many traditional funds are limited in their choice of securities and/or investment
strategies, they prefer to be judged on relative rather than absolute performance.15 Depending on
their strategy they choose or create a benchmark and estimate their risk-adjusted performance
relative to the benchmark. The standard deviation of returns relative to the benchmark is called
the tracking error. The RAPM of choice is the ratio of the average excess return relative to the
benchmark over the tracking error; it is called the ‘information ratio’ or ‘appraisal ratio’. Often
the analysis of performance is extended to a full performance attribution analysis to explain
which strategies and which changes in market factors have contributed to profits and losses.
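A similar sketch, again with hypothetical function names and data, shows how the tracking error and the information (or appraisal) ratio relative to a benchmark might be computed from periodic return series.

import numpy as np

def information_ratio(fund_returns, benchmark_returns, periods_per_year=12):
    # Active returns are the fund returns in excess of the benchmark returns;
    # the tracking error is their annualised standard deviation and the
    # information ratio is the annualised mean active return over that tracking error.
    active = np.asarray(fund_returns, dtype=float) - np.asarray(benchmark_returns, dtype=float)
    mean_active = active.mean() * periods_per_year
    tracking_error = active.std(ddof=1) * np.sqrt(periods_per_year)
    return mean_active / tracking_error, tracking_error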

Ex-post assessments are certainly useful for comparison and analysis of returns as well as to
check ex-ante assessments, but they do not replace the need for ex-ante assessments. Ex-post
assessments lack reliability and relevance because they are based on limited information –
typically, a few years of monthly returns – and they are not forward-looking. Even if return data
were available on a much more frequent basis, daily for instance, they could only lead to a more
statistically accurate forecast of short-term returns. Methods such as exponentially weighted moving
averages (EWMA) and GARCH (see Section III.A.3.4) have proved useful to estimate daily risks
in financial markets exhibiting time-varying volatilities. But estimates of short-term volatilities
have little relevance for long-term risks when these are governed by a specific investment strategy
such as capital protection, and consequently short-term returns are not mutually independent.

15 Although, as Warren Buffet said, ‘You cannot eat a relative performance sandwich’, comparison to peers or to a
chosen benchmark rather than absolute performance is seen as a clear indicator of skills and a powerful source of
motivation.
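To make the EWMA recursion mentioned above concrete, the following sketch updates a daily variance estimate using sigma²(t) = lambda·sigma²(t−1) + (1 − lambda)·r²(t−1); the decay factor of 0.94 and the 252-day annualisation are common market conventions rather than prescriptions of this chapter, and the seeding choice is ours.

import numpy as np

def ewma_volatility(daily_returns, decay=0.94, trading_days=252):
    # Recursive exponentially weighted moving average estimate of daily variance,
    # seeded here with the sample variance of the first 20 observations.
    r = np.asarray(daily_returns, dtype=float)
    variance = r[:20].var()
    for ret in r[20:]:
        variance = decay * variance + (1.0 - decay) * ret ** 2
    return np.sqrt(trading_days * variance)   # latest estimate, annualised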

It is only on the basis of ex-ante assessments of risks that fund managers can check and justify
that they are adhering to their management mandate as described in a mutual fund prospectus or
agreed with trustees or shareholders. Of course, ex-ante assessments are more difficult. One
must rely on assumptions about future market behaviour and sometimes introduce a degree of
subjectivity. One must also assume a trading strategy complete with limits and contingency
plans. Too often ex-ante analyses are carried out using standard commercial models without
sufficient questioning of the assumptions contained in these models. A typical error, for
example, is to assume that future departures from the performance of a benchmark will be small
if the tracking error has been small in the past. The problem is that, ex post, the tracking error is
usually calculated on de-trended return series, but if the composition of the portfolio being
evaluated is significantly different from the composition of the benchmark, the two return series
may well have different trends.

III.A.1.5.4 Control/Mitigation
If star ratings from one to three were to mark the degree of difficulty of a task, the identification
of market risks in fund management should be attributed only one star, risk assessment two stars
and control/mitigation three stars. Control of limits is not much of a problem, but the design of
a risk mitigation strategy is as complex as the design of the investment strategy itself. In fact, the
two cannot be separated except in a few special circumstances.

III.A.1.5.4.1 Selective Hedging


As a first special case, some undesirable risks may have been acquired as part of a package and
need to be reduced. A typical example is that of a fund investing in a particular industry sector
worldwide but wishing to maintain currency exposures to a minimum. In this case forward
currency contracts can be used as hedges, but these hedges will have to be readjusted as a
function of changes in foreign-denominated asset values and rolled over regularly. The
corresponding costs and residual uncertainties will need to be estimated.
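As an illustration of how such a hedge might be readjusted (the function and figures below are hypothetical, not from the Handbook), the forward sales in each currency are simply reset to match the current foreign-denominated asset values at every rebalancing date.

def rebalance_currency_hedges(asset_values_fc, hedged_notionals_fc):
    # For a full hedge, the notional sold forward in each currency should equal
    # the market value of the assets denominated in that currency; the function
    # returns the additional amount to sell (negative = buy back) per currency.
    adjustments = {}
    for ccy, asset_value in asset_values_fc.items():
        adjustments[ccy] = asset_value - hedged_notionals_fc.get(ccy, 0.0)
    return adjustments

# Example: EUR assets have risen from 50m to 52m since the last rebalance,
# so a further 2m EUR is sold forward; the JPY hedge is left unchanged.
print(rebalance_currency_hedges({'EUR': 52e6, 'JPY': 1.1e9},
                                {'EUR': 50e6, 'JPY': 1.1e9}))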


A more complex case is that of positioning a bond portfolio to take advantage of some interest-
rate changes whilst protecting the portfolio against other possible interest-rate changes. Active
managers of bond portfolios seek to exploit specific views on interest rates, for example, that an
interest-rate term structure will flatten or that rates in two currencies will converge. At the same
time they are likely to want to reduce exposures to interest-rate movements that should not affect
their strategies, for example, for the two strategies above, parallel shifts of all interest rates. A
traditional method to achieve this is to calculate the first- and second-order derivatives of bond
values with respect to their yield to maturity (see Section I.B.2.6). These derivatives or
‘sensitivities’ are called ‘value duration’ and ‘value convexity’ respectively.16 Assuming the same
changes in yields across all bonds in a portfolio, the portfolio value duration and value convexity
are simply obtained by adding up the individual bond value durations and value convexities.
Adjusting the composition of a bond portfolio so that these two sensitivities become negligible is
often interpreted as ‘immunising’ the value of the portfolio against parallel shifts in interest rates.
An example of immunisation against a parallel shift of bond yields, hedging one bond with
another, is given in Section I.B.2.7. But note that reducing the value duration and value
convexity of a portfolio to zero does not eliminate all interest-rate risks. It is an efficient
immunisation only against small parallel shifts in bond yields, which is a relatively unlikely
scenario;17 many other variations of interest rates are possible.
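The following sketch (with made-up bonds and yields) shows the calculation of value duration and value convexity for a single bond from its cash flows, and their aggregation across a portfolio under the assumption of a common change in yields.

def bond_value_sensitivities(cash_flows, times, y):
    # First derivative (value duration) and second derivative (value convexity)
    # of the bond value P = sum of cf / (1 + y)^t with respect to its yield y.
    duration = sum(-t * cf / (1 + y) ** (t + 1) for cf, t in zip(cash_flows, times))
    convexity = sum(t * (t + 1) * cf / (1 + y) ** (t + 2) for cf, t in zip(cash_flows, times))
    return duration, convexity

# Illustrative portfolio: a 5% three-year coupon bond and a five-year zero (face 100).
bonds = [([5, 5, 105], [1, 2, 3], 0.040),
         ([100], [5], 0.045)]
portfolio_duration = sum(bond_value_sensitivities(cf, t, y)[0] for cf, t, y in bonds)
portfolio_convexity = sum(bond_value_sensitivities(cf, t, y)[1] for cf, t, y in bonds)
print(portfolio_duration, portfolio_convexity)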

A sensible approach to selecting a portfolio of bonds to be immune to some movements in
interest rates whilst maximising the profit opportunity from a forecast movement is first to
calculate each bond price variation relative to each relevant interest-rate movement and then to
choose the portfolio weights so as to maximise the portfolio gain for the forecast interest-rate
movement whilst leaving the portfolio value unchanged for the other movements.
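A stylised numerical version of this approach is sketched below: the estimated price change of each bond under each interest-rate movement is tabulated, and the holdings are solved for so that the portfolio gains one unit under the forecast movement and is left unchanged by the movements to be immunised against. All figures are invented for illustration.

import numpy as np

# Rows: bonds; columns: price change per unit holding under each rate movement.
# Column 0 is the forecast movement to exploit; columns 1 and 2 are movements
# against which the portfolio should be immunised.
price_changes = np.array([[ 0.8, -1.9, -0.4],   # short bond
                          [ 0.1, -4.8, -0.1],   # medium bond
                          [-1.2, -9.5,  0.0]])  # long bond

targets = np.array([1.0, 0.0, 0.0])             # gain 1.0 on the forecast, 0 elsewhere
holdings = np.linalg.solve(price_changes.T, targets)
print(holdings)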

III.A.1.5.4.2 Momentary Hedging


As a second special case, we have the momentarily undesirable risks. At times, fund managers
may fear a correction in the markets that would harm their performance or may simply wish to
reduce some exposures for peace of mind because they are momentarily absorbed by other tasks.

16 In the same way as we now say ‘value-at-risk’ rather than ‘dollar-at-risk’, it is time to say ‘value duration’ rather than
‘$duration’. When minus the value duration is divided by the value of the bond, the result is called the ‘modified
duration’. Finally, when the bond yield is expressed on an annual basis, ‘duration’ (or ‘Macaulay duration’) is defined as
‘modified duration’ multiplied by (1 + yield). Historically, Macaulay was the first author to introduce the concept of
duration; he chose the name because, for a zero-coupon bond, it is equal to the maturity of the bond, and, for a
coupon bond, it is equal to the average maturity of the cash flows weighted by their corresponding discount factors
calculated at the bond yield. The second-order sensitivity of a bond value with respect to its yield is called
‘value convexity’ because it relates to the curvature of the value versus yield curve.
17 A parallel shift in bond yields corresponds approximately to a parallel shift in the zero-coupon rate curve. Actual
movements of the zero-coupon rate curve are best captured by a principal component analysis (see Section III.A.3.7).
The first principal component, which usually explains 75% to 80% of interest rates’ total variance, is frequently
described as a parallel shift when in fact it is often anything but parallel. Medium-term rates (18 months to 3 years) are
often more volatile than both short-term and long-term interest rates (see, for example, Alexander, 2001, Table 6.2b, p.
149).


Closing down some positions for a short while may prove difficult or expensive; an overlay
hedge may be less costly and still efficient. Adding offsetting derivative positions achieves this. Even
if the derivatives are not a perfect offset for the undesirable exposures, an approximate global
hedge can be achieved. It may work particularly well during brief market crises when correlations
between related market factors tend to increase. There are many examples throughout the
Handbook of such strategies. For example, the use of call options on bond futures to hedge a
bond portfolio is given in Example I.C.6.6. The use of equity swaps is explained in Section
I.B.4.7.1.1.
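A common back-of-the-envelope calculation for such an overlay (the beta, futures price and point value below are illustrative assumptions, not figures from the Handbook) is to sell enough index futures to offset the portfolio’s estimated exposure to the index.

def overlay_futures_contracts(portfolio_value, beta, futures_price, point_value):
    # Number of index futures to sell (a negative number means a short position)
    # so that the overlay approximately offsets the portfolio's index exposure.
    return -beta * portfolio_value / (futures_price * point_value)

# e.g. a 50m portfolio with beta 1.1, index futures at 4500 and a value of 10 per index point
print(overlay_futures_contracts(50e6, 1.1, 4500, 10))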

III.A.1.5.4.3 Managing for a Risk-Adjusted Performance Target


Coming back to the general case, risk mitigation strategies are inseparable from investment
strategies designed to achieve some risk-adjusted performance target. There is but one task for
the active fund manager: to follow a policy – encompassing level of diversification, leverage,
selection of securities, etc. – compatible with the objective of investors and his own forecasts.
Like customers in a supermarket who want choice and want to know what they buy by reading
the labels on the cans, investors want a description of the funds offered to them in terms of
composition of assets, strategies, objectives and target risk levels. And fund managers must
remain true to the description of their products. The type of assets in which a fund can invest is
certainly a major determinant of the volatility of returns. For example, equities are generally
regarded as more volatile than bonds; within equities some sectors such as dotcoms and
emerging markets are clearly more volatile than, say, utilities in G7 countries, and so on. But
other factors have also a large influence on risk; chief among them are the level of diversification
and the degree of gearing of the risky assets. By increasing the number of relatively independent
securities in a portfolio, diversification reduces total risk by averaging out the effect of specific,
independent risks. Diversification can be optimised to obtain the best possible value of the
chosen risk-adjusted performance target – Sharpe ratio, information ratio or other (see Section
I.A.3.4 for details). Gearing up, or leveraging, means increasing the allocation of funds to a risky
asset class relative to a risk-free asset class, usually cash deposits, or even borrowing in order to
invest more in the risky assets than the equity value of the fund. The expected return above the
risk-free rate and the volatility of return vary proportionally to the amount invested in the risky
assets relative to the equity value of the fund. Therefore gearing up or down does not affect the
Sharpe ratio of a fund (see Section I.A.3.5 for details), but it can be used to adjust the risk level to
suit a specific group of investors. Of course, many investors are sophisticated enough to manage
their own gearing and diversification themselves. All they want is to have a choice among a wide
variety of well-defined funds.
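The invariance of the Sharpe ratio under gearing can be checked with a two-line calculation (the 5% excess return and 20% volatility are illustrative figures): both the expected excess return and the volatility scale with the leverage, so their ratio does not.

def geared_excess_return_and_volatility(excess_return, volatility, leverage):
    # Investing `leverage` times the fund's equity in the risky asset, with the
    # balance lent or borrowed at the risk-free rate, scales both quantities.
    return leverage * excess_return, leverage * volatility

for leverage in (0.5, 1.0, 2.0):
    ex, vol = geared_excess_return_and_volatility(0.05, 0.20, leverage)
    print(leverage, ex / vol)   # the Sharpe ratio stays at 0.25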


This leaves fund managers with the choice of dynamic investment strategies as a means of
controlling risk and return. Positions can be actively adjusted either with the objective of
achieving a specific return distribution – we discuss a couple of examples in the following
subsection – or simply to take advantage of evolving return forecasts. The latter is particularly
difficult to optimise as the cumulative costs of rebalancing more frequently compared to the
expected opportunity losses of rebalancing less frequently are difficult to perceive intuitively and
to analyse quantitatively. This is a subject of academic interest (see Davis and Norman, 1990),
but implementation of systematic dynamic strategies is lagging behind theory. Most active fund
managers rely on heuristics to rebalance their portfolios. These are rules of thumb based on trial
and error combining profit objectives with multiple limits: stop losses, delta limits, limits on
turnover, etc. Only a small proportion of active fund managers – essentially those using highly
quantitative investment strategies (e.g. cointegration or convertible arbitrage) or those promising
protected returns – rely on systematic rebalancing rules.

III.A.1.5.4.4 Capital Protection


Since the early 1980s, there has been a growing number of funds offering some kind of
performance protection in order to attract risk-averse investors. They try to offer the best of two
worlds: on the upside a participation in the potentially high returns of a risky asset class –
typically equities or commodities – or, as a minimum on the downside, a capital guarantee or the
return on a low-risk investment – for example, a deposit or a bond.

With few exceptions capital guaranteed products have a stated maturity of a few years. Investors
staying until maturity are guaranteed to receive a defined performance. For instance, I read the
following offer received today: after 5 years you will be guaranteed 105% of the performance of
the FTSE 100 index on your initial investment or your money back, whichever is the higher.
The guarantee is from the sponsor of the product or a third party, usually a good-quality
insurance company or bank. To add to the popularity of these products and attract small savers
to long-term equity investments, tax advantages may also be available at maturity. On the other
hand, early withdrawals are not guaranteed or are guaranteed at only a fraction of the initial
investment.

Although these products may appear as manna from heaven to the unsophisticated investor, they
are actually simple to manufacture. In our example, the sponsor could use the initial investment
to buy the equivalent face value of a five-year zero-coupon bond and a FTSE 100, at-the-money,
five-year over-the-counter (OTC) call option on 105% of the initial investment from a specialist
bank.18 The zero-coupon bond might cost 75% and the call option 18%, leaving 7% to the
sponsor to cover his expenses and contribute to profit. We leave to Section III.A.1.6.4 the
manufacturing of the call option and, more generally, the dynamic hedging of option portfolios.

18 Always read the small print on the exact definition of the pay-off; it is often not as straightforward as it first appears.
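The arithmetic behind this decomposition can be checked as follows; the 5.9% zero-coupon rate and the 18% option premium are assumptions chosen here only to reproduce the rough 75%/18%/7% split quoted in the text.

def guaranteed_product_split(investment, zero_rate, years, option_premium_pct):
    # Cost of the zero-coupon bond that repays the full investment at maturity,
    # cost of the call option providing the upside, and what is left for the sponsor.
    zcb_cost = investment / (1 + zero_rate) ** years
    option_cost = option_premium_pct * investment
    return zcb_cost, option_cost, investment - zcb_cost - option_cost

print(guaranteed_product_split(100.0, 0.059, 5, 0.18))   # roughly (75, 18, 7)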

The main drawback of capital guaranteed investments is their bullet form: one fixed size, fixed
maturity issue. Investors like the flexibility of open-ended funds whose shares can be issued or
redeemed at any time at their net asset value plus or minus a small commission. Can any form of
downside protection be offered on an open-ended fund? This question has exercised the minds
of many financial engineers and only approximate answers have been found. In the early 1980s,
many funds became interested in the concept of portfolio insurance and tried to implement it by
themselves or with the help of consultants. Insurance of an equity portfolio consisted of
overlaying short positions in the new equity index futures at critical times. A naïve strategy, for
example, would be to short futures whenever the market index fell below a predefined level and
to buy them back whenever the market index recovered above that level. This, in effect, is a very
inefficient attempt at replicating a put option; it creates significant residual risks and costs, not to
mention implementation difficulties (e.g. index jumps, lack of liquidity). Not surprisingly, during
the crash of October 1987, this type of portfolio insurance disappointed and various dynamic
portfolio strategies came under criticism (Dybvig, 1988).

Portfolio insurance strategies were improved (Black and Jones, 1987) and new concepts emerged,
notably, that of constant proportional portfolio insurance (CPPI) (Black and Perold, 1992).
Under CPPI a fund would maintain an exposure in a risky asset proportional to the net asset
value of the fund above a certain minimum. For example, the risky asset could be an equity
index future and the exposure would be 200% of the excess value of the fund above the value of
90% of the initial investment placed in a short-term money market. Thus the fund manager
would promise (sometimes with a bank guarantee) as a minimum the money market return on
90% of the initial investment but would raise the expectation of an equity index performance
with a leverage of up to 200%. In continuous markets and with frequent (weekly or daily)
rebalancing, CPPI would be safe. In practice, negative jumps combined with a poor initial
performance of the risky asset may bring the fund value rapidly to its minimum guaranteed level,
at which point the fund becomes a pure money market fund. This is what has happened in the
early 2000s to many CPPI funds that were launched at the end of the 1990s.
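A minimal sketch of the CPPI rule described above (the multiplier, floor and net asset values are illustrative): the exposure to the risky asset is a multiple of the cushion above the floor and vanishes when the cushion is exhausted, at which point the fund is entirely in the money market.

def cppi_target_exposure(net_asset_value, floor, multiplier=2.0):
    # Exposure to the risky asset = multiplier x cushion, floored at zero and,
    # in this sketch, capped at the fund's own value (no additional borrowing).
    cushion = max(net_asset_value - floor, 0.0)
    return min(multiplier * cushion, net_asset_value)

for nav in (100.0, 96.0, 92.0, 90.0):
    print(nav, cppi_target_exposure(nav, floor=90.0))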

Special types of portfolio insurance strategies should be developed for funds that must meet
specific liabilities. Future liabilities may be uncertain both in amounts and timing (e.g. insurance
claims); nonetheless, fund managers must aim for a stable surplus of assets over liabilities. These
issues are often obscured by ad hoc actuarial rules and regulations and are in great need of re-
examination.


III.A.1.5.4.5 Compliance and Accountability


Investors are becoming increasingly sophisticated and capable of scrutinising the market practices
and performance of fund managers. Beyond compliance with regulations and accounting
standards, beyond the avoidance of market malpractice,19 fund managers must satisfy investors
that they are following strategies compatible with stated performance objectives and risk limits.
When results have been disappointing, investors have sued fund managers for negligence or non-
compliance with agreed policies. The investor does not necessarily have to lose money for this
approach to succeed. In a case that has set new standards of accountability for fund managers,
the Chief Investment Officer of the Unilever Superannuation Fund (a pension fund) accused the
Mercury Asset Management unit of Merrill Lynch Investment Managers of negligence after the
fund had underperformed its benchmark by more than 10% over little more than a year (January
1997 to March 1998). Mercury had agreed a target of 1% per year above the benchmark return
with a tracking error of no more than 3%. Although the return on the £1 billion fund had still
been positive over the period, Unilever sought damages of £130 million. The case revolved
about the improper use of risk assessment models and, crucially, about the delegation of day-to-
day operations to a ‘junior’ investment officer. Merrill admitted no liability but, in June 2001,
settled out of court for a substantial amount. Similar cases have followed since.

III.A.1.6 Market Risk Management in Banking


III.A.1.6.1 Market Risk in Banking
Banks, like hedge funds, take geared positions on various asset classes. They differ in that many
of their assets (e.g. loans) are illiquid and some of their sources of funds (e.g. deposits on call) are
low-cost but with an indeterminate term outside the banks’ control. In addition, banks are
engaged in a number of fee-earning activities, but that is not of primary interest as far as market
risks are concerned.

Most market risks are taken by banks voluntarily with a view to benefiting from the exposures.
At the same time, deposit taking from clients is based on trust and it is crucial for banks to
maintain their reputation of financial stability and competence in managing risks. Otherwise,
clients may slip away, causing funding to become rapidly more expensive, and the business may
fall into a downward spiral.

But banks are generally well equipped to manage market risks. It is part of their core
competences; they have powerful systems to analyse and monitor risks and good access to the

19 The fund management industry has recently been the subject of a series of enquiries about conflicts of interests due
to close relationships between investment bankers and fund managers and about market malpractices such as ‘market
timing’ which have resulted in hundreds of millions of dollars of fines, withdrawal of billions of dollars of funds, scores
of firings, and the reorganisation of several financial conglomerates.

Copyright © 2004 Jacques Pézier and The Professional Risk Managers’ International Association 15
The PRM Handbook – III.A.1 Market Risk Management

markets for hedging. Moreover, they operate under the close supervision of banking regulators.

III.A.1.6.2 Identification
As mentioned earlier (see Section III.A.1.2), a key distinction is made between liquid assets
eligible for capital treatment under trading book regulations and less liquid assets or assets held
with a long-term intent that are relegated to the banking book. Only assets in the trading book
are subject to detailed statutory assessments and corresponding capital charges. By contrast,
market risks in the banking book are largely ignored by banking supervisors, and so are market
risks affecting liabilities. In fact, accrual accounting standards tend to hide such risks. Only if
common sense indicates that unusually large market risks are present in the banking book will
banking supervisors request some ad hoc estimates and monitoring/control procedures and be
free to impose additional capital charges.20

Within the trading book, market risks are traditionally categorised by main markets; so, interest-
rate, equity, currency and commodity risks are often identified separately. Nonetheless many
positions entail risks in several markets, for example, a share denominated in a foreign currency, a
convertible bond, a commodity linked loan; such positions will have to be identified under each
of the corresponding market risk types.

To facilitate market risk analyses, it is also traditional to distinguish primary from secondary risks
and general market risks from specific risks. The primary risks are the directional risks resulting
from taking a net long or short position in a given class of securities or commodities. The
secondary risks are the other risks, deemed a priori to be less important, for example, volatility
risk, risk on spread between a security and a futures contract on that security, a dividend risk, a
risk on repo costs. Secondary risks may well appear less important than primary risks, except for
the fact that many trading books are managed actively with the purpose of reducing primary risks
to negligible proportions at the expense of an increase in secondary risks.

Likewise, general market risks, such as an exposure to movements of an equity index, may seem
more important a priori than specific risks due to unequal variations of prices of shares in that
index. However, the composition of a portfolio of shares may differ markedly and systematically
from their weightings in a reference index, for instance because of lack of diversification or
because of the implementation of a particular investment strategy (say, value strategy rather than
growth strategy). Consequently, specific risks may be large compared to the general (also called
‘systematic’) market risk and should not be overlooked.
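One simple way to see the relative sizes of general and specific risk in an equity portfolio is a one-factor decomposition against the reference index, as in the hypothetical sketch below; the function name and the regression approach are ours, offered only as an illustration.

import numpy as np

def general_specific_risk_split(portfolio_returns, index_returns):
    # Regress portfolio returns on index returns; the systematic variance is
    # beta^2 times the index variance, the specific variance is the residual variance.
    x = np.asarray(index_returns, dtype=float)
    y = np.asarray(portfolio_returns, dtype=float)
    beta = np.cov(y, x, ddof=1)[0, 1] / np.var(x, ddof=1)
    systematic_variance = beta ** 2 * np.var(x, ddof=1)
    specific_variance = np.var(y - beta * x, ddof=1)
    return beta, systematic_variance, specific_variance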

20 Capital surcharges are left to the discretion of banking supervisors under Pillar II of the Basel Accord. See BCBS
(2004b) for general principles on the management of interest-rate risks.


Finally, one should take into account the increasing proportion of option and option-like
instruments in trading books. A financial option is an instrument offering the right but not the
obligation for the owner to make a claim by a certain date if some underlying market factor (or
combination of market factors) is favourable or, otherwise, to forgo any claim and gain nothing.
Thus the pay-off of an option is a non-linear function of some market factors. By extension,
instruments that yield a non-linear pay-off in some market factor(s) can be called option-like –
for example, a bond price is a non-linear function of changes in the discount-rate curve. Because
of this non-linearity, the fair value of an option or an option-like instrument depends on the full
probability distribution of future values of the underlying market factor(s) and not only on their
current or expected future values. Long-term volatilities and correlations affecting the value of
options offer new trading opportunities and hence new market risks.

A systematic identification of market risks in the trading book should proceed through the
identification/selection of key market factors and the construction of models relating the value of
instruments in the trading book to these factors, a step that will be essential for the assessment of
market risks.

Within the banking book, market risks are harder to identify because positions are generally not
valued at fair prices and therefore fair price variations are of little concern. But it is common
sense that most banks are exposed to large market risks in their banking books assets as well as
on their liabilities. In fact, interest-rate maturity transformation, the short-term funding of
longer-term loans, has been a traditional banking strategy since time immemorial and entails an
exposure to interest-rate rises. The impact of interest-rate changes is also enhanced by the
clients’ behaviour: if there are any prepayment or extension possibilities on loans and deposits,
clients will take maximum advantage of these options to make loans cheaper or deposits more
attractive and therefore less profitable for the bank.

Some option-like positions in the banking book are particularly susceptible to interest-rate
changes. For example, a line of credit at a predetermined spread above Libor is economically
equivalent to an option on the credit spread of the client: the line is much more likely to be
drawn upon when the creditworthiness of the client has declined than when it has been
maintained.

III.A.1.6.3 Assessment
We stressed the multiplicity of market risk factors in the preceding section on identification: there
are systematic and specific market risks; primary directional risks and secondary risks; volatilities
and correlations. The assessment of market risks is based on the selection of a limited number of
market risk factors and a choice of models to describe uncertainties in the future values of these
factors, their impact on the value of individual instruments and, consequently, on the values of
portfolios.

Fortunately, a number of models have been put forward and tested over the last 30 years or so.
They fall into three main categories: (i) probabilistic/statistical models describing uncertainties
about the future values of market factors; (ii) pricing models relating the prices and sensitivities
of instruments to underlying market factors; and (iii) risk aggregation models evaluating the
corresponding uncertainties on the future values of portfolios of financial instruments. In the
first category are the stochastic processes commonly used to describe the evolution of market
factors: geometric Brownian motion, stochastic volatility models, GARCH models, etc. A prime
example of the second type is the Black–Scholes option pricing model (see Section I.A.8.7)
which, for a given choice of dynamics for the underlying asset price, adds some efficient market
assumptions and a hedging argument to yield a risk-free option price. The third type is
exemplified by value-at-risk (VaR) models which, with the help of a few simplifying assumptions,
produce a probability distribution (or at least some statistics) on the future value of a static
portfolio at a chosen future time. These are explored in Chapters III.A.2 and III.A.3.
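As a foretaste of those chapters, a minimal normal (‘delta’) VaR calculation for a static portfolio might look as follows; the exposures, volatilities and correlation are invented, and the 99% confidence level with square-root-of-time scaling over 10 days mirrors the regulatory standard referred to at the end of this chapter.

import numpy as np
from scipy.stats import norm

def normal_var(exposures, daily_covariance, confidence=0.99, horizon_days=10):
    # Portfolio daily volatility from the covariance matrix of factor returns,
    # scaled to the horizon by the square root of time and multiplied by the
    # normal quantile for the chosen confidence level.
    w = np.asarray(exposures, dtype=float)
    daily_volatility = np.sqrt(w @ daily_covariance @ w)
    return norm.ppf(confidence) * daily_volatility * np.sqrt(horizon_days)

# Two money exposures (10m and 5m) to factors with 1% and 2% daily volatility, correlation 0.3
daily_vols = np.array([0.01, 0.02])
correlation = np.array([[1.0, 0.3], [0.3, 1.0]])
covariance = np.outer(daily_vols, daily_vols) * correlation
print(normal_var([10e6, 5e6], covariance))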

We should acknowledge immediately that, no matter how sophisticated mathematical models
have become, they are idealisations of reality. They are bound to be approximate at best; at worst
they can be misleading. Nonetheless, they are indispensable.

There is a large degree of subjectivity in the choice of a model. A balance must be struck
between realism and tractability. Depending on the business at hand, different models may suit.
Conversely, a particular choice of model is always vulnerable to ‘gaming’ by traders seeking to
construct portfolios that will apparently exhibit little risk. That is why banking supervisors want
to exercise some control over the use of models for regulatory risk reporting – they want to
examine the way a model is used, by whom and for what purpose before ‘recognising’ its use for
regulatory reporting and the determination of capital ratios.

The greatest degree of freedom in the choice of models lies at the very first stage in the choice of
market factors and the description of their dynamics. For example, to describe interest-rate risks
across portfolios of bonds, bond derivatives, swaps and other interest-rate derivatives, one starts
with a choice of interest-rate term structure model. It can be one of many single- or multiple-
factor models from which all interest rates are derived. The temptation on the front desk is to
choose the simplest adequate model for each task at hand and therefore to use different models
for different instruments. But a market risk manager should be concerned with


comprehensiveness and coherence across models so that the effects of a variety of possible
interest-rate fluctuations are taken into account realistically and consistently.

Pricing models are indispensable for all securities that are not readily priced in the market, for
example, most OTC derivative and structured products. They are also necessary for most
securities whose prices are readily available in the market because, unless these prices are selected
as market factors, we need to know how they would be affected by changes in the value of the
selected market factors. For example, we need to know how the prices of bonds would be
affected by some fluctuations in the risk-free interest-rate curve, such fluctuations being relative
to the selected market factors. Pricing models should follow logically from the choice of
dynamics in the underlying market factors; however, some simplifications/approximations are
usually introduced to obtain realistic prices within a limited computation time.
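As a minimal illustration of such a pricing model, the sketch below revalues a fixed-coupon bond from a zero-coupon curve and measures the price impact of a one-basis-point parallel shift in that curve; the curve, coupon and maturity are hypothetical.

```python
import numpy as np

def bond_price(coupon, maturity, zero_rates):
    """Price per 100 face value from annually compounded zero rates (one per year)."""
    times = np.arange(1, maturity + 1)
    cash_flows = np.full(maturity, coupon)
    cash_flows[-1] += 100.0
    discount = (1.0 + zero_rates[:maturity]) ** -times
    return float(cash_flows @ discount)

zero_curve = np.array([0.030, 0.032, 0.034, 0.036, 0.038])  # hypothetical zero-coupon curve
p0 = bond_price(coupon=5.0, maturity=5, zero_rates=zero_curve)
p1 = bond_price(coupon=5.0, maturity=5, zero_rates=zero_curve + 0.0001)  # +1bp parallel shift

print(f"Price: {p0:.4f}, price after +1bp: {p1:.4f}, PV01 per 100 face: {p0 - p1:.4f}")
```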

Likewise, risk aggregation models should follow logically from the choice of market factors
dynamics and pricing models. However, many further simplifications are introduced for two
reasons:
(i) Dependencies between market factors can be very complex; they may differ between
normal and extreme market conditions, between short- and long-term horizons.
(ii) Trading book portfolios are by definition very dynamic, but it would be complex to
describe the effects of new business and dynamic hedging strategies.

The conventional wisdom in banks is therefore to concentrate on the short term under normal
market conditions where dependencies may be approximated by linear correlations and portfolios
may be assumed to remain relatively static. This is what regulators ask banks to do: to estimate
the maximum level of market losses that would not be exceeded with a probability of more than
1% on a static portfolio over the following 10 trading days.21 This number is then back-tested
against actual conditions and scaled up to produce a minimum capital requirement for market
risks. Stress tests (explained in Chapter III.A.4) are then applied to ensure that the minimum
capital requirements are sufficiently safe. Market risk capital requirements for portfolios assessed
separately are then simply added together and added to capital requirements for other types of
risks to yield the total minimum regulatory capital (MRC). Note that this naïve aggregation
process is likely to produce a larger total MRC than necessary because it does not recognise the
effects of diversification among risks. Unfortunately, however, it is not necessarily safe.22 In
some cases, adding the VaRs may understate the gross risk.
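
The diversification point can be checked with a one-line calculation. Under the normal, linear assumptions discussed above, the VaR of two positions combined follows the usual 'square root' formula and is never larger than the sum of the stand-alone VaRs; with heavier-than-normal tails this convenient inequality can break down, as noted in footnote 22. The figures below are purely illustrative.

```python
import math

# Stand-alone 99%/10-day VaRs of two desks (hypothetical figures, in $m)
var_a, var_b = 10.0, 6.0
rho = 0.3  # assumed correlation between the two P&Ls

# Portfolio VaR under normal, zero-mean, linear assumptions
var_portfolio = math.sqrt(var_a**2 + 2 * rho * var_a * var_b + var_b**2)

print(f"Sum of stand-alone VaRs  : {var_a + var_b:.2f}")
print(f"Diversified portfolio VaR: {var_portfolio:.2f}")
print(f"Overstatement from simple addition: {var_a + var_b - var_portfolio:.2f}")
```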

21 This is what is commonly known as the VaR figure in banking.


22 VaR is not sub-additive in general, so adding VaR figures does not guarantee a conservative total. In particular, super-additivity may occur when tail risks are heavier than they would be under a normal distribution.


Many banks have now taken a wider view of market risks than that requested by banking
supervisors. Banks can choose parameters to suit their own internal purposes – whether to
improve resource allocation, to set up an ideal level of capitalisation or to test strategic plans.
They can choose their own time horizon for risk assessment and confidence level for extreme
losses. They can incorporate less liquid instruments beyond those in the trading book, assume some initial
pricing uncertainties and take into account the effects of trading and hedging strategies. In the
banking book, they may develop models of customer behaviour in response to changes in
interest rates and other market factors.

III.A.1.6.4 Control/Mitigation
Banks have the means and the competence to manage most market risks very effectively. First,
they have some degree of control over the market risk they take. In the medium term they can
shape the risks by modifying the design and pricing of the products they offer to their customers.
They can also adjust their liabilities to a large extent to match the risk profiles of their assets. In
the short term they can use derivative products to hedge most market risks if they wish to do so.

The importance of financial derivatives as market risk hedging instruments needs to be stressed.
Financial derivatives have become a huge market. In terms of notional size of the underlying
assets, they are twice as large as bond and equity markets combined, about $140 trillion against
$70 trillion.23 In terms of trading volumes they are even larger. However, the total market value
of these instruments – counting the positive side only of each transaction – is less than $3 trillion
and, after netting exposures to single counterparties, less than $2 trillion. The credit risk created
by derivative products is therefore very small, about 40 times smaller than the credit risk created
by bonds and equities. Derivatives are also relatively cheap to trade. As a fraction of notional
size, the sum of bid–offer spreads and commissions on derivatives is typically less than one-tenth
of that for the relevant underlying assets. Derivatives also exist on assets that would not
be easy to trade, such as equity indices and notional bonds. These features make derivatives the
choice instruments for hedging market risks,24 and indeed banks are usually found on at least one
side of most OTC derivative products and as active participants in listed derivatives markets.

The crucial element to set up a market risk control/mitigation strategy in a bank is the definition
of the objective. As with fund managers, and even more so, some undesirable market risks are
accumulated in the course of normal business; from time to time there may also be risks
that should clearly be reduced because of changes in market or management circumstances. But

23 Of the $140 trillion total, about $110 trillion is OTC and $30 trillion listed; about 60% of financial derivatives are in the form of interest-rate swaps.


24 Note that there are also financial derivatives to cover credit risks. The market for credit default swaps and other credit derivatives has grown at about 50% per year over the last 10 years and now covers about $2.5 trillion of underlying assets.


the bulk of market risks taken by banks is still taken willingly with the objective of deriving a
profit. The risk management objective must therefore be the optimisation of some risk-adjusted
performance measure within the constraints on minimum regulatory capital and various
concentration limits imposed by banking supervisors or adopted internally. This general
objective is translated in the short term and at various hierarchical levels (division/desk/trader)
into simpler objectives and limits. The simpler objectives usually also take the form of risk-
adjusted performance measures, but with a cost of risk (or cost of risk capital) adapted to each
management unit.25
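
A minimal sketch of how such a risk-adjusted performance measure might be cascaded to desks is given below: each unit's expected profit is charged for the risk capital it consumes at a hurdle rate, with both the capital figures and the hurdle rate being hypothetical. In practice the capital charge would be based on each unit's marginal contribution to the bank's overall risk, as footnote 25 explains.

```python
# Hypothetical desk-level figures (in $m per year)
desks = {
    # name: (expected profit, allocated risk capital)
    "rates":  (40.0, 250.0),
    "fx":     (15.0, 60.0),
    "equity": (22.0, 180.0),
}
hurdle_rate = 0.12  # assumed cost of risk capital

for name, (profit, capital) in desks.items():
    economic_profit = profit - hurdle_rate * capital   # profit net of the charge for risk
    raroc = profit / capital                            # simple risk-adjusted return ratio
    print(f"{name:>6}: RAROC = {raroc:5.1%}, economic profit = {economic_profit:6.1f}")
```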

Having assessed market risks, recognised the tools that can be used for their control and defined
the objective of market risk management, the design and implementation of a control/mitigation
strategy should follow naturally. In reality, there are still some complexities due to people and
organisations. First, individual incentives should be aligned with the stated objectives. Rewards
cannot be based solely on results without considering and agreeing ex ante the risks being taken.
When a market risk hedge is put in place, there is roughly one chance in two that the hedge will
generate a loss. All too often, if the rationale for the hedge has not been clearly agreed at the
start, a loss on the hedge will reflect badly on the hedger, especially if the risks being covered are
risks that the bank used to accept in the past. Second, risk mitigation is achieved more
economically at a macro than a micro level. The following case illustrates these two points.

In the late 1970s, the European markets became flooded with petro-dollars. International
treasury divisions of banks grew rapidly to handle this ‘hot’ money that could flow in and out
rapidly. Many international treasuries implemented a very cautious micro-hedging strategy: each
dollar deposit had to be matched with a corresponding lending and vice versa, thus doubling the
size of the balance sheet and losing a bid–offer spread to the market. At the same time in the
same banks, domestic treasuries were continuing to run significant interest-rate gaps (longer
interest-rate maturity schedules on the asset side than on the liability side) without worrying
about it because they had always done so. Clearly, there was a lack of consistency in risk
management objectives and mitigation strategies between the domestic and international
sides. It could be explained for a while by fear of the unknown on the international side. But
eventually a more balanced approach had to be implemented in which the interest-rate maturity
gap could be tracked on a net basis at regular intervals (e.g. daily) and managed globally. This
is achieved today in most banks and, indeed, international and domestic treasuries are now often

25 Risks generated by various management units may have a different impact on global risk. Some may have a diversifying or even hedging effect; others may be highly correlated with global risk. The cost of risk attributed to each unit should therefore reflect the marginal contribution of each unit to global risk to ensure that local optimisations of risk-adjusted performance lead to a global optimisation of risk-adjusted performance. Decomposition of risk is explained more fully in Section III.A.3.6.


merged into a single treasury division implementing a coherent market risk management policy
across desks.

We do not have the space here to detail market risk hedging strategies, but we can highlight a
couple of points. First, managers are mostly concerned about reducing primary risks, that is, the
risks associated with net long or short positions in various asset classes. Exposures to primary
risks are characterised by first-order sensitivities, the ‘deltas’ to the corresponding market
factors.26 Delta is the word commonly used to describe the first-order sensitivity of an option value to the underlying asset price, that is, the change in option value per unit of underlying asset for a very small change in the underlying asset price, as explained in Section I.A.8.8. Primary risks can be covered (‘delta hedged’) relatively
cheaply with futures and forward contracts.

In many instances, however, the exposures are not linear in the hedging instruments because of
the presence of options or option-like instruments. Therefore, hedges with futures and forwards
must be rebalanced over time as prices fluctuate. How often should delta hedges be rebalanced?
As a rule of thumb, over a given time interval, the transaction costs of rebalancing a hedge (bid–
offer spreads and commissions) increase as the square root of the rebalancing frequency, whereas
the variance of residual risks decreases as the inverse of the rebalancing frequency. An optimum
frequency (or more efficient rules based on actual market movements) can be derived from a
choice of trade-off between costs and residual risk and the knowledge of some portfolio and
market characteristics (volatility of underlying asset price, gamma or second-order sensitivity of
the portfolio to changes in the underlying asset price, unit transaction costs); see, for example,
Hodges and Neuberger (1989). The results may surprise traders because it is very difficult for
anyone to gain an intuitive view about the right balance between expected costs and residual risks
that accumulate slowly over time but can reach very large figures and can be very different from
one portfolio to another.27
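
The trade-off just described can be written down in a couple of lines. If, over a given period, expected rebalancing costs grow like a√f and the variance of the residual hedging error shrinks like b/f, where f is the rebalancing frequency, then minimising a penalty of the form a√f + λb/f gives an optimal frequency f* = (2λb/a)^(2/3). The sketch below, with purely hypothetical cost and risk scale parameters, checks this closed form against a numerical search.

```python
import numpy as np

a = 2.0        # assumed scale of expected transaction costs, ~ a * sqrt(f)
b = 500.0      # assumed scale of residual-risk variance, ~ b / f
lam = 0.05     # assumed penalty per unit of variance (risk aversion)

def penalty(f):
    """Expected cost plus a penalty for residual-risk variance."""
    return a * np.sqrt(f) + lam * b / f

f_grid = np.linspace(1, 400, 100_000)          # candidate rebalancing frequencies per period
f_numeric = f_grid[np.argmin(penalty(f_grid))]
f_closed = (2 * lam * b / a) ** (2 / 3)        # first-order condition of the penalty above

print(f"Numerical optimum: {f_numeric:.2f}, closed form: {f_closed:.2f}")
```

Under the scalings described above (expected costs proportional to unit cost × gamma × volatility, residual variance proportional to (gamma × volatility²)²), the same first-order condition reproduces the proportionality quoted in footnote 27.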

Second, having more or less delta-hedged a portfolio, one is left with secondary risks now playing
a primary role. Key among these risks are exposures to volatility changes and larger than
expected underlying asset price movements, also known as gamma risks from the name generally

26 When multiplied by the notional size of the underlying asset we obtain a so-called ‘dollar-delta’ or ‘delta-equivalent
value’ that is the value of a position on the underlying asset having the same sensitivity as the option. Historically,
other names have been used in other markets, for example ‘modified duration’ in the bond markets as we have seen in
Section III.A.1.5.4.
27 As a test, consider two portfolios A and B. A has twice the gamma of B (in dollar terms), twice the volatility and twice the transaction costs per unit volume. How frequently should B be rebalanced relative to A? The answer is, on average, only a quarter as frequently: A, with its higher volatility, should be rebalanced four times more frequently than B. As a rule of thumb, the optimal rebalancing frequency is proportional to (volatility)² × (gamma/unit transaction cost)^(2/3), which is not immediately obvious.


given to second-order sensitivities to market factors.28 The sensitivity of a portfolio to changes in volatility (assuming volatility can be described by a single parameter) is usually referred to as ‘vega’.29 The two risks (explained in Section I.A.8.8) are related but are not the same. For a single plain vanilla option priced with a constant volatility model, vega is proportional to gamma: in the Black–Scholes model, vega equals gamma multiplied by σS²τ, where S is the underlying asset price and τ the time to maturity. Obviously, for portfolios of options and option-like instruments
with different maturities, there is no longer a simple relationship between vega and gamma. If
the model for the dynamics of a market factor does not assume a constant volatility source of
risk, then volatility may change over time and space (market prices) and one can no longer speak
about a single vega, at least not without redefining what is meant by vega. Gamma risk is a
concern inasmuch as if large and left unchecked it would require a very active and therefore
expensive delta-hedging strategy and still leave the trader exposed to large residual risks in case of
sudden market movements. Vega risk is a concern inasmuch as, for many market factors,
volatilities may fluctuate rapidly (short-term volatilities may suddenly double or treble in a crisis)
and are difficult to predict. Both gamma and vega risks can be controlled with the use of
options. However, unless options for hedging can be found with maturities similar to the
original exposures, vega hedging will be very crude and almost impossible to combine with
gamma hedging. Hedging market risks remains therefore something of an art.
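
For readers who want to see these sensitivities side by side, the sketch below computes the Black–Scholes delta, gamma and vega of a plain vanilla call and checks the vega = σS²τ × gamma relationship mentioned above; the spot, strike, volatility and rate are arbitrary illustrative inputs.

```python
import math
from statistics import NormalDist

def bs_call_greeks(S, K, r, sigma, tau):
    """Black-Scholes delta, gamma and vega of a European call (no dividends)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    pdf_d1 = NormalDist().pdf(d1)
    delta = NormalDist().cdf(d1)
    gamma = pdf_d1 / (S * sigma * math.sqrt(tau))
    vega = S * pdf_d1 * math.sqrt(tau)        # per unit change in volatility
    return delta, gamma, vega

S, K, r, sigma, tau = 100.0, 100.0, 0.03, 0.25, 0.5
delta, gamma, vega = bs_call_greeks(S, K, r, sigma, tau)

print(f"delta = {delta:.4f}, gamma = {gamma:.4f}, vega = {vega:.4f}")
print(f"sigma * S^2 * tau * gamma = {sigma * S**2 * tau * gamma:.4f}")  # matches vega
```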

To summarise the degrees of difficulty in the identification, assessment and control/mitigation
of market risks in banking, I am minded to give the maximum three-star rating to all three tasks.
At least this is my excuse for having given a longer description of these three tasks in the banking
section compared to the fund management and non-financial firms sections.

III.A.1.7 Market Risk Management in Non-financial Firms


III.A.1.7.1 Market Risk in Non-financial Firms
Non-financial firms, whether in service, trading or manufacturing industries, take on market risks
in the natural course of their business without seeking such risks to derive a profit. Their core
competences lie elsewhere and they would rather unload these risks on to market professionals or
hedge them directly in the markets. For example, in our global markets most manufacturers are
exposed to foreign currency fluctuations; they affect the cost of raw materials, the price at which
finished products can be sold in foreign markets as well as the price of competitive foreign
imports.

28 Because gamma is related to the curvature of the value of a portfolio as a function of a market factor, it is also referred to as ‘convexity’. That is certainly the case in bond markets when describing the second-order sensitivity of a bond price with respect to its yield.
29 American traders, having quickly run out of Greek letters, opted for a hot blue star and an easy alliteration (i.e. vega and volatility).


Company reports are full of comments about business being affected by the weakness of one
currency or the strength of another, by the cost of energy or raw materials, by the crippling
effects of an interest-rate increase and the difficulties in raising capital. These are common
market risks but they are often regarded by entrepreneurs as externalities about which they can
do little. In fact more and more can be done to reduce these risks or at least smooth out their
effects in the short to medium term. The real question is to what extent non-financial firms
should design and implement hedging programmes to reduce the impact of market risks. Do
such programmes add value to shareholders or are they simply contributing to the profits of
banks and other financial intermediaries?

There are few guidelines on best practice for market risk management in non-financial firms and
no regulations comparable to those in banking or fund management.

III.A.1.7.2 Identification
The identification of market risks in non-financial firms is arguably the most difficult of the three
risk management tasks, followed by assessment and then control/mitigation. Thus, three stars
for identification, two for assessment and only one for control/mitigation. Why? Because the
management of financial risks is by definition not among the core competences of non-financial
firms and therefore it tends to be neglected. It is a natural tendency to address the problems we know how to solve and to ignore the others. So market risks may not be properly recognised, but if they were they might not be so difficult to evaluate and control.

In our modern, globalised and deregulated economies, market risks are very pervasive; they affect
firms either directly or indirectly through competition. The three main sources of market risks
are interest rates, foreign currency exchange rates and commodity prices. Equities tend to be the
exception, except for holding companies and other companies relying heavily on investments in
securities.

Finance directors sometimes ask: ‘What is less risky, borrowing at fixed rates or at floating rates?’
Here lies the paradox: on an accrual or cost-accounting basis, borrowing at fixed rates is safe, the
financing costs are fixed, whereas floating rates are risky; but on a fair accounting basis it is the
opposite, the present value of the floating rate debt is almost constant, whereas the present value
of the fixed rate debt varies with interest rates like the price of the equivalent bond.30 The choice

30 If a loan is evaluated at a fair price like a bond and future cash flows are discounted at, say, the going Libor rates, it is easy to verify that the fair value of a properly priced floating rate note at Libor should be close to its face value at each interest-rate payment date. There may be small fluctuations between interest-rate payment dates, and the present value of any credit spread and profit margin above Libor will also fluctuate. The fair value of a fixed rate loan, on the other hand, will fluctuate like the fair value of the equivalent coupon bond, going down when interest rates go up and vice versa.


of accounting standards or, more generally, the choice of a coherent frame of reference for risk
evaluation is critical. In particular, mixing the use of several frames of reference can only lead to
confusion. Difficult, subjective and inaccurate as it may be, I think that a fair valuation of assets
and liabilities is the only acceptable basis for recognising and assessing risks, even though for
other good reasons companies use accrual accounting extensively in their reports. We may need
two sets of accounting principles: one to report results objectively and accurately, the other to
serve as a rational basis for risk management.
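
The fixed-versus-floating paradox above is easy to verify on a fair-value basis. The sketch below revalues a five-year fixed-rate loan and a just-reset floating-rate note after a parallel 1% rise in rates; all the numbers are hypothetical and credit spreads are ignored.

```python
def pv_fixed_loan(face, coupon_rate, years, discount_rate):
    """Fair value of a bullet fixed-rate loan with annual coupons and a flat discount rate."""
    coupons = sum(face * coupon_rate / (1 + discount_rate) ** t for t in range(1, years + 1))
    return coupons + face / (1 + discount_rate) ** years

def pv_floating_note(face, coupon_already_fixed, discount_rate):
    """Just after a reset only the next coupon is locked in; later coupons reprice,
    so the fair value stays close to face whatever happens to rates."""
    return face * (1 + coupon_already_fixed) / (1 + discount_rate)

face, years = 100.0, 5
r0, r1 = 0.04, 0.05  # rates before and after a 1% parallel rise

print("Fixed-rate loan   :", round(pv_fixed_loan(face, r0, years, r0), 2),
      "->", round(pv_fixed_loan(face, r0, years, r1), 2))
print("Floating-rate note:", round(pv_floating_note(face, r0, r0), 2),
      "->", round(pv_floating_note(face, r0, r1), 2))
```

On this basis the fixed-rate loan loses several percent of its fair value while the floating-rate note barely moves, which is the opposite of the ranking suggested by accrual accounting.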

But the answer to the previous question does not depend only on the choice of accounting
standards, it also depends on the business, in particular on the composition of assets and
liabilities. It is the uncertainty about the future equity value of the firm that is of concern to
shareholders. Thus, if the assets of the firm are perceived to generate returns independent of
future interest rates, a fixed rate funding may be the safer option, but if future returns are
perceived as being highly correlated with interest rates then a floating rate funding is the safer
option. In making these judgements, and considering the medium to long term, it may be helpful
to consider inflation indices as intermediate factors. Future inflation rates are uncertain, but the
operational profit margin of a business before financing costs can often be related to inflation
indices and so are the financing costs. Real interest rates relative to inflation tend to be smaller
and more stable than nominal interest rates. There is actually a growing market for inflation-
linked bonds and loans that secure this relationship and thus are attractive to both investors and
borrowers.

A similar approach should be used to recognise foreign exchange risk. Companies select a
reporting currency, ideally the currency in which most assets and liabilities are denominated, most
revenues and costs are incurred, and to which most shareholders are economically tied. When
companies were mostly domestic, the choice was obvious. Now that many companies are truly
international, there may be some doubt about the choice of a suitable reference currency. The
easiest reference currency is the one with which most shareholders are comfortable, because the
goal is to reduce risks for the shareholders. Thus if the majority of shareholders are based in the
UK the preferred reporting currency should be the pound sterling, even if most of the business is
conducted outside the UK; foreign exchange risks should be assessed on a sterling value and
hedges against sterling should be put in place if the risks are deemed excessive.

Commodity and indirect market risks due to competition are often called economic risks or
input/output risks rather than market risks, although many can be directly traced to market
factors such as exchange rates rather than to non-market factors such as innovation, technology
or regulation. They are terribly difficult to recognise and appreciate, but they are important.


Even purely domestically based companies may find themselves suddenly uncompetitive because
of a flood of cheap imports brought about by the weakening of some foreign currency relative to
their own domestic currency. It is difficult to appreciate these dangers in advance but critical to
develop some awareness of them rather than ignoring them. An outsider’s view may be
informative. Risk managers can take their cues from equity analysts and rating agencies. They are
experienced in detecting threats to individual companies and company sectors caused by possible
changes in market conditions.

III.A.1.7.3 Assessment
To be approximately correct as a whole is more important than to seek accuracy in some areas
and to ignore others. A key decision for assessing market risks in non-financial institutions is the
choice of time horizon. For example, some companies assess foreign exchange risks purely on
current payables and receivables, that is, to a horizon of perhaps three months. They argue that
only those ‘transactional’ foreign exchange risks can be assessed accurately and hedged
accordingly. A more fundamental question is whether the company is already exposed to
exchange-rate fluctuations beyond this horizon because, for instance, it will not be able to adjust
the price of its products and services within this time frame or it has long-term assets and
liabilities denominated in foreign currencies. The latter has been called ‘translation’ or
‘conversion’ risk. Note that, unlike transaction risk, translation risk has no impact on cash flows
so it is sometimes neglected. Similarly, longer-term transaction risks are sometimes ignored
because they are less immediate, less certain and less precise. But an approximate evaluation of
these further exposures is preferable to total ignorance. It will be a better basis for deciding not
only what hedges to implement in the short term but also what offsets could be taken in the long
term.

Long-term risk assessments extending to several years are indeed indispensable for deciding on
major investments and developing long-term strategic plans. Companies face multiple choices
that are risk-dependent such as where to locate a production facility, whether to outsource some
services, whether to invest in new ventures with payback periods of many years, or whether to
invest more now to maintain flexibility of choice at a later stage.

The assessment of long-term market risks and their potential impact on a firm therefore goes far
beyond the calculation of VaR as carried out in banks. It calls for the application of decision
analysis methods. A decision analysis cycle proceeds as follows. We construct a simple model of
the objective under scrutiny (e.g. maximising the value of the firm) as a function of a few main
market and other risk factors and for a base case strategy. Based on initial estimates of the risk
factors, we calculate a base case value of the objective. Next we explore the sensitivity of the


base case to changes in initial estimates (typically we consider variations in a (subjectively) realistic
range such as a 90% range) to identify the most significant sources of uncertainty as far as the
objective is concerned. We also design alternative strategies that might do better depending on
the evolution of the uncertain factors. At this stage we should understand what are the critical
decisions and the most significant risk factors that would influence the choice of strategy. The
following stage consists of introducing probability distributions to describe our state of
uncertainty about the significant risk factors, and we deduce probability distributions for the
objective value under alternative strategies. A choice of optimal strategy can then be attempted,
taking into consideration the risk attitude of stakeholders in the firm. But among the possible
choices there are often possibilities to acquire more information about some of the sources of
uncertainty or to refine the basic model in order to determine the best strategy with greater
accuracy and thereby to improve the objective. The decision analysis cycle should then be
repeated with the updated information until no more economically valuable information or
refinement can be found.
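
A stripped-down version of such a cycle is sketched below for a single market factor: the home-currency value of a foreign-currency revenue stream is evaluated for a 'no hedge' and a 'forward hedge' strategy, first at a base-case exchange rate, then over a sensitivity range, and finally over a probability distribution of rates. All inputs are hypothetical, and a real analysis would include several risk factors, competitor responses and the firm's risk attitude.

```python
import numpy as np

rng = np.random.default_rng(1)

revenue_fc = 50.0          # expected revenue in foreign currency (hypothetical)
costs_hc = 55.0            # costs in home currency (hypothetical)
base_fx = 1.20             # base-case home-currency value of one foreign unit
forward_fx = 1.19          # assumed forward rate available today

def firm_value(fx, hedged):
    """Operating result in home currency under the chosen hedging strategy."""
    rate = forward_fx if hedged else fx
    return revenue_fc * rate - costs_hc

# Base case and simple sensitivity over a 'realistic' 90% range of the FX rate
for fx in (1.05, base_fx, 1.35):
    print(f"fx={fx:.2f}: no hedge {firm_value(fx, False):6.2f}, hedged {firm_value(fx, True):6.2f}")

# Probabilistic stage: lognormal FX scenarios around the base case
fx_scenarios = base_fx * np.exp(rng.normal(0.0, 0.08, 200_000))
for hedged in (False, True):
    values = firm_value(fx_scenarios, hedged)
    label = "hedged  " if hedged else "no hedge"
    print(f"{label}: mean {np.mean(values):6.2f}, 5th percentile {np.percentile(values, 5):6.2f}")
```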

Note two essential points in this approach. First, the assessment of risks is carried out with the
specific objective of improving decisions. Risk assessment is intimately combined with risk
management. Second, market risk factors are combined with other sources of risks in this type
of analysis. For decision-making, it does not matter what labels are put on risks; it is the
combination of multiple risks and decisions – including responses from competitors – that is
significant. The role of market risk specialists will be to alert management to the existence of
certain risks and to contribute to the description of these risks. They cannot work in isolation;
one risk is often contingent on another. A company may acquire a foreign exchange exposure if
it wins a contract, but may not be sure to win, and even if it wins the foreign exchange exposure
may vary as a function of fluctuating demand. These uncertainties and the corresponding
decisions – pricing the bid, deciding on hedges, etc. – must be analysed simultaneously. Small
firms may lack the expertise to carry out this type of analysis but there is no shortage of
consultancy firms and financial intermediaries ready to help.

III.A.1.7.4 Control/Mitigation
The decision analysis method outlined in the previous section is particularly useful for making
major strategic choices over the medium to long term. For example, a chemical company
producing high-density polypropylene should consider whether naphtha or gas oil would be the
more economical feed. A decision analysis may reveal that, due to uncertainties in the future
costs of these two feeds, it is worthwhile to make the extra investment in a plant that can accept
both feeds. That is a form of long-term market risk management. Likewise, a Japanese car


manufacturer producing cars for the US market may decide to locate production facilities in the
USA to reduce foreign exchange risks rather than in a country with currently lower labour costs.

‘Physical’ long-term solutions limiting exposure to market risks are usually preferable to ‘financial’
hedges that could be considered as alternatives. For example, the chemical company could
consider building a plant suitable for one feed and, in principle, purchase an OTC option on the
excess cost of the second feed relative to the first. Likewise, the Japanese manufacturer could opt
for the country offering the lowest production and delivery cost into the US market and hedge
currency risks by entering into forward exchange contracts. But one should be aware of two
likely problems with long-term financial hedges: liquidity and cash flow. Many financial
derivatives markets are very deep, thus the Japanese manufacturer may find forward contracts in
sufficient sizes to cover exchange-rate risks for the entire economic life of its plant; on the other
hand, commodity derivatives are still relatively thin and it is very unlikely that the chemical
company could find an OTC option to cover its risk over more than a few months.

The cash-flow problem is linked to liquidity. Many financial derivatives are liquid only over a
relatively short term. When used to cover long-term exposures, positions in short-term
derivatives are stacked up and rolled over. At every rollover, expiring contracts must be settled;
between rollovers, margins must be posted. If unlucky, the hedger may accumulate large realised
losses on the short-term contracts against unrealised gains on the initial exposure. The cash-flow
problem thus created may prove fatal. The textbook case is MGRM, the US subsidiary of the
German company Metallgesellschaft. In 1993 MGRM had accumulated positions on 154 million
barrels of crude oil futures on the New York Mercantile Exchange (NYMEX) to hedge long-
term supply contracts of crude oil at fixed prices it had agreed with its customers. Unfortunately
for MGRM, crude oil prices started to decline and futures were in contango (higher prices than
spot), so that at each monthly rollover MGRM had to pay for the decline in prices of contracts it
had bought a month earlier. By the end of the year the board of Metallgesellschaft decided that
they could no longer afford to support the losses of their subsidiary. MGRM had made a
simplistic calculation resulting in an over-hedge and misjudged the rollover risks and the risk of
holding a very large proportion of the futures contracts (which led NYMEX to call for additional
margins), but, most importantly, they had underestimated the potential cash-flow problem
resulting from hedging 10-year exposures with short-term financial derivatives.

Thus, it is generally considered inappropriate to hedge translation risks or long-term transaction risks using derivatives. Translation risks relate to revaluation of foreign assets (such as
subsidiaries) and liabilities rather than to cash flows. Traditional hedging strategies with
derivatives would only be applicable if there is a plan to sell these assets or refinance liabilities in


a different currency, thus resulting in cash flows. Most firms prefer instead to hedge translation
risks by matching assets and liabilities in the same currency. For example, a foreign subsidiary
could be funded in the currency of that subsidiary rather than in the home currency. Hedging
economic exposures and long-term transaction exposures with derivatives can also be
problematic not only because of the mismatch between short-term and long-term cash flows but
also because of the uncertainties attached to these future cash flows and the difficulty in
predicting exactly how profits will be affected by a possible market movement. In such cases
operational solutions such as those mentioned earlier can be preferable. Financial derivatives
remain the choice instruments for hedging short-term transaction risks. If the right instruments
are not available on exchanges, firms will find many banks willing to offer tailor-made OTC
products.

But whether market risks in non-financial firms should be hedged at all remains an interesting
question. Some argue against hedging as follows:
(i) Short-term uncertainties will tend to average out naturally over the long term.
(ii) Hedging is costly, and in the long term it only contributes to banks’ profits.
(iii) Shareholders and investors know what the risks are; these risks are already priced in
the market or washed out by diversification.
(iv) If competitors do not hedge a certain risk, it would be unsafe to be too much out of
line: winning on the hedge might not be as favourable as losing would be damaging.

But others argue in favour of hedging that:


(i) Market risks in non-financial companies serve no useful purpose. They are not
chosen with the expectation of deriving a profit. Indeed, there is no risk premium in
the pricing of those market risks that are diversifiable (e.g. currency risks).
(ii) On the contrary, market risks create uncertainties in the performance of business
units and the firm in general. This makes the planning process more difficult,
obscures the true profitability of various activities and confuses the reward scheme.
(iii) If unnecessary risks are eliminated, results become more stable, the debt–equity ratio
can be increased and tax benefits can be reaped.31
(iv) Reduction in risk can also make the firm more attractive to other stakeholders such
as lenders, trade creditors, customers and employees (especially if they hold
executive stock and have poorly diversified portfolios). Greater stability of earnings

31 Research into the use of derivatives by non-financial corporations suggests that derivatives are more likely to be used by firms with greater leverage. Or, vice versa, firms that reduce their market risks can afford a higher leverage and a reduced cost of capital.


may mean that lower interest rates apply, trade terms are more advantageous, etc.
This is known as reducing the costs of financial distress.

Different firms may reach different conclusions. Hedging market risks may be less important for
large, diversified, internationally active firms than for smaller, more specialised firms. In large
firms, market risks may already be well diversified and treasury departments may have the
expertise to decide which risks are likely to be beneficial. In small firms, some market risks might
be crippling; most stakeholders would see them as unnecessary gambles that could be avoided if only a proper hedging strategy were put in place.

III.A.1.8 Summary
There is a pervasive view today that market risk management consists essentially in calculating a
value-at-risk. This introduction should help dispel this false impression. Instead, I hope the
reader will have realised that market risks, although relatively well understood, are still hidden in
many places, and any attempt to assess them is based on a large number of assumptions. And, of
course, market risk management does not stop at the assessment phase but should lead to control
and mitigation.

To start with the risk identification phase, one should not forget that there are market risks
hidden in illiquid assets and liabilities that are not evaluated at fair value. It is not because a firm
uses accrual accounting that these risks do not exist. That would be an ostrich-like, head-in-the-
sand attitude. Fortunately, the wider adoption of the new International Accounting Standards
favouring fair value and hedge accounting, the valuation of contingent claims (e.g. executive
share option schemes) and the recognition of assets and liabilities heretofore not affecting
reported company profits (e.g. company pension schemes) will help companies pay attention to
market risks. A short-term effect of these changes may be to increase the volatility of company
returns and make equity investments less attractive, but in the long term it will help better risk
management and should result in a more efficient allocation of resources.

The risk assessment phase is mathematical but relies on the choice of an objective and coherent
set of assumptions. The objective may be to assess the probability of insolvency within a year;
this is what banking supervisors are focusing on. But it may also be to assess some risk-adjusted
performance measure and improve resource allocation accordingly; or it could be any of a
number of other objectives such as developing contingency plans. To each objective corresponds
a reasonable set of assumptions, for instance, a choice of time horizon, whether normal or
extreme market conditions should be considered, whether portfolios should be assumed to be


static or dynamic, whether the business should be regarded as a going concern or whether some
assets should be valued on a fire-sale basis, and so on.

The risk control/mitigation phase follows logically from a choice of objective. Depending on the
business, there may also be a number of constraints, regulatory or otherwise, limiting the level of
acceptable market risk; some of these constraints may be biting. Nonetheless, there cannot be
any logical control/mitigation strategy without a clear objective. Fortunately, the implementation
of hedging and risk control strategies is now a lesser problem because of the existence of a deep,
liquid and efficient market in financial derivatives. There are few market risks that cannot be
adequately covered when there is a wish to do so. New hedging requirements create new
derivatives markets, as we see happening with telecommunications bandwidths and pollution
credits, to name just a couple of new commodity derivatives.

Market risk management is still a relatively new and growing field of expertise. To operate
efficiently, the market risk management function must be independent of risk-taking functions as
well as of the accounting and internal auditing functions, must be able to rely on adequate
resources, must communicate regularly with risk-taking departments and senior management and,
like internal audit, must have reporting lines through a general risk management function up to
the board of directors. The quality of risk management directly affects risk-adjusted performance
measures and, ultimately, shareholder value. As banking regulators remind us, ‘capital should not
be regarded as a substitute for addressing fundamentally inadequate control or risk management
processes’ (BCBS, 2004a, par. 723).

References
Alexander, C (2001) Market Models: A Guide to Financial Data Analysis. Chichester: Wiley.
BCBS (1996) ‘Amendment to the capital accord to incorporate market risks’ (January, modified
September 1997). Available at http://www.bis.org/publ/bcbs.htm
BCBS (2004a) ‘International convergence of capital measurements and capital standards’ (June).
Available at http://www.bis.org/publ/bcbs.htm
BCBS (2004b) ‘Principles for the management and supervision of interest rate risk’ (July).
Available at http://www.bis.org/publ/bcbs.htm
Black, F, and Jones, R (1987) ‘Simplifying portfolio insurance’, Journal of Portfolio Management, Fall, pp. 48–51.
Black, F, and Perold, A F (1992) ‘Theory of constant proportion portfolio insurance’, Journal of
Economic Dynamics and Control, 16, pp. 403–426.
Davis, M H A, and Norman, A R (1990) ‘Portfolio selection with transactions costs’, Mathematics
of Operations Research, 15, pp. 676–713.
Dybvig, P H (1988) ‘Inefficient dynamic portfolio strategies, or How to throw away a million dollars’, Review of Financial Studies, 1, pp. 67–88.


Hodges, S D, and Neuberger, A (1989) ‘Optimal replication of contingent claims under transaction costs’, Review of Futures Markets, 8, pp. 222–239.


III.A.2 Introduction to Value at Risk Models


Kevin Dowd and David Rowe1

III.A.2.1 Introduction
Value at risk (VaR) has been the subject of much criticism in recent years. Many of these
criticisms relate to important precautions as to how VaR results should be interpreted as well as
limitations on their use. Other criticisms, however, have been more sweeping, in some cases
dismissing the entire concept as misdirected and wrong-headed. In that context, it is useful to
consider how trading risk limits were determined before VaR became widely accepted.

Market risk arises from mismatched positions in a trading book that is marked to market daily
based on uncertain movements in prices, rates, volatilities and other relevant market parameters.
Market makers cannot operate successfully if they only broker exactly offsetting trades between
customers. To be successful, they need to stand ready to execute trades on demand, and this
inevitably results in open positions being created that are exposed to loss from adverse market
movements. These open positions are hedged in the short run with less than perfect offsets. A
common example is a dealer who executes an interest-rate swap with a customer in which he/she
receives fixed and pays floating and then hedges by shorting government bonds and investing the
proceeds in short-term instruments. There is still basis risk, since the spread between the swap
and bond rates may change, but the major exposure to loss from a general rise in rates has been
eliminated. In the longer term, the dealer will try to attract offsetting customer trades by shading
future quotes to make such offsets attractive to the market. Failing that, the dealer may execute
an offsetting swap with another dealer, although this is less desirable since it requires paying away
a bid or offer spread instead of earning the spread on a customer deal.

Given that running a market-making function inevitably gives rise to market risk, institutions
have always imposed restrictions on traders designed to limit the extent of such risk-taking. Until
the early 1990s these limits were in the form of restrictions on:
• the size of net open positions, including delta equivalent exposures to movements in underlying rates and prices;
• the degree of maturity mismatch in the net position;
• the permissible amount of negative gamma in option positions;
• exposure to changes in volatility.

1 Kevin Dowd is Professor of Financial Risk Management at Nottingham University Business School, UK, and David Rowe is Group Executive Vice President for Risk Management at SunGard Trading and Risk Systems in London.


These limits imposed a complex array of constraints on trader positions. They were difficult to
enforce effectively, not least because risk exposures, such as those based on option gammas,
often move quickly in response to market developments. The information and management
systems of the time also meant that these limits were enforced piecemeal, with all sorts of
inconsistencies and other undesirable results: ‘good’ risks were often passed over because they
ran into arbitrary risk limits, decisions were made with inadequate appreciation of the risks
involved, reducing risk in one area seldom allowed greater risk-taking elsewhere, and so forth.

Perhaps the most important shortcoming of this old system was the absence of integrated risk
management. There was little coherence between the structure or management of the limits and
the range of potential losses that they permitted to occur. Senior management committees
charged with approving such limits were often at the mercy of technicians, and even the
technicians were hard pressed to translate the limits into a consistent measure of risk, let alone
provide an effective system of integrated risk management.

Gradually a consensus arose that what was of fundamental interest to the institution was the
probability distribution of potential losses from traders’ positions, regardless of the exact
structure of those positions. From this realization was born the concept of value at risk. This
gave management a much more consistent way of embedding acceptable levels of risk into the
formal limits within which traders were required to operate. Naturally, there was still a heavy
dependence on market risk technicians to translate market dynamics and the traders’ positions
into estimates of the risks being taken, but VaR did allow a firm’s management to define limits
that reflected a well-considered risk appetite in a way that the pre-existing system did not. In that
sense, one of the most important contributions of VaR has been an improvement in the quality
of the management of risk at the firm-wide level.

III.A.2.2 Definition of VaR


VaR is an estimate of the loss from a fixed set of trading positions over a fixed time horizon that
would be equalled or exceeded with a specified probability. Several details of this definition are
worth emphasizing.
• VaR is an estimate, not a uniquely defined value. In particular, the value of any VaR
estimate will depend on the stochastic process that is assumed to drive the random
realisations of market data. The structure of the random process has to be identified
and the specific parameters of that process must be calibrated. This requires us to
resort to historical experience and raises a whole host of issues such as the length of
the historical sample to be used and whether more recent events should be weighted
more heavily than those further in the past. In essence, the goal is to arrive at the best


possible estimate of the stochastic process driving market data over the specific
calendar period to which the VaR estimate applies. Moreover, it is also clear that
market data are not generated by stable random processes. Differing methods for
dealing with the uncertainty surrounding changes in these random processes are at the
heart of why VaR estimates are not unique.
• The trading positions under review are fixed for the period in question. This raises difficult
questions when the evaluation period is long enough to make this assumption
unrealistic.2 In this instance it is most common to scale up a VaR estimate for a
shorter period on the assumption that market data move independently from day to
day. Otherwise, it is necessary to model trades that mature within the specified time
horizon and make behavioural assumptions relating to trading strategies during the
period.
• VaR does not address the distribution of potential losses on those rare occasions when the VaR
estimate is exceeded. It is never correct to refer to a VaR estimate as the ‘worst-case loss’.
Analysis of the magnitude of rare but extreme losses must invoke alternative tools
such as extreme value theory or simulations guided by historical worst-case market
moves.

The use of VaR involves two arbitrarily chosen parameters – the holding period and the confidence
level. The usual holding period is one day or one month, but institutions can also operate on other
holding periods (e.g., one quarter or more), depending on their investment and/or reporting
horizons. The holding period can also depend on the liquidity of the markets in which an
institution operates. Other things being equal, the ideal holding period appropriate in any given
market is the length of time it takes to ensure orderly liquidation of positions in that market. The
holding period may also be specified by regulation. For example, Basel Accord capital adequacy
rules stipulate that internal model estimates used to determine minimum regulatory capital for
market risk must reflect a time horizon of two weeks (i.e. 10 business days). The choice of
holding period can also depend on other factors:
• The assumption that the portfolio does not change over the holding period is more
easily defended with a shorter holding period.
• A short holding period is preferable for model validation or backtesting purposes:
reliable validation requires a large data set and a large data set requires a short holding
period.

2 One common example of this is the requirement to estimate VaR over a 10-day time horizon for purposes of
calculating regulatory capital for market risk under the Basel Capital Accord.


The choice of confidence level depends mainly on the purpose to which our risk measures are
being put. Thus, a very high confidence level, often as great as 99.97%, is appropriate if we are
using risk measures to set capital requirements and wish to achieve a low probability of
insolvency or a high credit rating. Indeed, the confidence levels required for these purposes can
be higher than those needed to meet regulatory capital requirements. On the other hand, for
backtesting and model validation, relatively lower confidence levels are desirable to get a
reasonable proportion of excess-loss observations. For limit-setting, most institutions prefer
confidence levels low enough that actual losses exceed the corresponding VaR estimate
somewhere between two and twelve times per year (implying a daily VaR confidence level of
95% to 99%). This forces policy committees to take the size of the limit seriously, since losses
over that limit can occur with a reasonable likelihood.
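
The arithmetic behind that range is simply the expected number of exceedances, roughly 250 × α for about 250 trading days per year, as the short sketch below shows.

```python
trading_days = 250  # approximate number of trading days in a year

for confidence in (0.95, 0.96, 0.97, 0.98, 0.99):
    alpha = 1 - confidence
    expected_exceedances = trading_days * alpha
    print(f"{confidence:.0%} daily VaR -> about {expected_exceedances:.1f} exceedances per year")
```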

For the above reasons, among others, the ‘best’ choice for these parameters depends on the
context. What is important is that the choices be clear in every context and be thoroughly
understood throughout the institution so that limit-setting and other risk-related decisions are
made in light of this common understanding.

III.A.2.3 Internal Models for Market Risk Capital


The original Basel Capital Accord was put into effect at the beginning of 1988. It set down rules
for calculating minimum regulatory capital for banks based on a simple set of multipliers applied
to credit-risky assets. The minimum capital calculation did not reflect risks associated with a
bank’s mark-to-market trading activities, which were still quite small.

However, market risk factors were becoming more important constituents of the risk profile of
most major money centre banks, and in the 1990s the Basel Accord was amended to reflect
banks’ exposure to market risk. In a significant departure from traditional conventions, the new
Amendment also allowed banks to employ their own internal VaR models to calculate their
minimum regulatory capital for market risk. This permission was conditional on several
requirements:
• The models and their surrounding technical and organisational infrastructure had to be reviewed and approved by the bank’s supervisor.
• The model used for calculating regulatory capital had to be the same one used for day-to-day internal risk management (the so-called ‘use test’).
• The VaR confidence level used in the regulatory capital calculation had to be 99%.
• The time horizon for the regulatory capital calculation had to be two weeks (i.e. 10 business days).


Assuming the above conditions were met, the capital requirement was to be at a level at least 3.0
times the 10-day VaR estimate averaged over the last 60 business days for any reporting period.
Supervisors retained the prerogative of applying a larger multiplier if the results of backtesting
exercises suggested that internal models were generating insufficiently high VaR estimates.

The extended holding period presented some questions. How should positions that matured
during this time horizon be handled? What about trades likely to be booked to correct for the
erosion of initial hedges due to ageing of the portfolio?

The amended Accord also offered banks a solution to many of these problems, which almost all
of them adopted. This was to apply the ‘square root of time’ rule. That is, banks obtained 10-day
VaR estimates by multiplying the daily VaRs by √10 ≈ 3.16228. This procedure effectively says
that if traders took the same level of risk as indicated by the one-day VaR estimate for 10
consecutive days, the 10-day VaR estimate would be 3.16228 times the daily VaR. This rule is
based on the formula for the distribution of the sum of random variables, assuming that daily
returns are independent of each other (see Chapter III.A.3). Assuming a static portfolio for one
day avoids most of the complications described above for longer time horizons. Moreover, in so
far as even a one-day static portfolio is unrealistic, the consequences of this assumption will
(hopefully!) be evident from the backtests on the VaR model that are described in Section
III.A.2.8.

III.A.2.4 Analytical VaR Models


The assumption that holding period returns (i.e. h-day relative changes in value) are normally
distributed provides us with a straightforward formula for value at risk. If our h-day returns R are
normally distributed with mean µ and standard deviation σ, we write
R ~ N(µ, σ²) (III.A.2.1)

as in Section II.E.4.4.1. Now if the portfolio is currently worth S, our h-day VaR at the
confidence level 100(1 – α)% is given by
VaR h,α = –xα S (III.A.2.2)

where xα is the lower α percentile of the distribution N(µ, σ²). That is, xα is the number such that the probability that R < xα is α (see Section II.E.4.4.3). Since we require a fairly high degree of confidence, α is small (normally 0 < α < 0.1). Thus xα will typically be negative. In fact, using the standard normal transformation (II.E.28) we can write Zα = (xα – µ)/σ. In other words,
xα = Zα σ + µ. (III.A.2.3)


where Zα is the lower α percentile of the standard normal distribution. This can be obtained from standard statistical tables or from spreadsheet functions, such as the NORMSINV function in Excel. For instance, typing ‘=NORMSINV(0.05)’ into Excel gives the value –1.64485 for Z0.05.

Putting together (III.A.2.2) and (III.A.2.3), we have derived the following simple analytic formula
for VaR that is valid under assumption (III.A.2.1):
VaR h,α = –(Zα σ + µ)S. (III.A.2.4)

Estimating VaR at a given probability using the normal distribution is very easy, once we have an
estimate of the mean and standard deviation, as the following example shows.

Example III.A.2.1: Analytic VaR calculation


Suppose we are interested in the normal VaR at the 95% confidence level and a holding period of
1 day and we estimate µ and σ over this horizon to be 0.005 and 0.02, respectively. Now (III.A.2.4) tells us that for a portfolio worth $1 million,
VaR 1,0.05 = –(0.005 – 1.64485 × 0.02) × $1 million = $27,897.

Note that the higher the confidence level, the greater the VaR. For instance, if we were interested in the corresponding VaR at the 99% confidence level, our VaR would be3
VaR 1,0.01 = –(0.005 – 2.32634 × 0.02) × $1 million = $41,527.

Applying the square root of time rule (explained in Chapter III.A.3), the corresponding VaRs
over a 10-day holding period are:
VaR 10,0.05 = √10 × VaR 1,0.05 ≈ 3.16228 × $27,897 = $88,218,
VaR 10,0.01 = √10 × VaR 1,0.01 ≈ 3.16228 × $41,527 = $131,320.
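A minimal sketch of these calculations, using the inverse normal CDF from the Python standard library (the helper function is ours, not part of the Handbook):

```python
from statistics import NormalDist

def analytic_var(value, mu, sigma, alpha):
    """Normal VaR of equation (III.A.2.4): VaR = -(Z_alpha*sigma + mu)*S."""
    z_alpha = NormalDist().inv_cdf(alpha)    # e.g. -1.64485 for alpha = 0.05
    return -(z_alpha * sigma + mu) * value

S, mu, sigma = 1_000_000, 0.005, 0.02
var_95 = analytic_var(S, mu, sigma, 0.05)    # ~27,897
var_99 = analytic_var(S, mu, sigma, 0.01)    # ~41,527

# Square-root-of-time scaling to a 10-day horizon
print(round(var_95), round(10 ** 0.5 * var_95))   # ~27,897 and ~88,218
print(round(var_99), round(10 ** 0.5 * var_99))   # ~41,527 and ~131,321 (close to $131,320 above)
```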

Analytical approaches provide the simplest and most easily implemented methods to estimate
VaR. They rely on parameter estimates based on market data histories that can be obtained from
commercial suppliers or gathered internally as part of the daily mark-to-market process. For
active markets, vendors such as RiskMetrics™ supply updated estimates of the volatility and
correlation parameters themselves.

But while simple and practical as rough approximations, analytic VaR estimates also have
shortcomings. Perhaps the most important of these is that many parametric VaR applications are
based on the assumption that market data changes are normally distributed, and this assumption

3 ‘=NORMSINV(0.01)’ in Excel gives –2.32634.


is seldom correct in practice. Assuming normality when our data are heavy-tailed can lead to
major errors in our estimates of VaR. VaR will be underestimated at relatively high confidence
levels and overestimated at relatively low confidence levels. Further discussion on this will be
given in Chapter III.A.3.
But analytical approaches can also be unreliable for other reasons:
• Market value sensitivities often are not stable as market conditions change. Since VaR is often based on fairly rare, and hence fairly large, changes in market conditions, even modest instability of the value sensitivities can result in major distortions in the VaR estimate. Such distortions are magnified when options are a significant component of the positions being evaluated, since market value sensitivities are especially unstable in that situation.
• Analytic VaR is particularly inappropriate when there are discontinuous payoffs in the portfolio. This is typical of transactions like range floaters and certain types of barrier options.

In summary, analytic approaches provide a reasonable starting point for deriving VaR estimates,
but should not be pushed too hard. They may be acceptable on a long-term basis if the risks
involved are small relative to a firm’s total capital or aggregate risk appetite, but as the magnitude
of risk increases, and as positions become more complex, and especially more nonlinear, more
sophisticated approaches are necessary to provide reliable VaR estimates.

III.A.2.5 Monte Carlo Simulation VaR


Fortunately, many problems that cannot be handled by analytical methods are quite amenable to
simulation methods. For example, we might have stochastic processes that exhibit jumps or
certain types of heavy tails that do not allow an analytical solution for our VaR, or the values of
the instruments in our portfolio might be ‘complicated’ functions of otherwise straightforward
risk factors, as is often the case with exotic options. Alternatively, our portfolio might be a
collection of heterogeneous instruments, whose payoffs interact in ways that cannot be handled
using analytical methods. And there again, we might have simple positions that can be handled
using analytical methods, but are better handled using simulation. A good case in point would be
a portfolio of long straddles: these options are simple, but their maximum loss occurs when there
is no market movement at all. Although the VaR of such a position can be obtained using
analytical methods, we have to be careful which analytical methods we apply. For example, delta-
gamma methods, which are often used for options VaR, can be very treacherous when applied to
such positions because they assume that the maximum loss occurs when underlying variables
exhibit large moves. (These methods are discussed in Section III.A.2.7.6.) In such cases, we
might prefer to use simulation methods, because we know they are reliable.


In these and similar circumstances, the most natural approach is to use Monte Carlo simulation,
which is a very powerful method that is tailor-made for ‘complex’ or ‘difficult’ problems. The
essence of this approach is first to define the problem – specify the random processes for the risk
factors of the portfolio, the ways in which they affect our portfolio, and so forth – and then
simulate a large number of possible outcomes based on these assumptions. Each simulation ‘trial’
leads to a possible profit/loss (P/L). If we simulate enough trials, we can then produce a
simulated density for our P/L, and we can read off the VaR as a lower percentile of this density.
In the following when the context is clear we drop the notation for the dependence of VaR on holding period h and confidence level 100(1 – α)%, writing simply ‘VaR’ for VaR h,α.

III.A.2.5.1 Methodology
To illustrate, suppose we wish to carry out a Monte Carlo analysis of a stock price S, and we
assume that S follows a geometric Brownian motion process:
dS/S = µ dt + σ dW (III.A.2.5)

where µ is its expected (per unit time) rate of return and σ is the spot volatility of the stock price. dW is known as a Wiener process, and can be written as dW = ε√dt, where ε is a drawing from a standard normal distribution. Substituting for dW, we get
dS/S = µ dt + σε√dt.

This is the standard stock-price model used in quantitative finance. The (instantaneous) rate of change in the stock price dS/S evolves according to its drift term µ dt and realisations from the random term ε. In practice, we would often work with this model in its discrete-form equivalent. If Δt is some small time increment, we approximate (III.A.2.5) by

ΔS/S = µΔt + σε√Δt (III.A.2.6)


where ΔS is the change in the stock price over the time interval Δt, and ΔS/S is its (discretised) rate of change.

Note that (III.A.2.6) assumes that the rate of change of the stock price is normally distributed with mean µΔt and standard deviation σ√Δt. Hence our criticisms of analytic VaR with respect to the normality assumption will also apply to the Monte Carlo VaR methodology, unless we employ an assumption for the underlying dynamics that is more appropriate than geometric Brownian motion with constant volatility (III.A.2.5).
Now suppose that we wish to simulate the stock price over some period of length T. We would usually divide T into a large number N of small time increments Δt (i.e. we set Δt = T/N). We


take a starting value of S, say S(0), and draw a random value of ε to update S using (III.A.2.6);
this gives the change in the stock price over the first time increment, and we repeat the process
again and again until we have changes in the stock price over all N increments. At this point, we
have simulated the path of the stock price over the whole period T. We can then repeat the
exercise many times and produce as many simulated price paths as we wish.

Some illustrative simulated price paths are shown in Figure III.A.2.1. We assume here that the
starting value of our stock price, S(0), equals 1, so each path starts from the 1 on the y-axis.
Thereafter the paths typically diverge, moving randomly in accordance with their ‘laws of motion’
as given in the above equations. Moreover, since µ is assumed to be positive, there is a tendency
for the stock prices to ‘drift’ upwards. The degree of dispersion of the simulated stock prices –
the extent to which they move away from each other over time – is governed by the volatility σ. The bigger is σ, the more dispersed the stock prices will be at any point in the simulation. Note,
too, that the simulated terminal stock prices will tend to approach the ‘true’ distribution of
terminal stock prices as the number of draws grows larger. Even in this figure, which has only a
limited number of paths, we can see that most of the terminal values are clustered around a
central value, with relatively few in the tails. If we want to obtain a simulated terminal distribution
which is close to the true distribution, all we need to do is carry out a large number of simulation
trials. The larger the number of trials, the closer is the simulated terminal distribution to the true
terminal distribution.


Figure III.A.2.1: Some simulated stock price paths

Note: Based on 15 Monte Carlo trials using parameters µ = 0.05, σ = 0.10, T = 1 and 30 step increments.

If we wish, we can estimate the VaR of the stock price by simulating a large number of terminal
stock prices S(T). We then read the VaR from the histogram of S(T) values so generated. To
illustrate, Figure III.A.2.2 shows the histogram of simulated S(T) values from 10,000 simulation
trials, using the same stock price parameters as in Figure III.A.2.1.

The shape of the histogram is close to a lognormal – which it should be, as the stock price is
assumed to be lognormally distributed. The figure also shows the 5th percentile of the simulated
stock price histogram. This percentile is equal to 0.420, indicating that there is a 5% probability
that the initial stock price (of 1) could fall to 0.420 or less over the period, given the parameters
assumed. A terminal stock price of 0.420 corresponds to a loss equal to 1 – 0.420 = 0.580, so we
can say that the VaR at the 95% confidence level is 0.580. This example illustrates how easy it is
to estimate VaR using Monte Carlo simulation.
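A minimal sketch of this procedure, assuming the discretised geometric Brownian motion (III.A.2.6) with constant parameters (the function and its default inputs are illustrative only, and the exact output depends on the parameters and random seed):

```python
import math
import random

def mc_terminal_var(s0=1.0, mu=0.05, sigma=0.10, T=1.0,
                    n_steps=30, n_trials=10_000, alpha=0.05, seed=1):
    """Simulate terminal prices under the discretised GBM (III.A.2.6) and read
    the VaR off the lower alpha percentile of the simulated terminal values."""
    random.seed(seed)
    dt = T / n_steps
    terminal = []
    for _ in range(n_trials):
        s = s0
        for _ in range(n_steps):
            eps = random.gauss(0.0, 1.0)
            s += s * (mu * dt + sigma * eps * math.sqrt(dt))   # dS = S*(mu*dt + sigma*eps*sqrt(dt))
        terminal.append(s)
    terminal.sort()
    percentile = terminal[int(alpha * n_trials)]   # lower alpha percentile of S(T)
    return s0 - percentile                         # loss relative to the starting value

print(mc_terminal_var())
```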

In addition, Monte Carlo simulation can easily handle problems with more than one random risk
factor (see Section II.D.4.2).


Figure III.A.2.2: Histogram of simulated terminal stock prices

III.A.2.5.2 Applications of Monte Carlo simulation


Monte Carlo methods have many applications in market risk measurement and would be the
preferred method in almost any ‘complex’ risk problem. Examples of such problems include the
following, among many other possibilities:
• We might be dealing with underlying risk factors that are ‘badly behaved’ in some way (e.g. because they jump or show heavy tails), or we might have a mixture of heterogeneous risk factors. For example, we might have credit-related risk factors as well as normal market risk factors, and the credit risk factors cannot be modelled as normal.
• We might have a portfolio of options. In such cases, the value of the portfolio is a nonlinear (or otherwise difficult) function of underlying risk factors, and might be impossible to handle using analytical methods even if the risk factors are themselves ‘well behaved’.
• We might be dealing with instruments with complicated risk factors, such as mortgages, credit derivatives, and so forth.
• We might have a portfolio of heterogeneous instruments, the heterogeneity of which prevents us from applying an analytical approach. For example, our portfolio might be a collection of equities, bonds, foreign exchange options, and so forth.


III.A.2.5.3 Advantages and Disadvantages of Monte Carlo VaR


Monte Carlo simulation has many advantages over analytical approaches to calculating VaR:
• It can capture a wider range of market behaviour.
• It can deal effectively with nonlinear and path dependent payoffs, including the payoffs to very complicated financial instruments.
• It can capture risk that arises from scenarios that do not involve extreme market moves.
• Conversely, it can provide detailed insight into the impact of extreme scenarios that lie well out in the tails of the distributions, beyond the usual VaR cutoff.
• It lends itself easily to evaluating specific scenarios that are deemed worrisome based on geopolitical or other hard-to-quantify considerations.

The biggest drawbacks to the Monte Carlo approach to VaR estimation are that it is computer
intensive and it requires great care to be sure all the details of the calculation are executed
correctly. Non-technicians also find the process of imposing historically consistent characteristics
on the scenarios to be quite impenetrable. Hence Monte Carlo VaR estimates are often viewed as
coming from a black box whose credibility rests solely on the reputation of the technicians
responsible for producing them. Nevertheless, Monte Carlo simulation is the most widely used
approach to VaR estimation, and its popularity is likely to grow further as computers become
more powerful and simulation software becomes more user-friendly. For large sophisticated
trading operations, the only other widely used approach is historical simulation, to which we now
turn.

III.A.2.6 Historical Simulation VaR


Historical simulation is a very different approach to VaR estimation. The idea here is that we
estimate VaR without making strong assumptions about the distribution of returns. We try to let
the data speak for themselves as much as possible and use the recent empirical return distribution
– not some assumed theoretical distribution – to estimate our VaR. This type of approach is
based on the underlying assumption that the near future will be sufficiently like the recent past
that we can use the data from the recent past to estimate risks over the near future – and this
assumption may or may not be valid in any given context.

III.A.2.6.1 The Basic Method


In applying basic historical simulation, we first construct a hypothetical P/L series for our current
portfolio over a specified historical period. This requires a set of historical P/L or return
observations on the positions currently held. These P/Ls or returns will be measured over a
standard time interval (e.g. a day) and we want a reasonably large set of historical observations


over the recent past. Suppose we have a portfolio of n assets, and for each asset i we have the
observed return for each of T intervals in our historical sample period. (Our ‘portfolio’ could
equally well include a collection of liabilities and/or instruments such as swaps, but we talk of
assets for convenience.) If ri,t is the return on asset i in sub-period t, and if Ai is the amount
currently invested in asset i, then the simulated P/L of our current portfolio in sub-period t is:
(P/L)t = Σi=1,…,n Ai ri,t.

Calculating this for all t gives us the hypothetical P/L for our current portfolio throughout our
historical sample. This series will not be the same as the P/L actually earned on our portfolio in
each of those periods because the portfolio actually held in each historical period will virtually
never match our current positions.

Having obtained our hypothetical P/L data set, we can estimate VaR by plotting the data on a
simple histogram and then reading off the appropriate percentile. To illustrate, suppose we have
1000 hypothetical daily observations in our P/L series (approximately four years of data for
about 250 business days per year) and we plot the histogram shown in Figure III.A.2.3. If we take
our VaR confidence level to be 95%, our VaR is given implicitly by the x-value that cuts off the
bottom 5% of worst P/L outcomes from the rest of the distribution. In this particular case, this
x-value (or the 5th percentile point of the P/L histogram) is –1.604. The VaR at the 95%
probability is the negative of this percentile, and is therefore 1.604.

Figure III.A.2.3: Historical simulation VaR
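A minimal sketch of the basic method, with an illustrative helper function and toy data (a real application would use a much longer sample and a more careful percentile convention):

```python
def historical_simulation_var(returns_by_asset, amounts, alpha=0.05):
    """Basic historical simulation VaR.

    returns_by_asset: one return series per asset (r[i][t] = return on asset i
                      in historical sub-period t).
    amounts:          amount currently invested in each asset (A_i).
    """
    T = len(returns_by_asset[0])
    # Hypothetical P/L of the *current* portfolio in each historical sub-period
    pnl = [sum(A * r[t] for A, r in zip(amounts, returns_by_asset))
           for t in range(T)]
    pnl.sort()
    # VaR is the negative of the lower alpha percentile of the simulated P/L
    return -pnl[int(alpha * T)]

# Toy example: two assets over five historical periods (illustrative numbers only)
rets = [[0.010, -0.020, 0.005, -0.015, 0.012],
        [-0.005, 0.010, -0.020, 0.007, -0.010]]
print(historical_simulation_var(rets, [600_000, 400_000], alpha=0.2))   # 6200.0
```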


III.A.2.6.2 Weighted historical simulation


One of the most important features of basic historical simulation is the way it weights past
observations. Our historical simulation P/L series is constructed in a way that gives any
observation the same weight on P/L provided it is less than n periods old, and a zero weight if it is
older than that. However, a problem with this is that it is hard to justify giving each observation
in our sample period the same weight, regardless of age, market volatility, or anything else. For
example, it is well known that natural gas prices are usually more volatile in the winter than in the
summer, so a raw historical simulation approach that incorporates both summer and winter
observations will tend to average the summer and winter P/L values together. As a result,
treating all observations as having equal weight will tend to underestimate true risks in the winter,
and overestimate them in the summer (see Shimko et al., 1998). This weighting structure also
creates the potential for ghost effects – we can have a VaR that is unduly high (low) because of a
short period of high (low) volatility, and this VaR will continue to be high (low) until n days or so
have passed and the observations have fallen out of the sample period. At that point, the VaR
will fall (rise) again, but the fall (rise) in VaR is only a ghost effect created by the weighting
structure and the length of sample period used. More detailed discussion of this point is left to
Chapter III.A.3.

We can ameliorate these problems by suitably weighting our observations. In the natural gas case
just considered, we might give the winter observations a higher weight than summer observations
if we are estimating a winter VaR, and vice versa for a summer VaR.

Alternatively, we might believe that newer observations in our sample are more informative than
older ones, and in this case we might age-weight our data so that older observations in our
historical simulation sample have a smaller weight than more recent ones (see Boudoukh et al.,
1998).4 To implement an ‘age-weighted’ historical simulation, we begin by ordering our returns,
worst return first. We then note the age of each return observation, and calculate a suitable age-
related weight for each return. A good way to do so is to use an exponentially weighted moving average
(EWMA). We choose a decay parameter λ, which indicates how much each observation’s weight
decays from one day to the next. Again, a more detailed discussion of this methodology is left to
Chapter III.A.3.

A worked example is provided in Table III.A.2.1 and in the Workbook entitled Age-Weighted
Historical Simulation. The first column gives the ordered returns, and the second gives the age of
the corresponding return observation in days. For illustration only we assume an unrealistically

4 This approach takes account of the loss of information associated with older data and is easy to implement. However, it can aggravate the problem of limited data in the tails of the distribution.


small sample of only 150 observations. Basic historical simulation gives each return a weight of
0.00667, implying cumulative weights of 0.00667, 0.01333, 0.02000, etc. The historical simulation
confidence level is 1 minus the cumulative weight plus 0.00667, and the VaR is the negative of
the relevant observation. So, for example, in this case the historical simulation VaR at the 95%
confidence level is 2.530% of the portfolio size, the point half way between the eighth and ninth
worst simulated losses. However, if we apply exponential weighting using λ = 0.97, we get the
weights given in the column headed ‘AW weight’. These are rather different from the historical
simulation equal weights, and give more recent observations higher weights. The cumulative
weights given in the next column are 0.02684, 0.04057, and so on. The rest of the analysis then
proceeds as before. In this case, the EWMA weighted historical simulation VaR at the 95%
confidence level is 2.659% of the portfolio value, falling between the fourth and fifth worst
simulated losses. The effect of exponential weighting in this case is to raise the estimated VaR
since some of the largest simulated losses occur relatively recently in the sample period.

Table III.A.2.1 also shows the same analysis conducted 25 days later. To illustrate the point, we
have made the simplifying assumption that the positions are the same as on the first day so that
all the simulated historical P/L values for any given calendar day are unchanged. We also assume
that there are no large losses in the intervening 25 days, so that the worst simulated losses are the
same as those recorded in the analysis at the initial date. Now, however, these observations have
aged and their weights are lower, reflecting the observations’ greater age. Consequently, the
cumulative weights are also lower and the impact is to increase the ‘effective’ confidence level for
any given return observation. This, in turn, leads to a lower VaR for any given confidence level.
In this case, the VaR at the 95% confidence level is 2.503%, a value interpolated between the
ninth and tenth worst simulated losses. By contrast, the historical simulation VaR has remained
unchanged because the historical simulation weights are unaltered.

Table III.A.2.1: Age-weighted historical simulation


Analysis at Initial Date
Basic historical simulation
Ordered daily return Age HS weight HS cum. weight cl VaR at chosen cl 95% VaR
-3.50% 5 0.00667 0.00000 1.00000 3.500%
-3.00% 27 0.00667 0.00667 0.99333 3.000%
-2.80% 55 0.00667 0.01333 0.98667 2.800%
-2.70% 65 0.00667 0.02000 0.98000 2.700%
-2.65% 30 0.00667 0.02667 0.97333 2.650%
-2.60% 50 0.00667 0.03333 0.96667 2.600%
-2.57% 45 0.00667 0.04000 0.96000 2.570%
-2.55% 10 0.00667 0.04667 0.95333 2.550%
-2.51% 6 0.00667 0.05333 0.94667 2.510% 2.530%
-2.48% 17 0.00667 0.06000 0.94000 2.480%
-2.45% 24 0.00667 0.06667 0.93333 2.450%


Age-weighted historical simulation


AW weight AW cum. weight cl VaR at chosen cl 95% VaR
0.02684 0.00000 1.00000 3.500%
0.01373 0.02684 0.97316 3.000%
0.00585 0.04057 0.95943 2.800%
0.00432 0.04642 0.95358 2.700%
0.01253 0.05074 0.94926 2.650% 2.659%
0.00681 0.06327 0.93673 2.600%
0.00794 0.07008 0.92992 2.570%
0.02305 0.07802 0.92198 2.550%
0.02603 0.10107 0.89893 2.510%
0.01862 0.12710 0.87290 2.480%
0.01505 0.14572 0.85428 2.450%

Analysis 25 Days Later


Basic historical simulation
Ordered daily return Age HS weight HS cum. weight cl VaR at chosen cl 95% VaR
-3.50% 30 0.00667 0.00000 1.00000 3.500%
-3.00% 52 0.00667 0.00667 0.99333 3.000%
-2.80% 80 0.00667 0.01333 0.98667 2.800%
-2.70% 90 0.00667 0.02000 0.98000 2.700%
-2.65% 55 0.00667 0.02667 0.97333 2.650%
-2.60% 75 0.00667 0.03333 0.96667 2.600%
-2.57% 70 0.00667 0.04000 0.96000 2.570%
-2.55% 35 0.00667 0.04667 0.95333 2.550%
-2.51% 31 0.00667 0.05333 0.94667 2.510% 2.530%
-2.48% 42 0.00667 0.06000 0.94000 2.480%
-2.45% 49 0.00667 0.06667 0.93333 2.450%

Age-weighted historical simulation


AW weight AW cum. weight cl VaR at chosen cl 95% VaR
0.01253 0.00000 1.00000 3.500%
0.00641 0.01253 0.98747 3.000%
0.00273 0.01894 0.98106 2.800%
0.00202 0.02168 0.97832 2.700%
0.00585 0.02369 0.97631 2.650%
0.00318 0.02954 0.97046 2.600%
0.00371 0.03273 0.96727 2.570%
0.01076 0.03643 0.96357 2.550%
0.01216 0.04719 0.95281 2.510%
0.00870 0.05935 0.94065 2.480% 2.503%
0.00703 0.06805 0.93195 2.450%
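A sketch of the age-weighted calculation behind Table III.A.2.1, assuming the exponential weighting scheme of Boudoukh et al. (which matches the ‘AW weight’ column above) and linear interpolation between the two observations whose confidence levels straddle 95%:

```python
def age_weighted_var(worst_returns, ages, lam=0.97, sample_size=150, confidence=0.95):
    """Age-weighted historical simulation VaR in the spirit of Table III.A.2.1.

    worst_returns: the simulated daily returns, sorted worst first.
    ages:          age in days of each of those observations.
    """
    # EWMA weight of an observation of age a, normalised over the full sample
    w = [(1 - lam) * lam ** (a - 1) / (1 - lam ** sample_size) for a in ages]
    cls, losses, cum = [], [], 0.0
    for ret, wi in zip(worst_returns, w):
        cls.append(1.0 - cum)        # 'cl' column: 1 minus cumulative weight of worse losses
        losses.append(-ret)
        cum += wi
    # Interpolate between the two observations whose cl values straddle the confidence level
    for i in range(len(cls) - 1):
        if cls[i] >= confidence >= cls[i + 1]:
            frac = (cls[i] - confidence) / (cls[i] - cls[i + 1])
            return losses[i] + frac * (losses[i + 1] - losses[i])
    return losses[-1]

# The eleven worst returns and their ages from the 'initial date' panel above
rets = [-0.0350, -0.0300, -0.0280, -0.0270, -0.0265, -0.0260,
        -0.0257, -0.0255, -0.0251, -0.0248, -0.0245]
ages = [5, 27, 55, 65, 30, 50, 45, 10, 6, 17, 24]
print(age_weighted_var(rets, ages))   # ~0.0266, i.e. about 2.66% of portfolio value
```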

If we are concerned about changing volatilities, we can also weight our data by contemporaneous
volatility estimates. The key idea – suggested by Hull and White (1998) – is to update return
information to take account of changes in volatility. So, for example, if the current volatility in a
market is 2% a day, and it was only 1% a day a month ago, then data a month old understate the
changes we can expect to see tomorrow. On the other hand, if last month’s volatility was 1% a
day but current volatility is 0.5% a day, month-old data will overstate the changes we can expect
tomorrow.
Now suppose we are interested in forecasting VaR for day T. Again let ri,t be the historical return
in asset i on day t in our historical sample, σi,t be a forecast of the volatility of the return on asset i


for day t, made at the end of day t – 1, and σi,T be our most recent forecast of the volatility of asset i. We then replace the returns in our data set, ri,t, with volatility-adjusted returns, given by
r*i,t = ri,t (σi,T / σi,t).

Actual returns in any period t are therefore increased (decreased), depending on whether the
current forecast of volatility is greater (smaller) than the estimated volatility for period t. We now
calculate the historical simulation P/L as explained in Section III.A.2.6.1, but with
r*i,t substituted in place of the original data set ri,t.

The calculations involved are illustrated in Table III.A.2.2 and in the Workbook entitled Vol-
weighted Historical Simulation. Using the same returns as in Table III.A.2.1, this table shows two
cases. In the first case, current daily volatility is 1.4%, generally above the range of
contemporaneous daily volatilities over the historical dates with the worst losses. Most of the
volatility weights are therefore greater than 1.0 and the volatility-adjusted changes in the portfolio
are correspondingly greater (in absolute value) than the simulated historical changes. The
confidence levels are the same as before. In this case, the effect of the volatility weighting is to
increase the ‘effective’ changes and thus to increase the estimated VaR. In the second case, we
have the same contemporaneous volatilities but a current volatility of only 0.8%. The volatility
weights are now generally less than 1.0, and the ‘effective’ or volatility-adjusted changes are
correspondingly reduced as is the estimated VaR. Note that the volatility weighting will generally
alter the specific historical dates corresponding to the critical confidence level depending on the
pattern of contemporaneous volatility. These dates are invariant, however, to the value of
current volatility.
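The volatility adjustment itself can be sketched as follows, using the figures from the first panel of Table III.A.2.2 below; the VaR is then read from the adjusted returns exactly as in Section III.A.2.6.1:

```python
def vol_adjusted_returns(returns, contemp_vols, current_vol):
    """Hull-White adjustment: r*_t = r_t * (sigma_current / sigma_t)."""
    return [r * current_vol / v for r, v in zip(returns, contemp_vols)]

# Worst simulated returns and their contemporaneous daily volatilities (Table III.A.2.2)
rets = [-0.0350, -0.0270, -0.0280, -0.0251, -0.0260, -0.0255,
        -0.0248, -0.0257, -0.0300, -0.0245, -0.0265]
vols = [0.0082, 0.0080, 0.0090, 0.0085, 0.0095, 0.0100,
        0.0105, 0.0110, 0.0130, 0.0125, 0.0150]

high = vol_adjusted_returns(rets, vols, current_vol=0.014)   # current vol above most past vols
low = vol_adjusted_returns(rets, vols, current_vol=0.008)    # current vol below most past vols
print(round(high[0], 4), round(low[0], 4))   # -0.0598 and -0.0341, as in the table
```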


Table III.A.2.2: Volatility weighted vs. equal weighted historical simulation


Current volatility generally above contemporaneous historical volatility
Basic historical simulation

Ordered daily return   Cumulative weight   cl   VaR at chosen cl   95% VaR
-3.50% 0.00000 1.00000 3.50%
-3.00% 0.00667 0.99333 3.00%
-2.80% 0.01333 0.98667 2.80%
-2.70% 0.02000 0.98000 2.70%
-2.65% 0.02667 0.97333 2.65%
-2.60% 0.03333 0.96667 2.60%
-2.57% 0.04000 0.96000 2.57%
-2.55% 0.04667 0.95333 2.55%
-2.51% 0.05333 0.94667 2.51% 2.53%
-2.48% 0.06000 0.94000 2.48%
-2.45% 0.06667 0.93333 2.45%

Vol-weighted historical simulation

Ordered daily return   Contemp. volatility   Current vol   Vol weight   Vol-adjusted return   cl   VaR at chosen cl   95% VaR
-3.50% 0.82% 1.40% 1.7073 -5.98% 1.00000 5.98%
-2.70% 0.80% 1.40% 1.7500 -4.73% 0.99333 4.73%
-2.80% 0.90% 1.40% 1.5556 -4.36% 0.98667 4.36%
-2.51% 0.85% 1.40% 1.6471 -4.13% 0.98000 4.13%
-2.60% 0.95% 1.40% 1.4737 -3.83% 0.97333 3.83%
-2.55% 1.00% 1.40% 1.4000 -3.57% 0.96667 3.57%
-2.48% 1.05% 1.40% 1.3333 -3.31% 0.96000 3.31%
-2.57% 1.10% 1.40% 1.2727 -3.27% 0.95333 3.27%
-3.00% 1.30% 1.40% 1.0769 -3.23% 0.94667 3.23% 3.25%
-2.45% 1.25% 1.40% 1.1200 -2.74% 0.94000 2.74%
-2.65% 1.50% 1.40% 0.9333 -2.47% 0.93333 2.47%

Current volatility generally below contemporaneous historical volatility


Basic historical simulation

Ordered daily return   Cumulative weight   cl   VaR at chosen cl   95% VaR
-3.50% 0.00000 1.00000 3.50%
-3.00% 0.00667 0.99333 3.00%
-2.80% 0.01333 0.98667 2.80%
-2.70% 0.02000 0.98000 2.70%
-2.65% 0.02667 0.97333 2.65%
-2.60% 0.03333 0.96667 2.60%
-2.57% 0.04000 0.96000 2.57%
-2.55% 0.04667 0.95333 2.55%
-2.51% 0.05333 0.94667 2.51% 2.53%
-2.48% 0.06000 0.94000 2.48%
-2.45% 0.06667 0.93333 2.45%


Vol-weighted historical simulation

Ordered daily return   Contemp. volatility   Current vol   Vol weight   Vol-adjusted return   cl   VaR at chosen cl   95% VaR
-3.50% 0.82% 0.80% 0.97561 -3.41% 1.00000 3.41%
-2.70% 0.80% 0.80% 1.00000 -2.70% 0.99333 2.70%
-2.80% 0.90% 0.80% 0.88889 -2.49% 0.98667 2.49%
-2.51% 0.85% 0.80% 0.94118 -2.36% 0.98000 2.36%
-2.60% 0.95% 0.80% 0.84211 -2.19% 0.97333 2.19%
-2.55% 1.00% 0.80% 0.80000 -2.04% 0.96667 2.04%
-2.48% 1.05% 0.80% 0.76190 -1.89% 0.96000 1.89%
-2.57% 1.10% 0.80% 0.72727 -1.87% 0.95333 1.87%
-3.00% 1.30% 0.80% 0.61538 -1.85% 0.94667 1.85% 1.86%
-2.45% 1.25% 0.80% 0.64000 -1.57% 0.94000 1.57%
-2.65% 1.50% 0.80% 0.53333 -1.41% 0.93333 1.41%

III.A.2.6.3 Advantages and Disadvantages of Historical Approaches


Historical simulation methods have both advantages and disadvantages. The advantages are:
• They are intuitive and conceptually simple, providing results that are easy to communicate to senior managers and interested outsiders (e.g. bank supervisors or rating agencies).
• Dramatic historical events (sometimes irreverently referred to as ‘the market’s greatest hits’) can be simulated and the results presented individually even when they pre-date the current historical sample. Thus the hypothetical impact of extreme market moves that are strongly remembered by senior management can remain permanently in the information presented, although not directly included in the VaR number.
• Historical simulation approaches are, in varying degrees, fairly easy to implement on a spreadsheet and can accommodate any type of position, including derivatives positions.
• They use data that are (often) readily available, either from public sources (e.g. Bloomberg) or from in-house data sets (e.g. collected as a by-product of marking positions to market).
• Since they do not depend on parametric assumptions about the behaviour of market variables, they can accommodate heavy tails, skewness, and any other non-normal features that can cause problems for parametric approaches, including Monte Carlo simulation.
• Historical simulation approaches can be modified to allow the influence of observations to be weighted (e.g. by season, age, or volatility).
• There is a widespread perception among risk practitioners that historical simulation works quite well empirically, although formal evidence on this issue is inevitably mixed.
The weaknesses of historical simulation stem from the fact that results are completely dependent
on the data set. This can lead to a number of problems:


• If our data period was unusually quiet (or unusually volatile) and conditions have recently changed, historical simulation will tend to produce VaR estimates that are too low (high) for the risks we are actually facing.
• Historical simulation approaches have difficulty properly handling shifts that took place during our sample period. For example, if there is a permanent change in exchange-rate risk, it will take time for standard historical simulation VaR estimates to reflect the new conditions. Similarly, historical simulation approaches are sometimes slow to reflect major events, such as the increases in risk associated with sudden market turbulence.
• Most forms of historical simulation are subject to distortions from ghost effects stemming from updates of the historical sample.
• In general, historical simulation estimates of VaR make no allowance for plausible events that might occur but did not actually occur in our sample period.

There can also be problems associated with the length of our data period. We need a long data
period to have a sample size large enough to get risk estimates of acceptable precision. Without
this, VaR estimates will fluctuate over time so much that limit-setting and risk-budgeting
becomes very difficult. On the other hand, a very long data period can also create its own
problems:
• The longer the data set, the bigger the problem with aged data.
• The longer the sample period, the longer the period over which results will be distorted by past events that are unlikely to recur, and the longer we will have to wait for ghost effects to disappear.
• The larger the sample size, the more the news in current market observations is likely to be drowned out by older observations – and the less responsive will be our risk estimates to current market conditions.
• A long sample period can lead to data-collection problems. This is a particular concern with new or emerging market instruments, where long runs of historical data do not exist and are not necessarily easy to proxy.5

In practice, our main concern is usually to obtain a long enough run of historical data. As a
broad rule of thumb, many practitioners point to the Basel Committee’s recommendations for a
minimum number of observations, requiring at least a year’s worth of daily observations (i.e. 250

5 However, this problem is not unique to the historical simulation approach. Parametric approaches need a reasonable history if they are to use estimates (rather than just guesstimates or ‘expert judgements’) of the relevant parameters assumed to be driving the distributions. For historical simulation VaR it is sometimes possible to synthesize proxy data for markets prior to their existence based on their behaviour over a more recent sample period, but when doing so it is extremely important to avoid overestimating the accuracy of the risk estimates by treating pseudo-data as equivalent to real data.


observations, at 250 trading days to the year). However, such a sample is far too small to ensure that an historical simulation approach will give accurate and robust results. In addition,
as the confidence level rises, with a fixed length sample, the historical simulation VaR estimator is
effectively determined by fewer and fewer observations and therefore becomes increasingly
sensitive over time to small numbers of observations. For example, at the Basel mandated
confidence level of 99%, the historical simulation VaR estimator is determined by the most
extreme two or three observations in a one-year sample, and this is hardly sufficient to give us a
precise VaR estimate.

III.A.2.7 Mapping Positions to Risk Factors


Portfolio P/L is derived from the P/L of individual positions, and we have assumed up to now
that we are able to model the latter directly. However, it is not always possible or even desirable
to model each and every position in this manner. In practice, we project our positions onto a
relatively small set of risk factors. This process of describing positions in terms of these standard
risk factors is known as ‘risk factor mapping’. We engage in mapping for three reasons:
• We might not have enough historical data for some positions. For instance, we might have an emerging market security that has a very short track record, or we might have a new type of over-the-counter instrument that has no track record at all. In such circumstances it may be necessary to map our security to some index and the over-the-counter instrument to a comparable instrument for which we do have sufficient data.
• The dimensionality of our covariance matrix of risk factors may become unworkably large. If we have n different instruments in our portfolio, we would need data on n separate volatilities, one for each instrument, plus data on n(n – 1)/2 correlations6 – a total of n(n + 1)/2 pieces of information. As n increases, the number of parameters that need to be estimated grows quadratically, and it becomes increasingly difficult to collect and process the data involved. This problem becomes particularly acute if we treat every individual asset as a separate risk factor. Perhaps the best response to this problem is to map each asset against a market risk factor along capital asset pricing model (CAPM) lines. So, for example, instead of dealing with each of n stocks in a stock portfolio as separate risk factors, we deal with a single stock market factor as represented by a stock market index.
• A third reason for mapping is that it can greatly reduce the necessary computer time to perform risk simulations. In effect, reducing a highly complex portfolio to a consolidated set of risk-equivalent positions in basic risk factors simplifies the problem, allowing simulations to be done faster and with only minimal loss of precision.

Naturally, there is a huge variety of different financial instruments, but the task of mapping them
and estimating their VaRs can be simplified tremendously by recognising that most instruments
can be decomposed into a small number of more basic, primitive instruments. Instead of trying
to map and estimate VaR for each specific type of instrument, all we need to do is break down
each instrument into its constituent building blocks – a process known as reverse engineering –
to give us an equivalent portfolio of primitive instruments, which we can then map to a limited
number of risk factors. There are four main types of basic building blocks. These are:
• spot foreign exchange positions;
• equity positions;
• zero-coupon bonds;
• futures/forward positions.

In this section we will examine the mapping challenges presented by each of these in turn, and
compute the normal analytic VaR only, that is to say, we assume the risk factors to which
positions are mapped have returns that are normally distributed. The calculation of VaR under
more realistic assumptions for risk factor return distributions is discussed in the next chapter of
the Handbook.

III.A.2.7.1 Mapping Spot Positions


The easiest of the building blocks corresponds to basic spot positions (e.g. holdings of foreign
currency instruments whose value is fixed in terms of the foreign currency). These positions are
particularly simple to handle where the currencies involved (i.e. our own and the foreign
currency) are included as core currencies in our mapping system.7

We would then already have the exchange-rate volatilities and correlations that we require for the
covariance matrix. If the value of our position is A in foreign currency units and the exchange
rate (in units of domestic currency per unit of foreign currency) is X, the value of the position in

6 There are, of course, n(n + 1)/2 relevant values in an n × n symmetric correlation matrix, but we do not have to
estimate the values on the main diagonal which are all identically equal to 1.0.
7 Where currencies are not included as core currencies, we need to proxy them by equivalents in terms of core
currencies. Typically, non-core currencies would be either minor currencies or currencies that are closely tied to some
other major currency (e.g. as the Dutch guilder was tied very closely to the German mark before introduction of the
euro in both countries). Including closely related currencies as separate core instruments would lead to major collinearity
problems; the variance–covariance matrix could fail to be positive definite, etc.


domestic currency units – or the mapped position – is AX. If we assume A to be a credit-riskless instrument bearing an overnight foreign interest rate, its value in units of foreign currency is constant and the only risk to a holder with a different base currency arises from fluctuations in X.

In this situation we can calculate VaR analytically in the usual way. For example, if the exchange
rate is assumed to be normally distributed with zero mean and standard deviation ƳX over the
period concerned, then
VaR = –=ơƳX AX. (III.A.2.7)

The same approach also applies to other spot positions (e.g. in commodities), provided we have
an estimate or proxy for the spot volatilities involved.

Example III.A.2.2: Foreign exchange VaR


Suppose we have a portfolio worth $1 million, our ‘base’ currency is the pound, and £0.65 = $1.
This means that X = 0.65, A = 1 million (measured in dollars). We estimate the annual volatility
of the sterling–dollar exchange rate to be 15%. The daily standard deviation is therefore
0.15/√250 = 0.009487 and the pound values of the daily VaRs at the 95% and 99% confidence
levels are then given by

VaR1,0.05 = 1.64485 × 0.009487 × 0.65 million = £10,143,


VaR1,0.01 = 2.32635 × 0.009487 × 0.65 million = £14,346.
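The same calculation as a short sketch (the helper function and its signature are ours):

```python
from statistics import NormalDist

def fx_spot_var(amount_foreign, fx_rate, annual_vol, alpha, trading_days=250):
    """Spot FX VaR, equation (III.A.2.7): VaR = -Z_alpha * sigma_X * A * X."""
    daily_vol = annual_vol / trading_days ** 0.5
    z_alpha = NormalDist().inv_cdf(alpha)        # negative for small alpha
    return -z_alpha * daily_vol * amount_foreign * fx_rate

print(round(fx_spot_var(1_000_000, 0.65, 0.15, 0.05)))   # ~£10,143
print(round(fx_spot_var(1_000_000, 0.65, 0.15, 0.01)))   # ~£14,345 (close to the £14,346 above)
```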

III.A.2.7.2 Mapping Equity Positions


The second type of primitive position is equity, and handling equity positions is slightly more
involved. Imagine we hold an amount xk invested in the common equity shares of firm k. If we
treat every individual issue of common stock as a distinct risk factor, we can easily run into the
problem of estimating a correlation matrix whose dimensions number in the tens of thousands.
For example, if we wanted to evaluate the risk for an arbitrary equity portfolio drawn from a pool
of 10,000 companies, the number of independent elements in the correlation matrix to be
estimated would approach 50 million! It is no wonder that an alternative approach is desirable. In
fact, a workable solution to this dilemma is the central concept in the CAPM, which was covered
in detail in Chapter I.A.4.

The basic assumptions of the CAPM approach imply that the return to the equity of a specific
firm k, Rk, is related to the equity market return, Rm, by the following condition:
Rk = αk + βk Rm + εk


where αk is a firm-specific constant, βk is the market-specific component of firm k’s equity return and εk is a firm-specific random element assumed to be uncorrelated with the market. The variance of the firm’s return is then:
σk² = βk² σm² + σk,s²

where σk² is the total variance of Rk, σm² is the variance of the market return Rm and σk,s² is the variance of the firm-specific random element εk for company k. The variance of the firm’s return therefore consists of a market-based component βk² σm² and a firm-specific component σk,s².

Assuming the firm’s equity returns are normally distributed with zero mean, the VaR of an equity
position currently valued at xk in the shares of firm k is then:
VaR = –Zα σk xk.

It is important to recognise that when we aggregate risk across many holdings in a well-
diversified portfolio, the main contributor to the total will be the market-based component βk² σm².
Since the specific risk of each holding is assumed to be uncorrelated both to the market return
and to all other specific risk elements, the share of total risk contributed by the specific risk terms
falls continuously as the portfolio becomes more diversified and approaches zero when the
portfolio approximates the composition of the total market.

Look again at the data requirements needed to be ready to estimate VaR based on an arbitrary
portfolio drawn from a universe of 10,000 individual stocks. Instead of almost 50 million
pairwise correlations we only need the market return volatility, a market beta for each of the 10,000 stocks, and each stock’s specific risk volatility (or, more commonly, every stock’s total return volatility, from which, together with the market volatility and the individual stock betas, we can derive the
specific volatilities). This is 20,001 parameters in all. Compared to a full correlation approach,
the CAPM method requires a little more than 0.04% as many parameter values.

Estimating just the systematic risk of multi-asset equity portfolios using the CAPM approach
reduces to a simple mapping exercise. Assume we have N separate equities holdings with market
values of xk for k = 1, …, N. Assume the betas of these holdings are βk for k = 1, …, N and the market return volatility is σm. Then the aggregate systematic VaR of the portfolio is:
VaR = –Zα σm Σk=1,…,N βk xk.


Thus, the systematic VaR is the appropriate critical value times the market volatility times a
weighted sum of the market positions, where the weights are the respective betas for each
holding. If we define X as the total market value of the portfolio we can modify the above
equation slightly to read:

VaR = –Zα X σm [Σk=1,…,N (βk xk / X)]. (III.A.2.8)

In this form, the term in square brackets is the portfolio’s net beta, or ‘effective’ market beta.

Example III.A.2.3: VaR of equity portfolio


Suppose we have a stock market portfolio with five stocks, labelled A, B, C, D and E. The value
of the portfolio is $1 million (so X = $1 million) and we have equal investments in each stock (so
xA/X = xB/X = … = xE/X = 0.20). We are interested in a daily holding period, and the daily
stock market standard deviation (σm) is estimated to be 0.025 (hence the annual volatility of the market is approximately 39.5%). The stock market betas are βA = 0.9, βB = 0.7, βC = 0.5, βD = 0.3, and βE = 0.1. Substituting the relevant values into equation (III.A.2.8), the VaR at the 95%
confidence level is equal to
VaR = –Zα X σm × 0.2 × [Σk βk] = 1.64485 × $1,000,000 × 0.025 × 0.2 × [0.9 + 0.7 + 0.5 + 0.3 + 0.1] = $20,561.

There is no covariance matrix in the example above because all equities are mapped to the same
single risk factor, namely, the market index, and this is highly convenient when dealing with
equity portfolios. However, this single-factor mapping will underestimate VaR if we hold a
relatively undiversified portfolio because it ignores the firm-specific risk, but the underestimation
will often be fairly small unless the portfolio is very undiversified. We should also keep in mind
that because it assumes a single dominating risk factor it can be unreliable when dealing with
portfolios with multiple underlying risk factors (e.g. when there are significant industry
concentrations within an equity portfolio).
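A sketch reproducing Example III.A.2.3 (the helper function is ours and captures only the systematic component, as discussed above):

```python
from statistics import NormalDist

def systematic_equity_var(values, betas, market_vol, alpha=0.05):
    """Systematic equity VaR, equation (III.A.2.8):
    VaR = -Z_alpha * sigma_m * sum_k(beta_k * x_k)."""
    z_alpha = NormalDist().inv_cdf(alpha)
    return -z_alpha * market_vol * sum(b * x for b, x in zip(betas, values))

values = [200_000] * 5                  # equal holdings of $200,000 in five stocks
betas = [0.9, 0.7, 0.5, 0.3, 0.1]
print(round(systematic_equity_var(values, betas, market_vol=0.025)))   # ~$20,561
```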

III.A.2.7.3 Mapping Zero-Coupon Bonds


The third type of primitive instrument is a zero-coupon bond (often referred to simply as a ‘zero’
for short). We will assume for convenience that we are dealing with instruments that have no
default risk. In this context it is standard practice to represent prevailing market conditions in
terms of a continuously compounded zero-coupon interest-rate curve (sometimes also known as


a spot rate curve) across a selected set of future maturity dates. A simplified example of such a
curve appears in Table III.A.2.3.8

Table III.A.2.3: Continuously compounded zero-coupon discount rates

Tenor      Rate (today)
3-Mos      4.5000%
1 Year     5.0000%
2 Year     6.0000%
3 Year     6.5000%
5 Year     7.5000%
7 Year     8.0000%

In the context of Table III.A.2.3, we can have fixed cash flows maturing on any day out to seven
years. Now consider the implications of this for mapping zero-coupon cash flows on arbitrary
future dates to a set of fixed grid dates. Given the number of days to the defined grid dates at 3
months, 1, 2, 3, 5, and 7 years, we will interpolate linearly to obtain the effective zero rate on any
arbitrary date within this grid.9 We now want to allocate (or map) a cash flow maturing on an
intermediate date to the fixed grid dates in such a way that, to the maximum extent possible, the
latter have the same risk characteristics as the original cash flow. What criterion will impose this
effective risk equivalence?

A common approach is to require the present value impact of a one basis point (0.01%) change
in the zero rate at the two surrounding grid points to be the same for the allocated cash flows as
it was for the original cash flow. An example will help to illustrate how we can achieve this.

Example III.A.2.4: Mapping cash flows for a zero-coupon bond


Again using the simplified zero curve shown above, assume we have a risk-free cash flow of
1,000,000 maturing at 2.75 years. We want to create two cash flows, one at two years and one at
three years, that have risk-equivalent characteristics to the one cash flow maturing at 2.75 years.

8 In practice, of course, we would have many more grid points at more frequent intervals than is shown in this
simplified illustration.
9 In practice, in developed industrial countries, interest-rate futures are used to calibrate this curve to the market. We

would use an overnight one-day rate as the basis to interpolate out to the next future roll date, which would be no
more than three months in the future. It also should be noted that the practice of linear interpolation is not universally
preferred because it implies there will be discontinuous changes in the slope of the zero curve at some grid points if the
term structure is not uniformly linear over all horizons. To avoid this, some more complex form of interpolation such
as cubic splines may be substituted. However, we will assume linear interpolation here to keep the example fairly
simple.


We begin by recognising that the current interpolated zero rate for 2.75 years is:
r2.75 = r2 × (3 – 2.75) + r3 × (2.75 – 2) = 0.0600 × 0.25 + 0.0650 × 0.75 = 0.06375.

An increase of one basis point in the two-year rate would result in the 2.75-year rate rising to
0.0601 × 0.25 + 0.0650 × 0.75 = 0.063775.

Similarly, an increase of one basis point in the three-year rate would result in the 2.75-year rate rising to
0.0600 × 0.25 + 0.0651 × 0.75 = 0.063825.

The ‘PV01’ (also called the present value of a basis point, or PVBP) with respect to the two-year and three-year rates respectively is the difference in the present value of the 1,000,000 cash flow after versus before these one basis point changes. Thus:
PV012 = 1,000,000(e^(–2.75 × 0.063775) – e^(–2.75 × 0.063750)) = 839,137.04 – 839,194.73 = –57.69,
PV013 = 1,000,000(e^(–2.75 × 0.063825) – e^(–2.75 × 0.063750)) = 839,021.67 – 839,194.73 = –173.06.

The next step is to find cash flows at two years and three years that have equal PV01 values. To
do so, we want to solve the following two equations:
C2 × e^(–2 × 0.0601) – C2 × e^(–2 × 0.0600) = –57.69
and
C3 × e^(–3 × 0.0651) – C3 × e^(–3 × 0.0650) = –173.06.

We then rearrange slightly to get:


C2 = –57.69/(e^(–2 × 0.0601) – e^(–2 × 0.0600)) = –57.69/–0.000177366 = 325,258.99
and
C3 = –173.06/(e^(–3 × 0.0651) – e^(–3 × 0.0650)) = –173.06/–0.000246813 = 701,177.56.

Thus, the sensitivity of 1,000,000 at 2.75 years to changes in the two-year and three-year zero rates is the same as that of two cash flows of 325,258.99 at 2 years and 701,177.56 at 3 years. Put
differently, the ‘original’ cash flow of 1,000,000 at 2.75 years ‘maps’ to these latter two cash flows,
which can therefore be considered its mapped equivalent.10

10 Obviously this approach to mapping cash flows can be applied in the context of more complex methods for
interpolation of the zero curve. In such cases the mechanics become more complicated, but the core principle of
equivalent value sensitivity to small movements in the points that define the zero curve remains the same.
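A sketch of the PV01 mapping in Example III.A.2.4, assuming linear interpolation of the zero curve and continuous compounding; small differences from the figures above reflect the intermediate rounding used there:

```python
import math

def map_cash_flow(cash_flow, t, t_lo, t_hi, r_lo, r_hi, bp=0.0001):
    """Allocate a cash flow at maturity t to grid points t_lo and t_hi so that
    the PV01s with respect to the two grid zero rates are preserved."""
    w_lo = (t_hi - t) / (t_hi - t_lo)       # interpolation weight on the lower grid rate
    w_hi = (t - t_lo) / (t_hi - t_lo)
    r = w_lo * r_lo + w_hi * r_hi           # interpolated zero rate at t

    pv = cash_flow * math.exp(-r * t)
    pv01_lo = cash_flow * math.exp(-(r + w_lo * bp) * t) - pv   # PV impact of +1bp in r_lo
    pv01_hi = cash_flow * math.exp(-(r + w_hi * bp) * t) - pv   # PV impact of +1bp in r_hi

    # Grid cash flows with the same PV01s
    c_lo = pv01_lo / (math.exp(-(r_lo + bp) * t_lo) - math.exp(-r_lo * t_lo))
    c_hi = pv01_hi / (math.exp(-(r_hi + bp) * t_hi) - math.exp(-r_hi * t_hi))
    return c_lo, c_hi

c2, c3 = map_cash_flow(1_000_000, 2.75, 2, 3, 0.060, 0.065)
print(round(c2, 2), round(c3, 2))   # roughly 325,000 and 701,000, as in the example
```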


Example III.A.2.5: VaR of zero-coupon bond


Now suppose we wish to estimate the VaR of a mapped zero-coupon bond. To be more specific,
we have a zero-coupon bond with 10 months to maturity, and the nearest reference horizons in
our mapping system are three months and 1 year. This means that we need to map our single 10-month cash flow into (nearly) ‘equivalent’ cash flows at horizons of 3 and 12 months.

Now refer to the workbook, VaR of a Mapped Zero Coupon Bond. Given 3-month and 12-
month zero rates of 4.5% and 5%, we see that the interpolated 10-month zero rate is just under
4.9%. We then carry out the ‘PV01’ cash flow mapping and find that a 10-month zero with face
value $1m maps to (nearly) ‘equivalent’ cash flows of $719,217 at 3 months and $654,189 at 12
months.

The second sheet of the workbook then performs the VaR calculation. Assume analysis of the
historical data indicates estimated daily volatilities of 1.25% for the three-month rate and 1.00%
for the 12-month rate. For the three-month rate this translates into an absolute volatility of
0.0125 × 4.5% = 0.0563% or 5.63 basis points as a one standard deviation daily change. For the
12-month rate it translates into an absolute volatility of 0.01 × 5.0% = 0.05% or 5.0 basis points
as a one standard deviation daily change. Further assume an estimated correlation between the
three-month and 12-month returns of 0.85. The previous worksheet showed that the cash flow
mapped to the three-month maturity has a value sensitivity of $17.78 to a one basis point change
in the three-month rate, and that the cash flow mapped to the 12-month maturity has a value
sensitivity of $62.23 to a one basis point change in the 12-month rate. Based on this information
about the value sensitivity to rate changes, the volatility of the rates and their correlation, the
portfolio of mapped cash flows has a standard deviation of $399.62.11 Hence the VaR at the 95%
confidence level (taking Z0.05 = 1.645) is 1.645 × 399.62 = $657.31.

We can confirm this result by deriving the VaR directly from the standard deviation of the 10-
month zero rate based on the standard deviations of the reference rates, their correlation and
their relationship to the 10-month rate. The derived one-day standard deviation of the 10-
month zero rate equals 4.995 basis points. Since the impact of a one basis point change on the
value of the bond is $80.003, the resulting VaR is 1.645 × 4.995 × 80.003 = 657.31.
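The aggregation step can be reproduced directly from the PV01s, the basis-point volatilities and the correlation. The sketch below uses the rounded figures quoted in the text (so the result matches only to within rounding); variable names are my own.

```python
import math

# Rounded inputs quoted in the text
pv01 = [17.779, 62.2253]   # value change (in $) for a 1bp rise in the 3m and 12m rates
vol_bp = [5.625, 5.000]    # one-day standard deviations of the two rates, in basis points
rho = 0.85                 # correlation of daily changes in the two rates

# Variance of the daily P&L of the two mapped cash flows (standard two-asset formula)
variance = (pv01[0] * vol_bp[0])**2 + (pv01[1] * vol_bp[1])**2 \
           + 2 * rho * pv01[0] * vol_bp[0] * pv01[1] * vol_bp[1]
sigma_pl = math.sqrt(variance)
VaR_95 = 1.645 * sigma_pl
print(round(sigma_pl, 2), round(VaR_95, 2))   # ≈ 399.6 and ≈ 657.4
```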

III.A.2.7.4 Mapping Forward/Futures Positions


The fourth building block is a forward/futures position. As explained in Chapter I.B.3, a forward
contract is an agreement to buy a particular commodity or asset at a specified future date at a

11 Using the standard formula for the variance of a portfolio (see Section I.A.2.3.2 and/or Section II.D.2.1), the
variance is 399.62² = 159,696 = (17.779 × 5.625)² + (62.2253 × 5.000)² + 2 × 0.85 × 17.779 × 5.625 × 62.2253 × 5.000.


price agreed now, with the price being paid when the commodity/asset is delivered, and a futures
contract is a standardised forward contract traded on an organised exchange. There are a number
of differences between futures and forward contracts, but for our purposes here these differences
are seldom important and we can treat the two contracts together and speak of a generic
forward/futures contract.

To illustrate what is involved for VaR computation, suppose we have a forward/futures position
that gives us a daily return that is dependent on the movement of the end-of-day forward/futures
price. If we have x contracts each worth F, the value of our position is xF. If the return on each

futures contract is normally distributed with standard deviation σF and zero mean return, the VaR of our position is

VaR = Zα σF xF.     (III.A.2.9)

The main problem in practice is to obtain an estimate of the standard deviation σF for the
horizon involved. Typically, we would have estimates for various horizons, and we would use
some interpolation method to obtain estimates of interim standard deviations. Section I.C.7.5
presents some examples of VaR calculations for portfolios of commodity futures.
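As a purely numerical illustration of equation (III.A.2.9) – the number of contracts, contract value and daily volatility below are assumptions, not figures from the text:

```python
# Hypothetical inputs: 100 contracts, each worth $50,000, 1.2% daily return volatility
x, F, sigma_F, z_alpha = 100, 50_000, 0.012, 1.645
VaR = z_alpha * sigma_F * x * F
print(f"{VaR:,.0f}")   # 98,700
```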

III.A.2.7.5 Mapping Complex Positions


Having set out our building blocks, we can now map more complex positions by producing
‘synthetic equivalents’ for them in terms of positions in our primitive building blocks.

(i) Coupon-paying bonds. We can map coupon bonds by regarding them as portfolios of zero-
coupon cash flows and mapping each individual cash flow separately to its surrounding grid
points. Of course, any one grid point can receive mapped cash flows from one or more actual
cash-flow dates on either side of it. These multiple mapped cash flows, and more importantly
their associated sensitivities, are aggregated into a single mapped cash flow and associated
sensitivity for each grid point. The VaR of our coupon bond is then based on the VaR of its
mapped equivalent in zero-coupon cash flows at the fixed grid points.

Example III.A.2.6: VaR of Coupon Bond


Suppose we have a coupon bond with an original maturity of 4 years and a remaining maturity of
3 years and 10 months. The workbook entitled VaR of a Mapped Coupon Bond illustrates the
mapping process and associated VaR calculation. In this example we assume reference points for
defining the yield curve at 3 months, 1, 2, 3, and 4 years. As with the zero-coupon bond, the
observed daily volatilities for each rate are scaled by the rate levels to derive an absolute daily


standard deviation of the rate in basis points. We also assume a set of pairwise correlations of
the daily changes in the rates. The correlations used in the example are arbitrary, but do reflect
the tendency for changes in rates of similar maturities to be more highly correlated than those
where maturities differ by a greater amount. In addition, pairs of rates with similar differences in
maturities tend to be more highly correlated for rates at longer than at shorter maturities.

The worksheet then applies the standard matrix formula (II.D.4) for deriving aggregate volatility
to estimate the VaR. Alternately, the mapped cash flows could be treated as the effective
positions and analysed by either the Monte Carlo or historical simulation approach.
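The map-and-aggregate loop for a coupon bond can be sketched as below. This is an illustration only, not the workbook's implementation: the grid zero rates are assumed values, the cash-flow schedule is the semi-annual $30K coupon stream with $1,030K at 46 months that reappears later as the fixed leg in Table III.A.2.4, and pv01_map() is the hypothetical helper defined in the earlier sketch.

```python
from collections import defaultdict
# Assumes pv01_map() (and the math import) from the earlier sketch are in scope.

grid_t = [0.25, 1.0, 2.0, 3.0, 4.0]                  # grid maturities (years)
grid_r = [0.045, 0.050, 0.054, 0.057, 0.059]         # assumed zero rates at the grid points
# Cash flows at 4, 10, ..., 46 months: $30K semi-annual coupons plus $1,030K at maturity
cash_flows = [(m / 12, 30_000) for m in range(4, 46, 6)] + [(46 / 12, 1_030_000)]

mapped = defaultdict(float)
for t, cf in cash_flows:
    hi = next(i for i, g in enumerate(grid_t) if g >= t)   # first grid point at or beyond t
    lo = hi - 1
    c_lo, c_hi = pv01_map(cf, t, grid_t[lo], grid_t[hi], grid_r[lo], grid_r[hi])
    mapped[grid_t[lo]] += c_lo      # aggregate mapped cash flows vertex by vertex
    mapped[grid_t[hi]] += c_hi

print({t: round(v) for t, v in sorted(mapped.items())})
```

The VaR of the bond is then computed from these aggregated vertex cash flows exactly as for the mapped zero-coupon bond above.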

(ii) Forward-rate agreements. A forward-rate agreement (FRA) is an agreement to pay an agreed fixed
rate of interest for a specific period starting at a known future date. It is equivalent to a portfolio
that is long in a zero-coupon bond maturing at the end of the forward period and short in a zero-
coupon bond maturing at the beginning of the forward period. We can therefore map an FRA
and estimate its VaR by treating it as a long position in one zero-coupon bond combined with a
short position in another zero-coupon bond with a shorter maturity.

Here there is one important distinction between FRAs and interest-rate futures. This is that
interest-rate futures are settled at the beginning of the forward period, based on the discounted
present value of the floating payment determined at that point. FRAs are settled at the end of
the forward period at the undiscounted amount of the floating payment that was fixed at the
beginning of the period. This means that there is some continuing market risk for an FRA up to
the end of its forward period, whereas an interest-rate future is settled at the beginning of the
forward period (adjusted for a short settlement delay) and thereafter has no impact on market
risk.

(iii) Floating-rate instruments. Since a floating-rate note (FRN) reprices to par with every coupon
payment, we can think of it as equivalent to a zero-coupon bond whose maturity is equal to the
period until the next coupon payment. We can therefore map a floating-rate instrument by
treating it as equivalent to a zero-coupon bond that pays its principal plus the current period
interest amount on the next coupon date.

Example III.A.2.7: VaR of floating-rate note


The analysis of an FRN is very similar to that of a zero-coupon bond. The key point here is to
appreciate that fixed-income theory tells us that we can price an FRN by treating it as a zero that
pays the FRN’s current coupon and principal at the FRN’s next coupon date. Referring to the
workbook VaR of Mapped Floating Rate Note, we see that our FRN has a current coupon rate


of 5.5%. Given a face value of $1,000,000, this means that we can treat it as a zero that pays
$1,027,500 at its FRN coupon date. Given that date is four months hence, this implies that our
FRN maps to cash flows of $1,212,992 in three months and $39,406 in 12 months. Given earlier
assumptions about rate volatilities and correlations, the portfolio of mapped cash flows has a
standard deviation of $185 and therefore a VaR of 1.645 × $185 = $304.

(iv) Vanilla interest-rate swaps. A vanilla interest-rate swap (IRS) is equivalent to a portfolio that is
long a fixed-coupon bond and short a floating-rate bond, or vice versa, and we already know how
to map these instruments.

Example III.A.2.8: VaR of interest-rate swap


The workbook entitled VaR of Mapped Vanilla Interest Rate Swap illustrates how we deal with
this type of instrument. To map an IRS, we first note that for our purposes the swap can be
regarded as equivalent to the exchange of a coupon bond for an FRN: one leg of the swap has
the cash flows of a coupon bond, and the other the cash flows of an FRN. However, for
mapping purposes the FRN is equivalent to a zero, as we have just seen. Suppose, therefore, that
we are the fixed-rate receiver on a vanilla IRS, and the fixed-rate leg of the swap has the same
features as the coupon bond we have just considered (same notional principal, same term to
maturity, etc.). Similarly, the floating-rate leg has the same features as the FRN we have just
considered. Then it is ‘as if’ the cash flows from our swap are as given in Table III.A.2.4.

Table III.A.2.4: ‘As if’ cash flows from vanilla interest-rate swap

t (months)          4           10     16     22     28     34     40     46
Fixed-rate leg      $30K        $30K   $30K   $30K   $30K   $30K   $30K   $1030K
Floating-rate leg   –$1027.5K
Net                 –$997.5K    $30K   $30K   $30K   $30K   $30K   $30K   $1030K

These are not the ‘actual’ cash flows from the swap, but for mapping purposes we can consider
them as if they were. In Table III.A.2.5 we map these ‘as if’ cash flows to obtain their (near)
equivalents at the same reference points we used for the coupon bond, namely, 3 months, 1, 2, 3,
and 4 years. (See the above referenced spreadsheet for the details of the mapping.)

Table III.A.2.5: Mapped cash flows from vanilla interest-rate swap


t (years) 0.25 1 2 3 4
Mapped Cash Flows –$1,156,000 $16,140 $59,662 $258,256 $843,749

We then derive an analytic estimate of the standard deviation of the portfolio’s daily change in
value. We do this by scaling the value of a one basis point change in each reference rate times


that rate’s one standard deviation change and applying the standard correlated aggregation
procedure using the assumed correlations for the daily rate changes. The VaR is then the
appropriate multiple of this standard deviation estimate. For the volatility and correlation
assumptions made, the portfolio of mapped cash flows has a standard deviation of $1719, and
therefore a VaR (at the 95% confidence level) of 1.645 × $1719 = $2827.

(v) Structured notes. These can be regarded as a combination of interest-rate swaps and
conventional floating rate notes, which we can already map.

(vi) Foreign exchange forwards. A foreign exchange forward is the equivalent of a long position in a
foreign currency zero-coupon bond and a short position in a domestic currency zero-coupon
bond, or vice versa. Thus it will be sensitive to three market variables, the domestic interest rate,
the foreign interest rate and the spot foreign exchange rate.

(vii) Commodity, equity and foreign exchange swaps. These can be broken down into some form of
forward/futures contract on the one hand, and some other forward/futures contract or bond
contract on the other.

III.A.2.7.6 Mapping Options: Delta and Delta-Gamma Approaches


The instruments just covered all have in common that their returns or P/L are linear or nearly
linear functions of the underlying risk factors. Mapping is then fairly straightforward. However,
once we have significant optionality in our positions, their values become nonlinear functions
(often highly so) of the underlying risk factors. This nonlinearity can seriously undermine the
accuracy of any standard (i.e. linear) mapping procedure. We should be wary of using linear-based
mapping systems in the presence of significant optionality.

So how do we map option positions? The usual answer is to use a first- or second-order Taylor
series approximation. We replace an option position with a surrogate position in an option’s
underlying variables and then use a first- or second-order approximation – often known as a delta
or delta-gamma approximation – to estimate the VaR of the surrogate position. Such methods
can be used to estimate the risks of any positions where the value is reasonably approximated by
a quadratic function of its underlying risk factor(s). Thus, over a moderate range of variation,
they can be applied to option positions that are nonlinear functions of an underlying cash price
and to fixed-income instruments that are nonlinear functions of a bond yield. See Sections
II.C.3.3, II.C.4 and II.D.2.3.


Suppose we have a simple European equity call option of value c. The value of this option
depends on a variety of factors (the price of the underlying stock, the exercise price of the option,
the volatility of the underlying stock price, etc.) but suppose we ignore all factors other than the
underlying stock price, and use only the first-order Taylor series approximation of the change in the option value: Δc ≈ δΔS, where Δc and ΔS are the changes in the option price and the stock price respectively, and δ is the option's delta.

If we are dealing with a very short holding period (so we can take δ as if it were approximately constant over that period), the option VaR is approximated by multiplying the underlying VaR by δ. Hence, if we further assume that S is normally distributed,

VaR ≈ δ Zα σ S     (III.A.2.10)

where S is the current price and σ is the standard deviation of returns over the relevant holding period. The new parameter introduced into the calculation, the option delta δ, is readily available for any traded option, and equation (III.A.2.10) can easily be extended to portfolios of options using the portfolio delta (see Section II.D.2.3.1). Hence the delta approach requires minimal additional data.

However, first-order approaches are only reliable when our portfolio is close to linear in the first
place, and such methods can be very unreliable when positions have considerable optionality or
other nonlinear features. If a first-order approximation is insufficiently accurate, we can try to
accommodate nonlinearity by taking a second-order Taylor series (or delta-gamma)
approximation:
Δc ≈ δΔS + (γ/2)(ΔS)².

Readers should refer to Section II.C.4.3 for the mathematical details of delta-gamma
approximation and Section II.D.2.3 for further examples of its application.

For a long call position, both delta and gamma are positive. In this case, the gamma term contributes an increase in the value of the call option both when the stock price rises and when it falls. If the stock price rises (ΔS > 0) the gamma term implies that the call option value rises by more than the delta-equivalent amount. If the stock price falls (ΔS < 0), the call option value falls by less than the delta-equivalent amount. However, in the case of a short position in the call option, both delta and gamma are negative. Now if ΔS > 0 the option value falls by more than the delta-equivalent amount, and if ΔS < 0 the option value rises (becomes less negative) by a smaller amount than is implied by the delta term.


The bottom line is that positive gamma contributes a favourable impact to the value of a position,
whether the price of the underlying rises or falls. Conversely, negative gamma contributes an adverse
impact to the value of a position, whether the price of the underlying rises or falls. This
observation motivates the following as one possible modification to the VaR for an option
position to account for the second-order impact of the gamma term:
VaR ≈ δ Zα σ S – (γ/2)(Zα σ S)².     (III.A.2.11)

Note that VaR is reduced if gamma is positive and increased if gamma is negative.

Example III.A.2.9: Option VaR (delta-gamma approximation)


Suppose we wish to estimate the VaR of a European call option. The parameters of the option
are: S (underlying price) = X (strike price) = 1, r (risk-free rate) = 5%, option maturity = half a
year, and Ƴ (annual volatility of underlying) = 25%. We wish to estimate the VaR using a
confidence level of 95% and a holding period of 10 days.

One approach is to use the delta approximation (III.A.2.10):

VaR ≈ δ Zα σ √(10/250) S,

where the sigma term is multiplied by √(10/250) = 1/5 to convert the holding period from one year (250 business days) to two weeks (10 business days). To make use of this approximation, we calculate the option delta using the standard formula for the Black–Scholes delta (see Table I.A.8.2). This gives us δ = 0.591. We then input this value and the values of the other parameters into equation (III.A.2.10). This gives the delta approximation for the VaR of a call option as

0.591 × 1.645 × 0.25/5 = 0.0486.

If we wish to use a delta-gamma approximation instead, we obtain the option gamma using the
standard formula for the Black–Scholes gamma (see Table I.A.8.2). This turns out to be 2.198.
We then input the relevant parameter values into (III.A.2.11) to get
0.591 × 1.645 × 0.25/5 – (2.198/2) × [1.645 × (0.25/5)]² = 0.0412.

Taking account of the positive gamma term therefore reduces our VaR, as we would expect.

Now suppose we wish to do the same exercises for a long position in a European put. In this
case, the option delta is –0.409, and the gamma remains the same at 2.198. Applying equation
(III.A.2.10) then gives us the delta approximation for the VaR as:


–(–0.409) × 1.645 × 0.25/5 = 0.0336.

The corresponding delta-gamma approximation is then found by applying equation (III.A.2.11):


–(–0.409) × 1.645 × 0.25/5 – (2.198/2) × [1.645 × (0.25/5)]² = 0.0262.

Again, taking account of the gamma term serves to reduce our VaR.
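The numbers in this example can be reproduced with the following sketch, which assumes the standard Black–Scholes delta and gamma formulas and reuses the parameter values stated above (the helper function norm_cdf is my own):

```python
from math import log, sqrt, exp, erf, pi

def norm_cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))   # standard normal CDF

S = K = 1.0; r = 0.05; T = 0.5; sigma = 0.25
z = 1.645
sigma_10d = sigma * sqrt(10 / 250)        # 10-day volatility of the underlying (= 0.05)

d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
delta_call = norm_cdf(d1)                                            # ≈ 0.591
gamma = exp(-0.5 * d1**2) / (sqrt(2 * pi) * S * sigma * sqrt(T))     # ≈ 2.198
delta_put = delta_call - 1                                           # ≈ -0.409

var_delta_call = delta_call * z * sigma_10d * S                      # ≈ 0.0486
var_dg_call = var_delta_call - 0.5 * gamma * (z * sigma_10d * S)**2  # ≈ 0.0412
var_delta_put = -delta_put * z * sigma_10d * S                       # ≈ 0.0336
var_dg_put = var_delta_put - 0.5 * gamma * (z * sigma_10d * S)**2    # ≈ 0.0262
print(round(var_delta_call, 4), round(var_dg_call, 4),
      round(var_delta_put, 4), round(var_dg_put, 4))
```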

Much the same approach can be used to give us a second-order approximation to the VaR of a
bond portfolio, with the first-order term reflecting the bond’s duration and the second-order
term reflecting its convexity.

III.A.2.8 Backtesting VaR Models


Backtesting involves after-the-fact analysis of the performance of risk estimation models and
procedures. In the case of market risk there are two distinct types of backtesting that serve
different purposes. These involve comparing ex-ante VaR estimates with ex-post values of (a)
actual P/L in the applicable periods and (b) hypothetical P/L assuming positions remained static
for the applicable period.

The first approach is the one required of banks under the market risk amendment to the Basel
Capital Accord and can be thought of as an all-in test. Control of the actual recorded P/L is what
risk systems are ultimately designed to achieve. Hence comparing VaR estimates to actual P/L
must be a part of any backtesting process. However, one problem with such a test is that there is
more than one reason why actual gains and losses may exceed the risk estimates unexpectedly
often. This may occur because of weaknesses in the VaR estimation system, including
(depending on the approach used):
• inaccurate historical market data and/or parameter estimation;
• incomplete consolidation of trading positions;
• inaccurate mapping of trades to risk-equivalent positions in a limited set of primitive securities;
• incorrect estimation of the portfolio standard deviation, or excessive nonlinearities and/or non-normal return distributions resulting in inaccurate VaR estimates using the analytical approach;
• the generation of Monte Carlo scenarios that fail to match the target characteristics implicit in the estimated parameters;
• the use of inaccurate or insufficient historical time series to produce historical simulation VaR estimates.


It also may be caused by gains or losses from intra-day trading that are, by original design, not
reflected in any of the three main approaches to VaR estimation. When backtesting reveals
weaknesses in the VaR estimation system, it is important to be able to isolate the source of the
problem. This is where the second form of backtesting is useful, although not always practiced.

The second approach involves comparison of VaR estimates with the hypothetical P/L that
would have resulted in the applicable periods if all the trades at the beginning of each period were
simply revalued based on end-of-period market prices. This eliminates the impact of day trading
that affects the actual P/L, but one must be careful in constructing the hypothetical ex-post P/L
results. Assume, for example, that these are based on the same valuation methods used in the
VaR estimation process and that these methods are flawed because of one or more of the first
three causes noted above. Then the comparison of VaR to these ex-post hypothetical P/L
estimates may look acceptable when, in fact, they are both subject to the same flawed calculation
methods. This can result in attributing inaccurate VaR estimates to the impact of day trading
when the actual problem still lies in the estimation process itself.

When conducting the first and more traditional approach to backtesting, it is important to review
the official P/L series to establish their relevance to the exercise. Accounting systems are not
designed to maintain consistent time series except over fixed reporting periods such as a fiscal
quarter. For shorter sub-periods such as a day, which are often the periods of interest for market
risk estimation and backtesting, accounting systems attempt to maintain accurate period-to-date
information only. Thus, if there is a mistaken entry that results in a large but erroneous daily gain
or loss, this will be adjusted by a correcting entry the following day (or later) to bring the fiscal-
period-to-date figures into line. Obviously this will result in misleading daily P/L figures for both
the day the mistake was made and the day the correcting entry was booked. It is therefore
important to compile the actual P/L to be used for backtesting on a current basis. This allows
investigation and documentation of accounting errors and their appropriate treatment in the risk
system while the details are still fresh in people’s minds. Failing this, the actual P/L series may be
sufficiently flawed to make meaningful comparison impossible.

Obviously the key comparison in backtesting is whether the actual or hypothetical P/L series
exceeds the corresponding ex-ante VaR estimate (or, more generally, VaR estimates predicated
on different confidence levels) with the predicted frequency (or frequencies). In most cases,
actual P/L (adjusted for accounting errors and their subsequent correction) tends to produce fewer observations outside the VaR estimate than is consistent with the probability used in those estimates. This appears to be because day trading allows positions to


be closed quickly when markets become volatile, thereby reducing actual losses compared to
holding a static portfolio for a full 24 hours.
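As a rough illustration of the exception comparison described above, the sketch below counts days on which the loss exceeded the ex-ante VaR and reports a simple binomial z-score. The function name and the sign conventions (losses negative, VaR reported as a positive number) are my own assumptions, not the Handbook's.

```python
import math

def backtest_exceptions(pnl, var_estimates, p=0.01):
    """Count the days on which the loss exceeded the ex-ante VaR and compare the
    observed count with the count implied by the VaR probability p.
    pnl[t]: P/L realised over day t; var_estimates[t]: VaR (a positive number)
    estimated at the start of day t."""
    n = len(pnl)
    exceptions = sum(1 for pl, v in zip(pnl, var_estimates) if pl < -v)
    expected = p * n
    z_score = (exceptions - expected) / math.sqrt(n * p * (1 - p))  # simple binomial z-score
    return exceptions, expected, z_score
```

For a 250-day sample of 1% VaR estimates, for instance, about 2.5 exceptions would be expected; a strongly positive z-score flags unexpectedly frequent losses beyond VaR, while a strongly negative one is consistent with the day-trading effect noted above.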

III.A.2.9 Why Financial Markets Are Not ‘Normal’


The central limit theorem is a fundamental statistical insight with far-reaching consequences (it should not be confused with the law of large numbers, which concerns the convergence of sample averages to the true mean). It states that the distributions of sums and averages of independent random variables tend towards the traditional bell curve, or normal distribution, even when the individual variables are not normal. While exceptions exist, this holds true for a very wide range of random quantities found in nature. This gives the normal distribution certain plausibility, and normal
distributions are in fact commonly observed in the natural sciences. Thus, it is not surprising that
in the early days of modern finance there was some serious debate over whether the distribution
of changes in market data departed from a normal distribution in a systematic way. Today the
presence of high kurtosis or ‘heavy tails’ in such distributions is a well-accepted fact. In trying to
incorporate such behaviour into market risk analysis, it is important to consider why such
persistent departures from the pervasive normal distribution should occur.

A key assumption behind the central limit theorem is that the individual observations of random
variables going into an average or sum are statistically independent. More often than not this is a
reasonably good description of the thousands, or even millions, of individual buy and sell
decisions that drive changes in demand and supply on any given day. Since the market clearing
price reflects the net balance of these largely independent decisions, it is not surprising that
changes in such prices often exhibit a roughly normal distribution. This is, however, not always
the case.

Consider an example totally unrelated to finance. Suppose you equip the passengers of a single-
deck cruise ship with a device that allows you to locate them exactly at any given moment. Then
proceed to calculate once every minute the centre of gravity of all these locations with reference
to the two-dimensional framework of the ship and plot the resulting distribution. At most times
passengers will be in a variety of locations based on their personal preferences, their energy
levels, their mood of the moment and the available alternatives. The resulting distribution of
their centre of gravity over time will be a cloud of points bunched around the centre of the
available passenger areas. We would expect it to exhibit something very close to a bivariate
normal distribution.

Now assume there is an announcement over the ship’s loudspeaker that a pod of whales is
breaching off the port bow. The consequences are fairly obvious. We would see a sudden
outlier in the distribution as passengers rush to find a good viewing spot among the limited


spaces available. In the immediate aftermath of the announcement, all passengers know several
things:
• There is an opportunity to see something quite unique.
• The time to see it is limited.
• There is an ideal location for viewing the phenomenon.
• Everyone else knows what they know.

It is this final point, this ‘mutual self-awareness’, that makes for the sudden mad rush to the port
bow. Each passenger reacts to the knowledge that speed is of the essence if a good viewing place
is to be secured. If the ship was nearly empty, or if only a few people were aware of the
opportunity or were likely to take advantage of it (if, say, most passengers were confined to their
cabins with sea sickness) the sense of urgency would be greatly reduced.

There is a relevant scene in the movie Rogue Trader about Nick Leeson and the Barings debacle.
He is awakened by a call at home in the early hours of the morning from another member of the
firm. The voice at the other end of the phone says urgently, ‘Turn on CNN!!’ The TV in the
bedroom flickers to life showing scenes of the Kobe earthquake. The voice at the other end of
the phone says, ‘This is just going to kill the market!!!’

In effect this is much like the announcement on the ship but on a global basis. Observers around
the world are suddenly focused on a common crystallising event with obviously directional
implications for the market. In addition, everyone knows that everyone else knows. Suddenly the
millions of decisions that drive the market are no longer randomly independent. Rather they are
subject to a common shared perception. The core structural assumptions that underpin a normal
distribution have temporarily broken down and we see a sudden extreme observation.

Various statistical methods are used to try to build such behaviour into distributions of risk factor
returns. But what these approaches cannot do is predict in advance when such events will occur.
Thoughtful consideration of such potential scenarios, especially those that present special threats
given existing open positions in the book, is therefore an essential component of effective market
risk management. Such analysis remains in the realm of experience and seasoned judgement that
no amount of advanced analytical technique can replace. This topic will be discussed in detail in
Chapter III.A.4.

III.A.2.10 Summary
In this chapter we have explained the underlying methodology for the three basic VaR model
approaches – analytic, historical simulation and Monte Carlo simulation. A main focus of this


chapter has been the analytic VaR models that are only valid when portfolio values are linearly
related to the underlying risk factors and portfolio returns are normally distributed. We have also
considered simple analytic formulae for VaR based on delta-gamma approximation to simple
portfolios of standard European options.

However, in reality things are not that straightforward. Portfolio returns are not normally
distributed and, as is evident from Chapter I.B.9, options portfolios typically contain products
with many underlying risk factors and various exotic features. The next chapter of the Handbook
will consider how the basic VaR methodology that we have introduced here may be extended to
more realistic assumptions about the products traded and the behaviour of asset returns.



III.A.3: Advanced Value at Risk Models


Carol Alexander and Elizabeth Sheedy1,2

III.A.3.1 Introduction
The previous chapter introduced the three basic VaR models: the analytic model and two
simulation models, one that is based on historical observations and another that uses a covariance
matrix to generate correlated scenarios by Monte Carlo (MC) simulation. The fundamental
assumptions applied in the basic forms of each model can be summarised as in Table III.A.3.1. A
number of variations on these assumptions are in common use and some of these will be
reviewed later in this chapter.

Table III.A.3.1: Basic VaR model assumptions


                                     Analytic VaR         Historical VaR            Monte Carlo VaR
Risk Factor/Asset Distributions      Normal               No assumption             Normal
P&L Distribution                     Analytic (Normal)    Empirical (Historical)    Empirical (Simulated)
Requires Covariance Matrix?          Yes                  No                        Yes
Risk Factor/Asset Returns i.i.d.?3   Yes                  Yes                       Yes

Clearly the analytic and the MC VaR models are very similar. Indeed, if one were to apply the
basic MC VaR model above to a linear portfolio – i.e. one without options, for which the
portfolio value is a linear function of the underlying risk factors – the VaR estimate should be
similar to the analytic VaR model estimate. If it were different, that would be because not enough
simulations were used and a ‘small sample’ error had been introduced. But of course, one would
not apply MC VaR to a linear portfolio; this method is quite computationally intensive, and
would only be applied to portfolios containing options.

The Excel spreadsheet SimpleVaR.xls examines the VaR of a simple portfolio ($1000 a point on
the Johannesburg Top40 index, actually). From the single historical price series in the spreadsheet
we compute the 1% 10-day VaR as:
• $878,233 according to the analytic VaR model,
• $839,829 according to the historical VaR model,
• something close to $878,233 (hopefully) according to the MC VaR model.

1 Carol Alexander is Chair of Risk Management and Director of Research, ISMA Centre, Business School, University of Reading, UK. Elizabeth Sheedy is Associate Professor, Applied Finance Centre, Macquarie University, Australia.
2 Many thanks to Kevin Dowd and David Rowe for their careful editing of this chapter.
3 i.i.d. stands for ‘independent and identically distributed’.

The MC VaR model result changes every time we perform the simulation. In this spreadsheet
only 1000 simulations are used so the result can vary a lot each time we press F9 to generate new
simulations. However, if we used 100,000+ simulations, the MC VaR result would be $878,233
(or very close to that) as in the analytic VaR model.
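The convergence of MC VaR to analytic VaR for a linear position can be seen with a minimal sketch such as the one below. This is not the SimpleVaR.xls workbook: the exposure, the 10-day volatility and the random seed are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
position = 1_000_000          # assumed exposure
sigma_10d = 0.05              # assumed 10-day return volatility
z_99 = 2.326                  # 1% quantile of the standard normal (in absolute value)

analytic_var = z_99 * sigma_10d * position

for n in (1_000, 100_000):
    pnl = position * rng.normal(0.0, sigma_10d, n)   # simulated 10-day P&L
    mc_var = -np.percentile(pnl, 1)                  # loss at the 1st percentile
    print(n, round(analytic_var), round(mc_var))
# With 1,000 simulations the MC estimate is noisy; with 100,000 it sits very
# close to the analytic figure.
```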

The main problem with the basic analytic and MC VaR models is that they are both based on the
very restrictive assumptions that risk factor (or asset) returns are i.i.d. normally distributed. Most
practitioners are aware that the assumption of normality is violated in reality, with returns often
exhibiting skewness and leptokurtosis. This has led to the popularity of the historical simulation
method.

The historical simulation method, which can be applied to any type of portfolio, can give results
quite different from the other two methods, even for an extremely simple portfolio as shown
above. If the portfolio P&Ls are non-normal, then the historical VaR should be more accurate,
assuming the sample contains enough data points to estimate the 1% lower percentile of the
historical distribution with sufficient accuracy. But this requires a very large amount of historical
data, at least 1000 days.4 Even if the data were available, the portfolio price series is computed
over these 1000 or so days using the current portfolio weights, and this may not be plausible
(would you have traded the same portfolio 5 years ago, when the market was in all probability
quite different?). And even if the current weights are plausible, we still have to assume that
returns are i.i.d. and use the ‘square root of time’ rule to compute the 10-day VaR (see Section
III.A.3.2).5

The use of a covariance matrix in the analytic and the MC VaR models has both advantages and
limitations. One advantage is that the covariance matrix will not normally be based on a very
large amount of historical data. Regulators, for some reason, require at least one year of historical
data to be employed when computing regulatory capital using a VaR model (see Section
III.A.3.4), but for internal purposes less than a year of data may be used as, for instance, in
Section III.A.3.4.1. Hence the assumption of current portfolio weights is not as problematic as it is
with the historical VaR model. However, an important limitation is that the use of a single
covariance matrix assumes that all co-variations between risk factors are linear; but they are not

4 Regulators recommend 3–5 years of daily data, i.e. 750–1250 observations.


5You cannot use 10-day returns instead, since these would have to be non-overlapping, and one would need a history
spanning decades to obtain enough data!


linear. It is well known that the extreme variations in major risk factors can be more highly
correlated than the ‘normal’ variations. We shall address this issue in Section III.A.3.5.3.

Thus, all three of the approaches are defective in different ways when the basic assumptions are
used. The nub of the problem is this: how can VaR be estimated using assumptions that are
consistent with the stylised facts we observe in financial markets? Most of this chapter is directed
at this problem, which is fundamentally concerned with model risk. Section III.A.3.2 examines
the standard distributional assumptions more closely and the ways in which they are violated. It
concludes, somewhat counter-intuitively, that the most pressing problem for those modelling
market risks is not heavy tails, but volatility clustering. Accordingly, Section III.A.3.3 examines
two different approaches to modelling volatility clustering: EWMA and GARCH. These models
are then applied to the problem of VaR estimation in Section III.A.3.4. Section III.A.3.5
examines some other solutions to the problem of heavy tails in VaR estimation: Student’s t, EVT
and normal mixtures. The remaining sections of the chapter address some other advanced topics
in VaR modelling. It is often desirable to decompose VaR into components for purposes of limit
setting and performance measurement. These decomposition techniques are considered in
Section III.A.3.6. Section III.A.3.7 examines principal component analysis, an important tool for
analysing bond and futures portfolios. Section III.A.3.8 concludes the chapter.

III.A.3.2 Standard Distributional Assumptions


When measuring risk, for example in a VaR calculation, it is often necessary to make some
assumption about the distribution of portfolio returns. The most widespread choice is the
assumption that returns are i.i.d. normal. That is, each return is an independent realisation from
the same normal distribution (see Section II.E.4.4). Normality implies that the distribution can be
completely described with only two parameters: the mean and the variance.

Yet many financial analysts are sceptical about the assumption of normality. Under normality both the skewness and the (excess) kurtosis should equal zero, but this is rarely the case in financial data. To illustrate this point, let us
consider daily returns for the USD/JPY for the period from January 1996 to July 2004 as
illustrated in Figure III.A.3.1 and described in Table III.A.3.2.

Table III.A.3.2 highlights the stylised facts or characteristics commonly observed in financial
returns:
• The mean of the daily returns is very close to zero.
• Risk is expressed in two ways – standard deviation per day and volatility, which is the annualised standard deviation. If we assume returns are i.i.d. then the square root of time rule applies. This rule is explained in Section II.B.5.3. It states that the standard deviation of h-day returns is √h × the standard deviation of one-day returns;6 or equivalently, that volatility is constant.7 Note that throughout this Handbook we adopt the standard convention of quoting volatility on an annual basis.
• The skewness (see Section II.B.5.5) is estimated using the Excel function ‘SKEW’ and is found to be negative.
• The excess kurtosis (see Section II.B.5.6) is estimated using the Excel function ‘KURT’ and is found to be positive, indicating that the distribution has heavy tails relative to the normal.

Figure III.A.3.1: USD/JPY returns, Jan. 1996 to July 2004



The implication of negative skewness and positive excess kurtosis is that the true probability of a
large negative return is greater than that predicted under the normal distribution. This finding
has potentially grave consequences for the measurement of VaR at high confidence levels. If
VaR is calculated under the assumption of normality, yet the actual data have heavy tails, then
VaR will understate the true risk of a disastrous outcome. Consequently, capital reserves may be
an insufficient buffer to withstand disaster at the desired confidence level.

6 To be precise, the rule is based on log returns, defined as ln(Pt+h/Pt) where Pt is the price at time t. Log returns have the nice property that the sum of h consecutive one-day log returns is the h-day log return. Also, if h is small, it is easy to show that log returns are very close indeed to the ‘absolute’ return (Pt+h – Pt)/Pt.
7 To see this, let σ denote the standard deviation of one-day returns. Then the volatility based on one-day returns = σ√250. Under the square root of time rule, the standard deviation of h-day returns is σ√h, and since there are 250/h time periods of length h days per annum, the volatility based on h-day returns = σ√h × √(250/h), which also equals σ√250. So in Table III.A.3.2, volatility = 0.00718 × √250 = 0.1135.


Table III.A.3.2: Analysis of USD/JPY returns, Jan. 1996 to July 2004

Number of observations                                                       2161
Mean return                                                                  2.60047 × 10⁻⁵
Standard deviation per day                                                   0.718%
Volatility                                                                   11.35%
Skewness                                                                     –0.8255
Excess kurtosis                                                              6.241
No. of observations below the lower bound of the 99% confidence interval     48 (vs. 22 expected)

Under the normal distribution, the lower bound of the 99% one-tailed confidence interval is
defined by the mean less 2.33 standard deviations. In this case the lower bound equals – 2.33 ×
0.00718 = –0.0167, so we should expect that only 1% of the return observations will be lower
than this figure. With 2161 observations, only 22 returns (1% of 2161) should lie in this lower
tail. In fact, examination of these data reveals that 48 returns are below the lower bound.
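This kind of tail count can be reproduced with a few lines of code. The sketch below is illustrative only: the function name is my own, and it uses simple population moments, so the figures will differ slightly from Excel's SKEW and KURT, which apply small-sample corrections.

```python
import numpy as np

def normality_check(returns, z=2.33):
    """Skewness, excess kurtosis and lower-tail count under a constant-volatility
    normal model (simple population moments; Excel's SKEW/KURT apply small-sample
    corrections, so their figures will differ slightly)."""
    r = np.asarray(returns, dtype=float)
    mu, sd = r.mean(), r.std(ddof=1)
    centred = r - mu
    skew = (centred**3).mean() / sd**3
    excess_kurt = (centred**4).mean() / sd**4 - 3.0
    n_tail = int((r < mu - z * sd).sum())            # observations below mean - 2.33 sd
    return skew, excess_kurt, n_tail, 0.01 * len(r)  # last item: expected 1% count
```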

This kind of analysis is often performed to demonstrate that finance data violate the assumption
of normality. The problem with this analysis, however, is that it incorrectly assumes that
volatility has remained constant for the entire sample period of 8.5 years! Reviewing Figure
III.A.3.1, we can see that the volatility of USD/JPY returns has varied considerably, with the
period of greatest volatility being 1998. This was a time when most financial markets exhibited
high risk following the Russian crisis. In 1998 we see that oscillations about the mean were much
greater than at other times. High-volatility periods are characterised by large returns, both
positive and negative. Note that it is the absolute size of the return that is important rather than
direction. On one day at the height of the Russian crisis, the US dollar depreciated by more than
6% against the Japanese yen. The largest positive return (in excess of 3%) occurred during the
same period of high volatility.

Of the 48 observations in the lower tail of the confidence interval, 18 fall in 1998, a period of
very high volatility. This leads us to an alternative way of understanding the data: the large
number of observations that appear in the lower tail is a result of changes in volatility throughout
the sample period. Sometimes, as in 1998, volatility is much higher and so the confidence
interval is correspondingly wider. Instead of a lower bound of –0.0167, the lower bound would
be very much lower (for example, –0.0335 if volatility were to double). If we take account of the
increase in volatility it is likely we will find that the number of observations in the tail is close to
1% as expected. In other words, the main problem with financial data is not skewness/kurtosis
(often referred to as heavy tails), but the fact that volatility is changing over time. In short, if we


take account of these changes in volatility then the assumption of normality may actually be quite
a reasonable one.

This brings us to a further discussion of ‘i.i.d.’ returns.8 If returns are ‘identically distributed’,
then the parameters of the distribution (mean and variance) should be constant over time. This
assumption is clearly violated since the variance parameter is changing. If returns are
independent, then yesterday’s return will have no bearing on today’s return. Tossing coins is a
good example of independent outcomes. Even if I toss 10 heads in a row, the probability of
heads on the next toss is still 50% (assuming a fair coin!). In other words, the next toss is
unaffected by what has gone before.

However, the empirical analysis of financial returns shows that the assumption of independence
is unsupported: the size (but not the direction) of yesterday’s return does have implications for
today’s return. A large return yesterday (in either direction) is likely to be followed by another
large return in either direction. This concept is commonly referred to as ‘the heat wave effect’ or
‘volatility clustering’. We can test for volatility clustering by examining the autocorrelation in
squared returns. Squared returns are used because squaring removes the sign of the return – we
can focus purely on its magnitude rather than its direction. Financial data generally exhibit significant positive autocorrelation in squared returns, in which case the data are not i.i.d.

Volatility clustering is arguably the most important empirical characteristic of financial data. The
most useful financial models take account of this fact. Indeed, the world of finance research was
for ever changed in the 1980s when volatility clustering was first identified by Robert Engle and
his associates. In 2003 Robert Engle won (jointly) the Nobel Prize for Economics, largely
because of his groundbreaking contribution in this area.

To summarise this section, we can say that traditional financial models have often assumed that
returns are i.i.d. normal. In reality we observe changes in volatility and volatility clustering. We
will show in the following sections that these characteristics can be modelled successfully.
Having taken account of volatility clustering, the issue of heavy tails becomes less significant.

8 See Section II.B.5.3 for further discussion of this concept.


III.A.3.3 Models of Volatility Clustering


If we believe that today’s volatility is positively correlated with yesterday’s volatility, then it is
appropriate to estimate conditional volatility, that is, volatility that is conditional on the recent past.
We will discuss two methods for achieving this: exponentially weighted moving average
(EWMA) and generalised autoregressive conditional heteroscedasticity (GARCH). The latter is
more difficult to implement but offers some potential advantages. The application of these
methods to VaR models will be discussed in Section III.A.3.4.

III.A.3.3.1 Exponentially Weighted Moving Average


The EWMA method for estimating volatility was popularised by the RiskMetrics Group. Today's estimate of variance is:

σ̂t² = (1 – λ) rt–1² + λ σ̂t–1²,     (III.A.3.1)

where λ is a ‘smoothing constant’ and rt–1 is the most recent day's return.9 Notice that since λ is positive, today's variance will be positively correlated with yesterday's variance, so we see that EWMA captures the idea of volatility clustering. The parameter λ may also be referred to as the ‘persistence’ parameter. The higher the value of λ, the more will high variance tend to persist after a market shock. The EWMA variance also reacts immediately to market shocks. If yesterday's return is large, in either direction, the variance will increase through the first term on the right-hand side of (III.A.3.1). The greater is 1 – λ, the greater will be the size of the reaction to a return shock.
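A minimal sketch of the recursion (III.A.3.1) is shown below. The function name is my own, and seeding the first variance with the sample variance is a pragmatic assumption rather than part of the RiskMetrics specification.

```python
import numpy as np

def ewma_variance(returns, lam=0.94, seed_var=None):
    """EWMA conditional variance via the recursion (III.A.3.1):
    sigma2[t] = (1 - lam) * r[t-1]**2 + lam * sigma2[t-1].
    Seeding sigma2[0] with the sample variance is a pragmatic choice."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty(len(r))
    sigma2[0] = seed_var if seed_var is not None else r.var()
    for t in range(1, len(r)):
        sigma2[t] = (1 - lam) * r[t - 1]**2 + lam * sigma2[t - 1]
    return sigma2

# Annualised EWMA volatility, as plotted in Figure III.A.3.2:
# ewma_vol = np.sqrt(250 * ewma_variance(daily_returns))
```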

In Section III.A.3.3.2 we shall see that this type of reaction to, and persistence following, a
market shock is the characteristic of a GARCH model. In fact, the EWMA can be thought of as
a very simple GARCH model. However, whilst GARCH volatility models are based on firm
statistical foundations, EWMA volatility models are not, for the following reasons:
• There is no proper statistical estimation procedure for the smoothing constant: the user simply assumes some value for λ. According to RiskMetrics, a value for λ of around 0.94 is generally appropriate when analysing daily data.10
• In portfolio models, where the variance is calculated using a covariance matrix, some portfolios can have negative variance unless this matrix is positive semi-definite (see Section II.D.3.2). For this reason one cannot form an EWMA covariance matrix using different values of λ for different assets: in fact the same value of λ must be used for all variances and covariances in the matrix. More details are given in Section III.A.3.4.1.
• The EWMA variance estimate is converted to volatility by taking the square root, to obtain the standard deviation, and then applying the square root of time rule:

EWMA volatility estimate at time t for a horizon of h days = σ̂t √h × √(250/h) = σ̂t √250.

But, as explained in Section III.A.3.2, this is equivalent to a constant volatility assumption. Hence the EWMA model is not really appropriate for estimating the market evolution over time horizons longer than a few days.

9 This can be calculated in Excel by typing (III.A.3.1) in the formula bar, or by using the exponential smoothing analysis tool. Note that, technically, in (III.A.3.1) the volatility estimate depends on the entire historical data set rather than a limited past history. A pragmatic alternative calculation that is much easier to replicate for the auditors is to truncate the historical sample at n past observations, where n is defined as the point at which λ^n drops below some critical value C. For instance, if λ = 0.94 and C = 0.002 then n = 100 observations. The EWMA on n observations can then be written as a finite, normalised weighted sum:
σ̂t² = [ Σ(i=1..n) (1 – λ) λ^(i–1) rt–i² ] / [ Σ(i=1..n) (1 – λ) λ^(i–1) ].
10 See Section 5.3.2 of RiskMetrics Technical Document, 1996, available at: www.riskmetrics.com/techdoc.html#rmtd

Figure III.A.3.2: EWMA volatility – USD/JPY returns, Jan. 1996 to July 2004

[Line chart of EWMA volatility (% pa), February 1996 to February 2004; vertical axis from 0% to 40%.]

Figure III.A.3.2 highlights the variability of conditional volatility. Here the EWMA model has
been applied with ƫ = 0.94. Recall from the previous section that unconditional volatility for this
same time period was calculated as 11.35% pa. Following the market shocks of 1998, conditional
volatility reaches a maximum of 35%. In the quieter periods of 1996 and early 2004, conditional
volatility briefly dips below 5% pa.

Having estimated conditional volatility in this way, we can then standardise the daily returns.
That is, each daily return is divided by the relevant standard deviation. We do this to try to make


the returns comparable. Prior to standardisation, each return comes from a distribution having a
different volatility and is therefore not strictly comparable. We then repeat the analysis of Table
III.A.3.2, but this time using the standardised returns (see Table III.A.3.3).

Table III.A.3.3: Analysis of standardised USD/JPY returns, Feb. 1996 to July 2004

Number of observations                                                       2140
Skewness                                                                     –0.1974
Excess kurtosis                                                              0.2727
No. of observations below the lower bound of the 99% confidence interval     29 (vs. 21 expected)

Having standardised returns on conditional volatility, we find that the problems noted earlier are
much diminished. That is, skewness and excess kurtosis are now much closer to zero. The
number of observations in the lower tail is much closer to that expected under normality. This
suggests that the issue of heavy tails, often noted by finance analysts, can be partly explained by
volatility clustering.
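The standardisation step can be sketched as follows. This is an illustration, not the workbook's calculation: it re-uses the hypothetical ewma_variance() function from the earlier sketch, and it drops the seeded first observation (Table III.A.3.3 likewise has slightly fewer observations than Table III.A.3.2, presumably because a short burn-in period is discarded).

```python
import numpy as np
# Assumes ewma_variance() from the earlier sketch is in scope.

def standardise(returns, lam=0.94):
    """Divide each return by its EWMA conditional standard deviation. By construction,
    sigma2[t] is built only from returns up to day t-1 (plus the seed), so no
    contemporaneous information is used; the seeded first observation is dropped."""
    r = np.asarray(returns, dtype=float)
    sigma2 = ewma_variance(r, lam)
    return r[1:] / np.sqrt(sigma2[1:])
```

The standardised series can then be passed to the same skewness, kurtosis and tail-count checks used for the raw returns.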

III.A.3.3.2 Generalised Autoregressive Conditional Heteroscedasticity Models


GARCH models are similar to EWMA in that both focus on the issue of volatility clustering.
The word heteroscedasticity is Greek for ‘different scale’, so GARCH models are concerned with
the process by which the scale of returns, or volatility, is changing. GARCH models are
‘generalised’ in the sense that they can be varied, almost infinitely, to take account of the factors
specific to a particular market. What all GARCH models share, however, is a positive correlation
between risk yesterday and risk today; that is, an ‘autoregressive’ structure in risk.

The simplest GARCH model consists of two equations that are estimated together. The first is the conditional mean equation and the second is for the conditional variance:

rt = c + εt
σt² = ω + α εt–1² + β σt–1²     (III.A.3.2)
ω > 0,  α, β ≥ 0.

The parameters ω, α and β of the GARCH model are usually estimated using maximum likelihood estimation (MLE). This involves using an iterative method to find the maximum of the likelihood function. In normal GARCH models, i.e. when the distribution of εt conditional on all information up to time t is normal with variance σt², the likelihood function is built from the product of these conditional normal densities (see Section II.E.4.7).


In its simplest form (shown here) the conditional mean equation merely adjusts for the mean (c), leaving an ‘unexpected’ return εt. The conditional variance equation appears very similar to the EWMA equation, where β stands in place of λ, α stands in place of 1 – λ, and an extra constant term (ω) is also included. Perhaps the most important difference between EWMA and GARCH is the fact that in GARCH there is no constraint that the sum of the coefficients (α + β) should equal one. It is possible that data are best explained when the estimated coefficients do add to one,11 but in this case the GARCH forecasts behave just like those from a constant volatility model. If the sum α + β is less than one (the more usual case) then volatility is said to be mean-reverting and the rate of mean reversion is inversely related to this sum. This means that the variance will, in the absence of a market shock, tend towards its steady-state variance defined by

σ² = ω / (1 – α – β).     (III.A.3.3)

The GARCH volatility forecasts then behave like the volatility term structures we observe in implied volatilities, where implied volatilities of long-term options do not vary as much as the implied volatilities of short-term options. From (III.A.3.3) we can see that if α + β = 1 (as in EWMA) then the denominator equals zero, so the steady-state variance is undefined. In this case variance is not mean-reverting and is assumed constant as we project forwards in time.

Example III.A.3.1: GARCH model for spot USD/JPY


We estimate a GARCH(1,1) model for daily log returns from January 1996 to July 2004. The
conditional variance equation is estimated using a maximum likelihood technique:
σt² = 3.78 × 10⁻⁷ + 0.03684 εt–1² + 0.95571 σt–1².

The constant term (3.78 × 10⁻⁷) is statistically significant but very small. Conditional variance for the USD/JPY is quite persistent (with a persistence coefficient of 0.95571) and not particularly reactive (reaction coefficient of 0.03684) compared with some other markets. This means that the initial reaction to new information will be more muted, but its effect relatively long-lasting. As the sum of these two coefficients is less than unity, we can say that conditional variance is mean-reverting to a steady-state variance defined by (III.A.3.3). Substituting the estimated parameters into this equation, the steady-state variance is 0.0000505794, equivalent to an annualised volatility of 11.24%. This is close to the sample volatility of 11.35% p.a. in Table III.A.3.2.

11 When this happens, the model is called an integrated GARCH (IGARCH) model. The IGARCH is most commonly

used for modelling currency returns.
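Given a set of estimated parameters, filtering the conditional variance series and computing the steady-state volatility is straightforward (the MLE step itself requires a numerical optimiser and is not shown). The sketch below uses the coefficients from Example III.A.3.1; the function name and the choice to initialise at the steady-state variance are my own assumptions.

```python
import numpy as np

def garch_variance(returns, omega, alpha, beta, c=0.0):
    """Conditional variance series from a GARCH(1,1) with given (already estimated)
    parameters, following the recursion in (III.A.3.2)."""
    eps = np.asarray(returns, dtype=float) - c       # residuals from the mean equation
    sigma2 = np.empty(len(eps))
    sigma2[0] = omega / (1 - alpha - beta)           # initialise at the steady state
    for t in range(1, len(eps)):
        sigma2[t] = omega + alpha * eps[t - 1]**2 + beta * sigma2[t - 1]
    return sigma2

# Coefficients from Example III.A.3.1 (USD/JPY):
omega, alpha, beta = 3.78e-7, 0.03684, 0.95571
steady_state_vol = np.sqrt(250 * omega / (1 - alpha - beta))
print(round(steady_state_vol, 4))
# ≈ 0.1126, i.e. about 11.3% pa; the text's 11.24% reflects unrounded coefficient estimates.
```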


Figure III.A.3.3: GARCH vs. EWMA volatility – USD/JPY returns

[Line chart of EWMA and GARCH volatility estimates (% pa), February 1996 to February 2004.]
Figure III.A.3.3 shows the conditional volatility resulting from the GARCH model. The two
series, EWMA and GARCH, track each other closely, so the figure emphasises the similarity
between the two approaches.

We have argued that volatility clustering is the main cause of the apparent heavy tails observed in
financial data. If GARCH modelling is successful, then it should account for these heavy tails,
leaving residuals that are i.i.d. normal. Is this in fact the case? It can be tested by examining the
series of standardised returns. To obtain the standardised returns we estimate a GARCH model
and from this create a series of daily conditional standard deviation estimates. We then divide
each daily return by the relevant conditional standard deviation to obtain a standardised return,
which we then test for normality. If this series is normally distributed then we can conclude that
volatility clustering fully explains the extreme moves. If not, then further action may be
necessary to adjust for the extreme moves (or for the heavy tails of the distribution). For
example, it may be necessary to use a more complex GARCH specification. For instance, we
might use an asymmetric specification for the conditional variance that can account for a
‘leverage effect’ in equities, and/or a conditional distribution for εt that is non-normal. There is a
huge body of research literature on GARCH models, but this is beyond the scope of the PRM
exam.
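
As an illustration of this diagnostic, the Python sketch below fits a GARCH(1,1) model, standardises the returns by the fitted conditional volatility and tests the result for normality. It assumes the third-party arch package is installed; the function name and the use of a Jarque–Bera test are illustrative choices rather than part of the original text.

import numpy as np
from scipy import stats
from arch import arch_model   # third-party package, assumed to be installed

def standardised_return_diagnostics(returns):
    """Fit a constant-mean GARCH(1,1), standardise the returns and test them for normality."""
    # Rescaling the returns to percentages helps the optimiser converge
    result = arch_model(100 * np.asarray(returns, dtype=float), p=1, q=1).fit(disp="off")
    # Standardised return = unexpected return divided by its conditional standard deviation
    std_returns = result.resid / result.conditional_volatility
    excess_kurt = stats.kurtosis(std_returns, fisher=True)   # close to zero if normal
    jb_stat, jb_pvalue = stats.jarque_bera(std_returns)      # small p-value: reject normality
    return excess_kurt, jb_stat, jb_pvalue

# Example usage with a hypothetical series of daily log returns:
# print(standardised_return_diagnostics(daily_log_returns))

If the test rejects normality of the standardised returns, a richer specification (an asymmetric GARCH, or a non-normal conditional distribution) can be tried, as discussed above.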


III.A.3.4 Volatility Clustering and VaR


Volatility clustering has important implications for VaR. Consider the situation in which new
and unexpected information comes to the market, as it did when Russia defaulted on its debt in
1998, causing a large reaction in market prices. Our knowledge of volatility clustering tells us that
this market shock is likely to be followed by large returns (in either direction) for some time.
Ideally, our VaR measure will increase significantly, sending the appropriate signal to risk
managers either to reduce risk through hedging or to ensure that capital is adequate to withstand
the higher risk environment.

Figure III.A.3.4: USD/JPY volatility in 1998
[Figure: EWMA conditional volatility and rolling one-year volatility (% p.a.), Jan-98 to Dec-98.]

Figure III.A.3.4 compares two volatility measures for USD/JPY returns for calendar year 1998.
The EWMA measure of conditional volatility reacts swiftly as the crisis unfolds in mid-1998.
The other series shows unconditional volatility calculated using rolling one-year samples. Since
each day in the sample is equally weighted, and has a small weight of only 1/250, this measure is
very slow to react to market shocks.

The choice of volatility measure will have major implications for the VaR measure and
consequently for capital adequacy. By failing properly to take account of volatility clustering, the
risk is that financial institutions will take unduly large risks (or will hold insufficient capital) in
periods of market crisis. In addition, they will hold too much expensive capital at other times.

Backtests of such models will reveal clusters of exceptions where the actual losses exceed VaR.
Unfortunately, the timing of exceptions is often ignored in traditional backtesting procedures
which focus only on the number of exceptions (see Section III.A.2.8). Both regulators and risk


managers are concerned about the timing and size of exceptions. A series of large losses in quick
succession is potentially far more serious for solvency than smaller losses spread over time, even
if the total combined losses are equal.

For reasons that remain unclear to the authors, the Basel regulations relating to market risk
currently require financial institutions to measure volatility using at least one year of data. This
regulation encourages the use of volatility measures that react very slowly to new information
such as the one illustrated above. In our view, this is exactly the wrong way to proceed.12

III.A.3.4.1 VaR using EWMA


Volatility clustering can be relatively easily incorporated into VaR measures using the EWMA
approach. There are at least three ways that this can be done:
x Historical simulation using volatility weighted data. This method is explained in detail in
Section III.A.2.6.2. Historical returns are standardised using conditional volatility
estimates calculated using EWMA. This approach has a number of attractions, especially
for option-affected portfolios. Historical simulation makes no assumption about the
distribution of historical returns apart from independence. Thus it is attractive if it is
feared that the standardisation process has not entirely eliminated the heavy tails evident
in raw returns.
x MC simulation using EWMA. Returns could be simulated under the assumption of
normality, but using a covariance matrix created using EWMA. Again, this method is
most relevant for option-affected portfolios.
x Analytical VaR using EWMA. We will explain this method here, building on the
discussion in Section III.A.2.4.

Of these three methods, the last two make use of the assumption that returns are conditionally
normally distributed. In Section III.A.3.3 we explained that normality is a much more reasonable
assumption to make if we properly account for changes in volatility. One way to do this is to
calculate the covariance matrix using the EWMA measures of variance and covariance. This is
preferable to the standard measures of unconditional variance and covariance, which are slow to
respond to new information when estimated using long samples.13

Equation (III.A.3.1) shows the EWMA equation for variance. The analogous equation for
covariance between assets 1 and 2 is:

12 See the discussion of ‘historical observation period’ in Basel Committee, ‘Amendment to the Capital Accord to Incorporate Market Risks’, January 1996, p. 44.
13 See Sections II.B.5 and II.B.6 for descriptions of these standard measures.


σ̂12,t = (1 – λ) r1,t–1 r2,t–1 + λ σ̂12,t–1,

where r1,t–1 and r2,t–1 are yesterday's returns for assets 1 and 2, respectively. Note that when
constructing a large covariance matrix it is always important to ensure that it is positive semi-
definite, otherwise portfolio volatility may not be defined (see Section II.D.3.2). If we use a
different value of λ for each variance and covariance term, the matrix will not necessarily be
positive semi-definite. RiskMetrics gets around this problem by using the same value for λ
(being 0.94 in the case of daily data) throughout the covariance matrix.
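
For concreteness, a minimal sketch of the EWMA recursion for a pair of return series is given below, using the RiskMetrics value λ = 0.94 for daily data. Seeding the recursion with the sample moments of the first few observations is an arbitrary illustrative choice.

import numpy as np

def ewma_cov(r1, r2, lam=0.94):
    """Recursive EWMA estimates of two return variances and their covariance."""
    r1, r2 = np.asarray(r1, dtype=float), np.asarray(r2, dtype=float)
    # Seed the recursion with the sample moments of the first few observations (an arbitrary choice)
    var1, var2 = np.var(r1[:20]), np.var(r2[:20])
    cov12 = np.cov(r1[:20], r2[:20])[0, 1]
    for t in range(len(r1)):
        var1 = (1 - lam) * r1[t] ** 2 + lam * var1
        var2 = (1 - lam) * r2[t] ** 2 + lam * var2
        cov12 = (1 - lam) * r1[t] * r2[t] + lam * cov12
    # Using the same lambda for every term keeps the implied covariance matrix positive semi-definite
    return np.array([[var1, cov12], [cov12, var2]])

# Example usage with simulated returns:
# rng = np.random.default_rng(0)
# r = rng.multivariate_normal([0, 0], [[1e-4, 4e-5], [4e-5, 2e-4]], size=1000)
# print(ewma_cov(r[:, 0], r[:, 1]))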

When calculating VaR we need to forecast portfolio variance for the horizon of interest, say, 10
business days. Unlike GARCH, EWMA is a non-stationary model of variance. That is, under
EWMA, variance is not mean-reverting. This means that the EWMA forecast of any future
variance (or covariance) is the same as the estimate made today. Equivalently, as explained in
Section III.A.3.3.1, the average volatility over any forecast horizon is a constant, equal to the
volatility estimated today.

Once the covariance matrix has been defined, it can then be used for VaR calculations using
either:
x the analytical method (appropriate for simple linear portfolios) or
x MC simulation (best for option-affected portfolios).

In the analytical method (see Section III.A.2.4) the h-day VaR estimate at the significance level α is
given by:

VaRα,h = Zα P σ,    (III.A.3.4)

where Zα is the standard normal α critical value, P is the current value of the portfolio and σ is the
forecast of the standard deviation of the h-day portfolio return.

The standard deviation in (III.A.3.4) is computed using a forecast covariance matrix of h-day
returns as follows (see Section II.D.2.1):
(i) Representation at the asset level,

σ = √(w′Vw),

where w = (w1, …, wn) is the portfolio weights vector and V is the h-day forecast of
the covariance matrix of asset returns.14

14 Equivalently, VaRα,h = Zα σP&L = Zα √(p′Vp), where p = Pw is the vector of nominal amounts invested in each asset.


(ii) Representation at the risk factor level,

σ = √(β′Vβ),

where β = (β1, …, βn) is the portfolio sensitivity vector and V is the h-day forecast of
the covariance matrix of risk factor returns.15

To calculate analytic VaR using EWMA we simply calculate today's portfolio variance as above.
To convert to a 10-day horizon we simply multiply the one-day VaR by √10; or we use a 10-day
covariance matrix for V. In either case, the results will be the same. The 10-day matrix can be
obtained, using the square root of time rule, by multiplying every element of the one-day
covariance matrix by 10. But note that these two conversion methods assume both constant
volatility and no serial correlation in the forward projection of volatility, which is one reason why
the EWMA VaR estimate does not fully reflect volatility clustering.

By contrast, for non-linear portfolios the MC VaR method uses a covariance matrix, and you
should be sure to use the 10-day covariance matrix directly in the simulations. You will get an
incorrect result if you simulate the one-day VaR and multiply the result by √10.

Example III.A.3.2: Analytical method with EWMA for two-asset portfolio


Consider a simple portfolio with $1m invested in asset 1 and $2m invested in asset 2. Suppose
the EWMA model estimated on daily returns for asset 1 gives a variance estimate of 0.01, for
asset 2 returns the EWMA variance is 0.005 and the EWMA covariance is 0.002. What is the 5%
10-day VaR?

We have p = (1, 2)′ and V is the matrix (0.01, 0.002; 0.002, 0.005). So the P&L volatility =
√(p′Vp) = √0.038 = $0.195m and the 5% one-day VaR is 1.645 × 0.195 = $0.3207m. Thus the 5% 10-day
VaR = 0.3207 × √10 = $1.014m.16

This is high, because the assets have a very high variance (and covariance). For instance, a daily
variance of 0.01 corresponds to an annual volatility of √2.5 = 158%! In fact, the 1% 10-day VaR
for this portfolio is 2.33 × 0.195 × √10 = $1.43m, so almost half the amount invested would be
required for risk capital to cover this position! However, the example illustrates an important

15 Equivalently, VaRα,h = Zα σP&L = Zα √(p′Vp), where p = Pβ is the vector of sensitivities to each risk factor in nominal terms.
16 Of course, the same result would be obtained if V is the given matrix but with each element multiplied by 10 and we calculated Zα √(p′Vp).


weakness in the use of the analytic VaR method for long-only positions. Being based on the
assumption that portfolio P&L is normally distributed, there is a chance, however small, that the
estimated VaR will be more than the total investment. For a long-only position, this clearly is a
nonsensical result.
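
Returning to the arithmetic of Example III.A.3.2, the calculation is easily reproduced in a few lines of Python; the sketch below simply applies formula (III.A.3.4) with the one-day EWMA covariance matrix given above and the square root of time rule.

import numpy as np
from scipy.stats import norm

p = np.array([1.0, 2.0])                       # positions in $m
V = np.array([[0.01, 0.002],                   # one-day EWMA covariance matrix
              [0.002, 0.005]])

pnl_vol = np.sqrt(p @ V @ p)                   # one-day P&L standard deviation = sqrt(0.038), about 0.195
z = norm.ppf(0.95)                             # 1.645 for the 5% significance level
var_1day = z * pnl_vol                         # about $0.32m
var_10day = var_1day * np.sqrt(10)             # square root of time rule, about $1.01m

print(f"5% 1-day VaR:  ${var_1day:.4f}m")
print(f"5% 10-day VaR: ${var_10day:.4f}m")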

Example III.A.3.3: Analytical method for portfolio mapped to two risk factors
Suppose a US investor buys $2m of shares in a portfolio of UK (FTSE100) shares, and the
portfolio beta is 1.5. Suppose the FTSE100 and GBP/USD volatilities are 15% and 20%
respectively (with corresponding variances of 0.15² = 0.0225 and 0.20² = 0.04) and their
correlation is 0.3. What is the 1% 10-day risk factor VaR in US dollars?

We have two risk factors (FTSE and GBP/USD) with p = (3, 2)′. Note that the $2m exposure
to the equity portfolio described above is equivalent in risk terms to a $3m exposure to the FTSE
index since beta is 1.5. The one-day variances are: 0.0225/250 = 0.00009 for the FTSE and
0.04/250 = 0.00016 for GBP/USD. With a correlation of 0.3, the one-day covariance is
(0.3 × 0.15 × 0.2)/250 = 0.000036. Hence,17

p′Vp = (3, 2) (90, 36; 36, 160) (3, 2)′ × 10⁻⁵ = (342, 428) (3, 2)′ × 10⁻⁵ = 0.01882

and the 1% 10-day VaR is therefore 2.33 × √0.01882 = $0.3196m, i.e. approximately $319,643.

In concluding this section, we can say that VaR calculations, whether analytical or simulation-
based, can be greatly improved by taking account of volatility clustering. In this regard EWMA is
far superior to the approach, unfortunately encouraged by regulators, which employs
unconditional volatility based on large samples of data.

III.A.3.4.2 VaR and GARCH


Alternatively, the issue of volatility clustering may be incorporated into VaR estimates using the
more sophisticated GARCH approach. While GARCH is undoubtedly more challenging to
implement than some other methods, it also has some distinct advantages. For example, because
GARCH is a more general model it can explain the characteristics of the data more precisely to
ensure that all evidence of non-normality and dependence is removed from the standardised
returns.

17 Note that the one-day covariance matrix is (0.00009, 0.000036; 0.000036, 0.00016), so the 10-day covariance matrix is (0.0009, 0.00036; 0.00036, 0.0016) = (90, 36; 36, 160) × 10⁻⁵.


A distinct advantage of GARCH over EWMA is that GARCH is a mean-reverting model of
volatility. This fits better with the stylised facts observed in the market; we regularly observe that
volatility will tend toward a ‘mean’ or ‘steady-state’ value after a period of unusually high or low
risk. If we are forecasting volatility over, say, the next 10 days, then it could be helpful to take
account of this mean reversion feature. We cannot do this with EWMA volatility models because
they use the square root of time rule, as explained above.

Example III.A.3.4: Forecasting volatility with GARCH


Suppose that we have estimated a GARCH model for a stock index such that the mean return is
zero and the conditional variance equation is as follows:
σt² = 5.0 × 10⁻⁶ + 0.07 εt–1² + 0.88 σt–1².

In this case the steady-state variance (using (III.A.3.3)) is equal to 0.0001, equivalent to daily
volatility of 1% and annual volatility of 1% × √250 = 15.81% p.a. Suppose that we are estimating
VaR at a time when the market has recently been unusually quiet. The current estimate of
unconditional annual volatility (using, say, one year of daily data) stands at 13.0% and today's
estimate of conditional daily volatility is 0.007746, equivalent to annual volatility of 0.7746% ×
√250 = 12.25%. Assume that today new and unexpected economic data hit the market, causing a
large return of +0.04 or 4%. To put this in perspective, a return of 4% in one day is a greater
than 5 standard deviation event (using unconditional volatility).

To forecast variance tomorrow we proceed as follows, substituting the appropriate values for
today’s shock (0.04) and today’s variance (0.00006):

σt+1² = 5.0 × 10⁻⁶ + 0.07 × 0.04² + 0.88 × 0.00006 = 0.0001698.

Notice the large size of forecast variance (0.0001698 vs. 0.00006) as variance reacts to today’s
market shock. When forecasting variance on the subsequent day (and thereafter) we do not
know the return shock so variance is forecast as:
σt+2² = 5.0 × 10⁻⁶ + (0.07 + 0.88) × 0.0001698 = 0.0001663,

and similarly for the days afterwards. Table III.A.3.4 shows the daily variance forecasts for each
day in the 10-day horizon.

While tomorrow’s forecast volatility is significantly higher than today’s, thereafter the forecast
volatility falls slightly. The GARCH model tells us that in the absence of any further shock,
volatility will gradually revert towards the steady-state volatility. We obtain a forecast of volatility


over the next 10 days by summing the 10 daily variances, multiplying by 250/10 and taking the
square root. This gives 19.7% pa. In contrast, the unconditional volatility forecast for the next 10
days is approximately 13% – today’s large return having only a marginal impact due to its small
weighting of 1/250.

Table III.A.3.4 Forecasting Volatility with GARCH


Day Variance Equivalent volatility % pa
T+1 0.0001698 20.6%
T+2 0.0001663 20.4%
T+3 0.0001630 20.2%
T+4 0.0001598 20.0%
T+5 0.0001569 19.8%
T+6 0.0001540 19.6%
T+7 0.0001513 19.4%
T+8 0.0001487 19.3%
T+9 0.0001463 19.1%
T+10 0.0001440 19.0%
10-day horizon (T+1 to T+10)    Sum = 0.0015601    19.7%
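
Table III.A.3.4 can be reproduced with the short Python sketch below, which simply iterates the conditional variance equation given above; only the estimated parameters, today's shock and today's variance are required.

import numpy as np

omega, alpha, beta = 5.0e-6, 0.07, 0.88
shock, var_today = 0.04, 0.00006               # today's return shock and conditional variance

# One-step-ahead forecast uses the observed shock ...
variances = [omega + alpha * shock ** 2 + beta * var_today]
# ... thereafter the expected squared shock equals the variance forecast itself
for _ in range(9):
    variances.append(omega + (alpha + beta) * variances[-1])

for day, v in enumerate(variances, start=1):
    print(f"T+{day}: variance {v:.7f}, volatility {np.sqrt(250 * v):.1%} p.a.")

ten_day_vol = np.sqrt(sum(variances) * 250 / 10)   # approximately 19.7% p.a.
print(f"10-day volatility forecast: {ten_day_vol:.1%} p.a.")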

The significance of this for the VaR estimate is obvious; using the GARCH conditional volatility
forecast will significantly boost the VaR, signalling to risk managers that risk should either be
substantially reduced or capital increased to withstand the new high-risk environment.

This example also illustrates the point that the square root of time rule is inappropriate in a world
of volatility clustering. The GARCH model tells us that in this situation, volatility is likely to
increase initially and then gradually decline over the 10-day horizon. To assume constant
volatility for 10 days is not suitable.

For the professional risk manager calculating VaR, the task of implementing GARCH techniques
might appear daunting. How might she actually go about it? There are a number of possibilities:
x Analytical VaR. As explained in Section III.A.3.4.1, but this time using a covariance
matrix based on GARCH variance–covariance forecasts over the risk horizon. This is a
convenient solution for linear portfolios.
x Historical simulation using volatility weighted data. As explained in Section III.A.3.4.1, but this
time standardising returns using GARCH volatility estimates. This could be done quite
simply, using a univariate GARCH model for each asset/factor. This method makes no
assumption about the distribution of standardised returns apart from independence.
x Monte Carlo simulation. A GARCH covariance matrix forecast could be used to simulate
returns going forward. This would allow for changes in volatility, as well as the


underlying, over the risk horizon – an important advantage for portfolios containing
options.

Professional risk managers are generally analysing investment or trading portfolios containing
multiple assets. To use some of the GARCH methods listed above it is necessary to evaluate an
entire covariance matrix. This can present substantial implementation problems. The difficulty
compounds as the number of assets or risk factors grows, particularly as we must ensure that the
covariance matrix is positive semi-definite. Various approaches have been suggested in the
academic literature, and these are surveyed by Bauwens et al. (2003). One very simple alternative
is to use GARCH volatilities but assume a constant correlation between assets or risk factors, as
proposed by Bollerslev (1990). Variance terms in the matrix are estimated using the simple
GARCH(1,1) specification as shown in (III.A.3.2). The covariance terms in the matrix are
formulated as follows:
σij,t = ρij σi,t σj,t,

where ρij is the sample correlation of mean-corrected returns. Ease of estimation is achieved by
constraining the correlation to be constant over the estimation period and by ignoring cross-
market effects in volatility.
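
A sketch of the constant-correlation construction is given below. The vector of GARCH volatility forecasts and the matrix of historical (mean-corrected) returns are placeholders for whatever estimates the risk manager produces; the implementation itself is only illustrative.

import numpy as np

def constant_correlation_cov(garch_vols, returns):
    """Covariance matrix from univariate GARCH volatilities and a constant sample correlation matrix."""
    garch_vols = np.asarray(garch_vols, dtype=float)
    # Constant correlations estimated from the historical returns (columns = assets/risk factors)
    corr = np.corrcoef(np.asarray(returns, dtype=float), rowvar=False)
    # sigma_ij = rho_ij * sigma_i * sigma_j
    return corr * np.outer(garch_vols, garch_vols)

# Example usage with hypothetical inputs:
# vols = [0.012, 0.009, 0.015]                  # one-day GARCH volatility forecasts
# hist = np.random.default_rng(1).normal(size=(500, 3)) * np.array(vols)
# print(constant_correlation_cov(vols, hist))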

The benefits of the GARCH approach to VaR estimation have recently been illustrated by
Berkowitz and O’Brien (2002). They have shown that a simple reduced-form GARCH
implementation produces regulatory capital estimates that are an improvement on the methods
used by some large commercial banks. The reduced-form GARCH approach applies a univariate
GARCH model directly to portfolio P&L data, thus avoiding the need for a large covariance
matrix. As in the standard historical simulation approach described in Chapter III.A.2, an
artificial history of daily P&Ls is created using the current portfolio weights. But instead of
taking a lower percentile of the empirical distribution of these P&Ls for the VaR estimate, they
apply a GARCH model to the portfolio returns series and use formula (III.A.3.4). They find that
the VaR estimates based on this type of GARCH model are more sensitive to changes in
volatility; they allow for more risk-taking (or less capital) when volatility is low and less risk-taking
(or more capital) when volatility is high. These excellent results are achieved without adversely
affecting the number of exceptions in backtests (see Section III.A.2.8 for a description of VaR
model backtests); in fact the size of the maximum exception is reduced.

In summary, GARCH techniques have the potential to greatly enhance our modelling of market
risk and to ensure that appropriate capital buffers are in place.


III.A.3.5 Alternative Solutions to Non-normality


Here we consider some approaches to estimating VaR in the face of non-normality that differ
from those considered in Section III.A.3.4 in that they do not take account of volatility
clustering. We limit ourselves to discussing only three possibilities, although many others exist.

III.A.3.5.1 VaR with the Student’s t distribution


The Student’s t distribution is often proposed as a possible candidate for describing financial
returns because of its heavy tails. It is actually a poor candidate because it assumes returns are
i.i.d. Under the Student’s t distribution each day’s return is assumed independent of the previous
day’s return; we know this is unlikely to be the case because of the heat wave effect. The
implication of this is that the t distribution will tend to underestimate VaR in periods of market
crisis – the time when risk measurement is most crucial – and overestimate VaR when market
conditions are quiet. Backtests will reveal clusters of exceptions where actual losses exceed VaR.

Nevertheless the Student's t distribution remains quite popular with some professional risk
managers. Statistical background is provided in Section II.E.4.6. The standard Student's t has
only one parameter, ν, the 'degrees of freedom'. The distribution was originally designed for
working with small samples where the degrees of freedom are one less than the sample size. As ν
approaches infinity, the distribution converges to the normal.

Under the standard Student's t distribution:

x the mean is equal to zero,
x the variance is equal to ν/(ν – 2),
x the skewness is equal to zero and
x the (raw) kurtosis is equal to 3(ν – 2)/(ν – 4).

In VaR applications we will be working with large data sets and attempting artificially to select the
parameter ν to fit the shape of the tails of the distribution (that is, to match the sample
kurtosis).18 Since the observed variance will not be equal to ν/(ν – 2) it will be necessary to scale
the variance.19 It will generally not be necessary to scale the mean as mean returns (at daily or
higher frequencies) are close to zero. Dowd (2002) explains how to adapt the standard Student's t
distribution for VaR calculations (for a single asset) as follows:

18 It is also possible to estimate the degrees of freedom parameter using maximum likelihood techniques.
19 This is analogous to the way in which we adapt the standard normal distribution. The standard normal has variance and standard deviation of one. When we use data with an observed standard deviation different from one, we standardise by dividing by the observed standard deviation.


(a) Select the degrees of freedom parameter by matching it to the sample kurtosis such that:

(Raw) Kurtosis = 3(ν – 2)/(ν – 4).    (III.A.3.5)

(b) The empirical variance should be scaled by:

(ν – 2)/ν.    (III.A.3.6)

(c) Select the appropriate critical point from the t distribution, based on the
desired level of probability (e.g. 0.01) and the degrees of freedom
selected in (a).
(d) Proceed with VaR calculation using, for instance, the analytical method
– but the MC VaR model could also be applied.

Example III.A.3.5: VaR with Student’s t


For purposes of comparison we use the USD/JPY data described in Section III.A.3.2. The
returns have empirical excess kurtosis of 6.241, equivalent to raw kurtosis of 9.241. Using equation
(III.A.3.5) and solving for ν gives a value of 4.961. The empirical variance is scaled using equation
(III.A.3.6) to give adjusted daily variance of:

0.00718² × (4.961 – 2)/4.961 = 3.07693 × 10⁻⁵,

being equivalent to daily volatility of 0.005547. The critical value of the t distribution can be
found in Excel using the TINV function.20 At the 99% level of confidence and with 5 degrees of
freedom, the critical value of the t distribution is 3.36 (the comparable critical value of the normal
distribution is 2.33).

We use this information to calculate VaR using the analytical method for a position held long
$1m. We choose a 10-day holding period and ignore expected returns. We compare VaR for the
Student’s t with VaR from the more familiar normal distribution. Note that the square root of
time rule is appropriate here as returns are assumed i.i.d.:

x Student's t VaR = 3.36 × √10 × 0.005547 × $1,000,000 = $58,938

x Normal VaR = 2.33 × √10 × 0.00718 × $1,000,000 = $52,903
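
The same calculation can be carried out in Python rather than Excel, as in the sketch below; small differences from the figures above arise only because scipy does not round the degrees of freedom to an integer.

import numpy as np
from scipy.stats import t, norm

excess_kurtosis = 6.241          # sample excess kurtosis of daily USD/JPY returns
daily_std = 0.00718              # sample daily standard deviation
position, horizon, alpha = 1_000_000, 10, 0.01

# (a) match the degrees of freedom to the raw kurtosis: 3(nu - 2)/(nu - 4) = kurtosis
raw_kurtosis = excess_kurtosis + 3
nu = (4 * raw_kurtosis - 6) / (raw_kurtosis - 3)            # about 4.961

# (b) scale the empirical variance by (nu - 2)/nu
scaled_std = daily_std * np.sqrt((nu - 2) / nu)             # about 0.005547

# (c) critical values; scipy accepts non-integer degrees of freedom
t_crit = -t.ppf(alpha, nu)                                  # about 3.38 (the text rounds nu to 5, giving 3.36)
z_crit = -norm.ppf(alpha)                                   # about 2.33

# (d) analytic VaR with the square root of time rule (returns assumed i.i.d.)
var_t = t_crit * np.sqrt(horizon) * scaled_std * position
var_normal = z_crit * np.sqrt(horizon) * daily_std * position
print(f"Student's t VaR: ${var_t:,.0f}   Normal VaR: ${var_normal:,.0f}")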

20 Note that the Excel TINV function assumes a two-tailed probability distribution, whereas VaR calculations generally apply a one-tailed probability distribution. When using TINV, take care to double the probability parameter to get the appropriate value for a one-tailed distribution. If you are interested in a probability of 0.01 (i.e. one-tailed 99% confidence), then you should instead use a probability of 0.02. Note also that the Excel TINV function assumes that ν is an integer. In this example we use ‘=TINV(0.02,5)’. If we instead used ‘=TINV(0.02,4.961)’ Excel would round 4.961 down to 4, returning a value of 3.75.


The Student’s t VaR estimate is 11% higher than that calculated using the normal distribution.
This example illustrates how the heavier-tailed Student’s t distribution will tend to give larger VaR
estimates than the normal. Applying the Student’s t approach will tend to result in greater capital
requirements over time (or reductions in the amount of risk taken).

Potentially these larger VaR estimates afford financial institutions greater protection from
extreme variations as they will respond either by reducing risk or increasing capital. Given the
existence of volatility clustering, however, even an 11% increase in VaR will not be sufficient
protection in times of market crisis. Returning to Figure III.A.3.4, we see that in 1998,
conditional volatility more than tripled, reaching a peak of 35% p.a.!

III.A.3.5.2 VaR with Extreme Value Theory


Some VaR applications focus on the ‘tail behaviour’ of financial returns distributions. For
instance, economic capital is often estimated at an extremely high percentile, such as 99.97% for
a AA company (see Section III.0.2.3). For another example, when conducting stress tests the
possible behaviour of portfolio returns is pushed to an extreme limit (see Chapter III.A.4).

Several large banks have been developing risk capital models based on extreme value theory
(EVT). In these models, it is not VaR but an associated risk metric called conditional VaR – also
called ‘expected shortfall’, ‘expected tail loss’ or ‘tail VaR’ by various authors – that is estimated.
Whereas VaR is the cut-off point, above which the largest losses occur, the associated conditional
VaR is the average of these largest losses. For instance, the 1% VaR based on 1000 P&Ls is the
10th largest loss; the 1% conditional VaR is the average of these 10 largest losses.

Though not admissible under the Basel rules for the computation of regulatory capital,
conditional VaR measures are commonly favoured for the computation of economic capital.
That is because conditional VaR is sub-additive, i.e. the sum of the component conditional VaRs
can never be less than the total conditional VaR (see Section III.A.3.6). Of course, any risk metric
that is not sub-additive is not a good risk metric. The incentive for holding portfolios just
dissolves if it is better to assess risk on individual positions and simply add up the total! When
VaR is modelled using normal distributions for the risk factor returns VaR is always sub-additive.
However, when the VaR model is extended to allow for heavy-tailed risk factor distributions then
sub-additivity, in theory, need not hold. However, in practice market VaR is almost always found
to be sub-additive; by contrast, there can be real problems when VaR is applied to credit risk, due
to lack of sub-additivity.


Extreme value distributions, as their name suggests, examine the distribution of the extreme
values of a random variable, which is typically assumed to be i.i.d. These extreme returns (or
exceptional losses) are extracted from the data and an extreme value distribution may be fitted to
these values. There are two approaches. Either one models the maximal and minimal values in a
sample using the generalised extreme value distribution (GEV) or one models the excesses over a pre-
defined threshold using the generalised Pareto distribution (GPD).

For instance, suppose the underlying time series consists of hourly observations on a portfolio
P&L. On each day, we record the maximal loss and subsequently fit these daily data using a
GEV distribution. Alternatively, we forget about the time spacing of the data, but record only
those P&Ls that exceed a certain loss threshold, regardless of when this happens. Then we
would fit the data using the GPD.

EVT models are often fitted using some form of maximum likelihood technique so the data
requirements can be substantial. Very large data sets are crucial for EVT models as we are
concerned only with the tail of the distribution and we require many extreme data points for
robust estimation of parameters. The assumption that the observations are i.i.d. over such long
sample periods is not very realistic in financial markets except, perhaps, when they have already
been standardised using a volatility clustering model, as explained above. Nevertheless many
banks do apply GEV distributions for intra-day VaR estimates. The GPD may be used to
estimate conditional VaR; in fact there is a simple formula for this once the density has been
fitted. However, the conditional VaR is quite sensitive to the choice of threshold loss, which
must first be defined. For instance, the threshold could be set at the VaR that is estimated using a
standard VaR model and then the GPD density can be fitted to obtain a more precise estimate of
the conditional VaR.

III.A.3.5.3 VaR with Normal Mixtures


A normal mixture density function is a sum of normal density functions. For example, a mixture
of only two normal densities f1(x) = φ(x; µ1, σ1²) and f2(x) = φ(x; µ2, σ2²) is the density
function:21

g(x) = p f1(x) + (1 – p) f2(x).

The parameter p can be thought of as the probability that observation x is governed by density
f1(x). In effect there are two regimes for x, one where x has mean µ1 and variance σ1² and another

21 That is, g(x) = p (2πσ1²)^(–1/2) exp(–½(x – µ1)²/σ1²) + (1 – p) (2πσ2²)^(–1/2) exp(–½(x – µ2)²/σ2²). Note that there is only one random variable so it would be misleading to call the densities f1 and f2 ‘independent’.


where x has mean µ2 and variance σ2². In the case where x denotes the return on a portfolio, one
can naturally identify these two regimes as a 'high volatility' (or even, for an equity portfolio, a
'crash market') regime with a low probability and the rest of the time a regime that governs
ordinary, everyday market circumstances.

Consider a mixture of two zero-mean normal components, i.e. µ1 = µ2 = 0. In this case the
variance is just

NM(2) variance = p σ1² + (1 – p) σ2².    (III.A.3.7)

The skewness is zero and the kurtosis is given by:

NM(2) kurtosis = 3 [p σ1⁴ + (1 – p) σ2⁴] / [p σ1² + (1 – p) σ2²]².    (III.A.3.8)

The kurtosis is always greater than 3, so the mixture of two zero-mean normal densities always
has a higher peak and heavier tails than the normal density of the same variance. For instance,
Figure III.A.3.5 shows four densities:
x three zero-mean normal densities with volatility 5%, 10% (shown in grey) and 7.906%
(shown in red); and
x a normal mixture density, shown in black, which is a mixture of the first two normal
densities with probability weight of 0.5 on each of the grey normal densities.

From formula (III.A.3.7), the variance of this mixture is 0.5 × 5² + 0.5 × 10² = 62.5. Since 7.906
= √62.5, the mixture has the same volatility as the red normal curve. However, it has kurtosis of
4.08 (substitute p = 0.5, σ1 = 5 and σ2 = 10 into formula (III.A.3.8)). In other words, it has an
excess kurtosis of 1.08, which is significantly greater than zero, the excess kurtosis of the
'equivalent' normal (red) curve.

More generally, taking several components of different means and variances in the mixture can
lead to almost any shape for the density. Maclachlan and Peel (2000) provide pictures of many
interesting examples. The parameters of a normal mixture density function can be estimated
using historical data. In this case the best approach is to employ the expectation–maximisation
(EM) algorithm. The idea, as always, is to choose the parameters to maximise the likelihood of
the data. But the EM algorithm differs from standard MLE in that the EM algorithm allows for
some ‘hidden’ variables in the data that we cannot observe, so that we can only maximise the


expected value of the likelihood function and not the likelihood function itself.22 Alternatively,
the parameters of a simple (e.g. two-component) normal mixture can be chosen in a scenario
analysis of portfolio risk as, for instance, in Example III.A.3.7 below.

Figure III.A.3.5: A normal mixture density
[Figure: the two normal densities f1(x) and f2(x), the normal mixture density, and the 'equivalent' normal density of the same variance.]

There is no explicit formula for estimating VaR under the assumption that portfolio returns (or,
equivalently, P&L) follow a normal mixture density. However, there is an implicit formula, so the
problem is akin to that of implying volatility from the market price of an option (see Section
II.G.1). That is, we can apply the Excel ‘Goal Seek’ (see Section II.G.1.4) or ‘Solver’ (see Section
II.G.2.4) methods to back out the normal mixture VaR. To see how, suppose for the moment
that we have a normal distribution for the P&L of a linear portfolio. Then the analytic formula
for VaR follows directly from the definition of VaR. That is, by definition,
Prob(P&L < –VaRα) = α.

So if P&L has a normal distribution with mean µ and standard deviation σ, we have

Prob(Z < [–VaRα – µ]/σ) = α,

where Z is a standard normal variable. Hence [–VaRα – µ]/σ = –Zα, the α critical value of Z, and
rearranging this gives our analytic formula for normal VaR:

VaRα = Zα σ – µ.    (III.A.3.9)

22 The EM algorithm is far beyond the scope of the PRM syllabus, but interested readers should consult the book by

Maclachlan and Peel (2000) which deals almost exclusively with this approach.


Using exactly the same type of argument we can derive the normal mixture VaR, but this time it
is given by an implicit formula. For instance, with only two zero-mean components in the
mixture (so there are only three parameters p, σ1 and σ2, the standard deviations now being those
of h-day P&L in each regime) we know everything except VaRα in the identity:

p Prob(Z < –VaRα/σ1) + (1 – p) Prob(Z < –VaRα/σ2) = α.    (III.A.3.10)

Hence the h-day VaRα can be 'backed out' from (III.A.3.10) using an iterative approximation
method such as Goal Seek or Solver.
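
As a concrete (and purely hypothetical) illustration, the sketch below backs the VaR out of (III.A.3.10) with a one-dimensional root finder from scipy in place of Goal Seek or Solver; the regime weight and the two h-day P&L standard deviations are made-up inputs, expressed in $m.

from scipy.optimize import brentq
from scipy.stats import norm

def normal_mixture_var(p, sigma1, sigma2, alpha):
    """Back VaR out of p*Prob(Z < -VaR/sigma1) + (1-p)*Prob(Z < -VaR/sigma2) = alpha."""
    def excess_prob(var):
        return p * norm.cdf(-var / sigma1) + (1 - p) * norm.cdf(-var / sigma2) - alpha
    # The root lies between zero and a generous multiple of the larger standard deviation
    upper = 10 * max(sigma1, sigma2)
    return brentq(excess_prob, 0.0, upper)

# Hypothetical inputs: a 1% chance of a high-volatility regime
print(normal_mixture_var(p=0.01, sigma1=0.4, sigma2=0.08, alpha=0.01))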

Example III.A.3.6: Scenario VaR using normal mixtures


A risk manager assumes there is a small chance, say 1 in 100, that the market will crash, in which
case the expected portfolio return over a 10-day period is –50% with an (annualised) volatility
around this mean return of 100%. However, at the moment in ordinary market circumstances
there are steady positive returns of 10% per annum with a volatility of 20%. His portfolio is
currently valued at $2m. What is his 10-day VaR?

To answer this we extend (III.A.3.10) to the non-zero-mean case, and rephrase it in terms of the
means and standard deviations of returns in the two regimes, rather than P&L. This gives:

p Prob(Z < [–VaRα – µ1P]/(σ1P)) + (1 – p) Prob(Z < [–VaRα – µ2P]/(σ2P)) = α,    (III.A.3.11)

where p is the probability of regime 1 (i.e. 1/100 = 0.01), µ1 is the 10-day return in regime 1 (i.e.
–0.5), σ1 is the 10-day standard deviation of returns in regime 1 (i.e. 1/√25 = 0.2), µ2 is the 10-day return in
regime 2 (i.e. 0.1/25 = 0.004), σ2 is the 10-day standard deviation of returns in regime 2 (i.e. 0.2/√25 = 0.04)
and P is the current portfolio value ($2m).

Using the Excel spreadsheet NMVaR.xls with Solver (or Goal Seek)23 applied to cell B21 each
time we change the significance level, we obtain the ‘NM VaR’ figures in the first row of the
Table III.A.3.5.

Table III.A.3.5: NM VaR vs. normal VaR


Significance level 10% 5% 1%
NM VaR ($) 157,480 500,003 1,012,632
Equivalent normal VaR ($) 281,845 335,437 435,965
Ordinary normal VaR ($) 94,524 123,588 178,107

23 To apply Goal Seek, click Tools – Goal Seek – ‘Set cell B21 to value 0 by changing cell B18’.


The two 'normal VaR' figures are calculated using equation (III.A.3.9), or equivalently

VaRα = [Zα σ – µ]P,

where µ and σ are the mean and standard deviation of the portfolio return over the holding period. The
'ordinary' normal VaR figures are computed using the second (more likely) distribution of 10-day
mean and standard deviation of:

µ = 0.004 (i.e. a 10% annual return),
σ = 0.04 (i.e. a 20% annual volatility).

That is, we ignore the possibility of a market crash in the 'ordinary' normal VaR. For the
'equivalent' normal VaR we use (III.A.3.7) to obtain an 'equivalent' standard deviation – and
similarly the equivalent mean is pµ1 + (1 – p)µ2.

These adjust the ‘ordinary’ market circumstances mean and standard deviation to take account of
the possibility of a crash, but after that the VaR is computed using the normal assumption for
portfolio returns.

From the results in Table III.A.3.5 it is clear that ignoring the possibility of a crash can seriously
underestimate the VaR. Even if one were always to assume a normal distribution, the VaR will be
almost three times larger when the standard deviation and mean are adjusted to account for the
possibility of a crash. It is the relationship between the NM VaR and the ‘equivalent’ normal VaR
that is really interesting. Our example exhibits some typical features of NM VaR:
x For low significance levels (e.g. 10% or 20%), the normal assumption can seriously
overestimate VaR (in this example, it was about twice the size of the NM VaR).
x For higher significance levels (e.g. 5% or 1%), the normal assumption can seriously
underestimate VaR (in this example, it was about half the size of the NM VaR).

The significance level at which the NM VaR becomes greater than the normal VaR based on an
equivalent volatility/mean depends on the degree of excess kurtosis in the data. When the excess
kurtosis is relatively small it may be that the 5% (or even 1%) VaR is actually smaller under the
NM assumption. In fact, when the parameters are estimated using actual historical observations
on the portfolio returns it is common to find that the normal mixture VaR is less than the normal
VaR at the 5%, and even at the 1% level.

Finally, it should be noted that, although we have not described the generalisation of the MC
VaR model to the normal mixture case, this is a simple (one- or two-line) extension to the
simulation code. We outline the method for the two-component zero-mean case:


x Define two risk factor covariance matrices V1 and V2 and associated probability
weights (p, 1 – p) in the mixture (either estimated from historical data using the EM
algorithm, or assumed in a scenario analysis).
x Break each of the 10,000 or so simulations into two steps: (a) draw from a Bernoulli
variable with probability p; (b) if the result is ‘success’ use V1 to generate the correlated
risk factors in the simulation, else use V2.

In summary, the normality assumption is not necessary for the analytic and Monte Carlo VaR
methodologies. This section has shown how the analytic and MC VaR methods can be used with
a normal mixture distribution for the portfolio returns. Such a distribution is better able to capture
the skewness and excess kurtosis that we commonly observe in portfolios of most types of
financial assets. Hence, if the parameters of the normal mixture distribution are estimated from
historical data, the resulting VaR estimate will reflect the actual properties of the data more
accurately than the normal VaR. Another very useful application of normal mixture VaR is to
probabilistic scenario analysis, where the portfolio returns are generated by a high-volatility (or
‘crash’) component with a low probability and, the rest of the time, by another component that
applies to the ordinary market circumstances.

III.A.3.6 Decomposition of VaR


As explained in Chapter III.0, VaR models form the basis of internal economic capital allocation
and limit setting. Hence, firms need to aggregate VaR over different risk types and over different
business activities and, likewise, to disaggregate VaR into different components. Disaggregation
of risk is used in risk management for setting limits, assessing new investments, hedging and
performance measurement. It allows risk managers to understand the drivers of risk in their
portfolio.

A number of different rules for disaggregating risk are considered in this section. Their common
theme is that each rule is based on the analytic VaR formula; that is, the rule is derived using the
rules for the variance operator (see Section II.E.3.4). However, it is common practice, for
instance in the allocation of economic capital, to apply these rules to all VaR estimates,
irrespective of the model used to compute them. But not all these ‘rules’ for VaR decomposition
are appropriate for historical or MC VaR estimates. Here the VaR corresponds to a percentile,
but percentiles do not satisfy simple rules, like the variance operator. Hence the usefulness of
each rule varies, depending on the intended application.


III.A.3.6.1 Stand-alone Capital


Suppose a line manager operating on a VaR-based risk limit wants to assign separate VaR limits
to the equity and foreign exchange desks so that aggregate losses only exceed the aggregate VaR
limit an appropriately small proportion of the time. But since risk limits do not correspond to real
capital, it is not necessary for the two limits to add up to his overall VaR limit. On the contrary,
in theory it could even be that the VaR limit for the equity desk, say, exceeds the overall risk limit!
We shall see why, and how, in this section.

In Example III.A.3.3 we considered a simple portfolio that has been mapped to two risk factors,
an equity index and an exchange rate. The total 1% 10-day VaR due to both risk factors was
estimated as $319,643. However, the VaR due to equity risk alone was:
Equity VaR = 2.33 × (10-day standard deviation of FTSE returns) × $3m
           = 2.33 × 0.15 × √(10/250) × 3 = $209,700.

Similarly,

FX VaR = 2.33 × 0.2 × √(10/250) × 2 = $186,400.

Hence,

Equity VaR + FX VaR = $396,100 > $319,643 = Total VaR.

In the normal analytic VaR model VaR follows the same 'rules' as the standard deviation
operator.24 In this case it is easy to show that VaR is 'sub-additive' in the sense that:

Total VaR ≤ Sum of component VaRs,

with equality if, and only if, all the correlations in the covariance matrix V are one.

We shall state the complete rule for the decomposition into two components in (III.A.3.12) below.
It shows that the total VaR is only equal to the sum of the component VaRs if there is perfect
correlation between the components. But in the above example the risk factors had a correlation
of 0.3, which is much less than one. So the total VaR was much less than the sum of the two
component VaRs. In fact, had the correlation been large and negative, the total VaR might
actually have been less than either of the component VaRs.

The fact that VaR aggregates take account of correlations, and that these are typically far less than
one, is one of the reasons why banks favour VaR over ‘traditional’ risk measures, such as
duration for a bond portfolio, or the Greeks for an options portfolio, or beta for an equity
portfolio. These traditional risk measures ignore the benefits of diversification that apply in a

24But only when the risk factor returns are assumed to be normal: as mentioned already in Section III.A.3.5.2, if risk
factor returns are heavy-tailed then VaR need not be sub-additive.


portfolio exposed to multiple risk factors. In contrast, VaR accounts not only for the risks due to
the risk factors themselves, but also for the less than perfect correlation of risk factors when
aggregating the risk. Also, VaR is a risk measure with consistent dimensions across markets and
therefore allows greater consistency in setting policy across products and in evaluating the
relationship of risk and return.

III.A.3.6.1.1 Decomposing non-linear portfolios


Decomposition of risk becomes more complex for portfolios with non-linearities. VaR measures
for such portfolios are much more likely to violate the criterion of sub-additivity. For this reason
VaR is criticised as being a poor ‘risk metric’. It is possible to construct extreme cases where
individual portfolios containing, say, short, well out-of-the-money digital options have a very low
or even zero VaR at, say, the 95th percentile. However, if two similar portfolios are combined
together, the diversified portfolio has higher VaR than each of the components. In this
anomalous situation, stand-alone capital is not appropriate for limit setting.25 Indeed, a strong
argument could be made here to avoid VaR and to use conditional VaR instead, for limit-setting
purposes.

Of course typically, non-linear portfolios will be analysed using the simulation approaches.
Disaggregation of VaR (or conditional VaR) can be undertaken in the context of simulation by
restricting the risk factor scenarios in different ways. For instance, total VaR can be disaggregated
into an equity VaR component, corresponding to a lower percentile of a simulated portfolio
returns distribution with only the equity risk factors changing, and an FX VaR component,
corresponding to a lower percentile of a simulated portfolio returns distribution with only the
foreign exchange rates changing. Since percentiles do not obey simple ‘rules’ (except that a
percentile is invariant under a monotonic transformation of variables) there is, in this case, no
simple rule that relates the sum of equity VaR and FX VaR to the total VaR.

III.A.3.6.1.2 Specific vs. Systematic Risk


Another way of disaggregating risk is to decompose VaR into its systematic and specific
components. That is, those risks that apply to the market/factor generally and those that arise
from lack of diversification or deviations from the market portfolio. A VaR model can be used to
assess the ‘specific risks’ of a portfolio – i.e. the risks that are not captured by the risk factor
mapping.

25 Each separate portfolio could be within limit yet the business overall could be in breach of limits when considered

on a diversified basis.


Example III.A.3.7: Specific VaR


To obtain the beta of 1.5 for our portfolio of UK equities in Example III.A.3.3 we re-created an
artificial price history of the portfolio using the current portfolio weights, and regressed the time
series of returns to this portfolio on the FTSE index returns. This gave the Excel output shown
in Table III.A.3.6.
Table III.A.3.6: A CAPM regression

Regression statistics:  R Square = 0.7284;  Standard Error = 0.00802
Coefficients:  Intercept = –0.00016 (standard error 0.00045, t stat –0.34991);  FTSE = 1.50312 (standard error 0.02876, t stat 52.26426)

The 'standard error' is the standard deviation of the model residuals (see Section II.F.6). Since the
returns are observed daily, the 10-day standard deviation of the residuals in this model is

0.00802 × √10 = 0.02536.

Hence, the 1% 10-day specific VaR is 2.33 × 0.02536 × $2m = $118,184.

Of course, having re-created an artificial price history of the portfolio using the current portfolio
weights, we did not need to estimate a factor model in order to calculate the total VaR. We could
have simply obtained the daily standard deviation of the ‘re-created’ portfolio returns and used
formula (III.A.3.4). In this case, we would have a daily standard deviation 0.01636, giving a direct
estimate of 1% 10-day total VaR as:
Total VaR = 2.33 × 0.01636 × √10 × $2m = $241,117.

However, the 1% 10-day systematic VaR (in this case, the equity risk factor VaR) is $209,700.
Adding this to the specific VaR of $118,184 we thus obtain:
Systematic VaR + Specific VaR = $327,884.

Clearly, simply adding systematic VaR and specific VaR is a very conservative way to estimate the
total VaR, that is, direct estimation of total VaR will normally give a result that is considerably lower
than the sum of systematic and specific VaR.

III.A.3.6.1.3 Sub-additivity
To understand why this is so, we do some simple algebra showing that the analytic VaR (for
linear positions) obeys the following sub-additive rule for any decomposition of total VaR into two
components, VaR1 and VaR2, where the component risks have correlation ρ:

Total VaR² = VaR1² + VaR2² + 2ρ VaR1 VaR2.    (III.A.3.12)


Note that expression (III.A.3.12) simplifies when the correlation is one or zero:

If ρ = 1: Total VaR² = VaR1² + VaR2² + 2 VaR1 VaR2 = (VaR1 + VaR2)², so Total VaR = VaR1 + VaR2.
If ρ = 0: Total VaR² = VaR1² + VaR2², so Total VaR = √(VaR1² + VaR2²).    (III.A.3.13)

Now recall that if the factor model is capturing most of the variation in the portfolio then
specific risks and systematic risks should be uncorrelated (see Section II.F.2). In that case, the
Total VaR will be closer to the square root of the sum of the squared component VaRs than to
the simple sum of the two VaRs. In other words, simply adding specific VaR and systematic VaR
is not a good way to estimate Total VaR if the systematic and specific components are (more or
less) uncorrelated.
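
A quick numerical check of rule (III.A.3.12), using the figures from Examples III.A.3.3 and III.A.3.7, is sketched below in Python.

import numpy as np

def total_var(var1, var2, rho):
    """Aggregate two component VaRs using rule (III.A.3.12)."""
    return np.sqrt(var1 ** 2 + var2 ** 2 + 2 * rho * var1 * var2)

# Equity and FX risk factor VaRs from Example III.A.3.3, risk factor correlation 0.3
print(total_var(209_700, 186_400, 0.3))     # about 319,643, well below the simple sum 396,100

# Systematic and specific VaRs from Example III.A.3.7, assumed uncorrelated
print(total_var(209_700, 118_184, 0.0))     # about 240,711, close to the direct estimate 241,117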

When disaggregating VaR into different components, it is sometimes assumed that all
component correlations lie between zero and one. In this case the two ways of calculating total
VaR (as a straightforward sum of VaRs, or as the square root of the sum of the squared VaRs)
provide approximate upper and lower bounds for the total VaR, respectively.

III.A.3.6.2 Incremental VaR


Incremental VaR (IVaR) is a measure of how portfolio risk changes if the portfolio itself is
changed in some way. It is ideal for assessing the effect of a hedge or a new investment decision
on a trader’s VaR limit, or for assessing how total business risk would be affected by the
sale/purchase of a business unit. There are two ways of proceeding:

(a) The before and after approach. Here we measure the VaR under the proposed change, compare it
to the current VaR and take the difference. This is the best approach if the proposed change to
the portfolio is significant.

(b) The approximation approach. We can find an approximate IVaR by first calculating the DelVaR
vector (see below). This vector is then multiplied by another vector containing the proposed
changes in positions for each asset/risk factor. Like all approximations based on partial
derivatives, it is suitable only for examining small changes to the portfolio composition. Under
the analytical method26 the DelVar vector is calculated as follows:

26 With simulation methods a DelVaR vector could also be constructed, but with considerably greater difficulty. Each

element of the vector would be determined by assessing the change in total VaR (according to simulation) for a one-
unit change in the portfolio holdings of the relevant asset.


DelVaR = V p Zα / (p′Vp)^0.5,    (III.A.3.14)

where V is the covariance matrix, p is the position vector and Zα is the relevant critical value of
the normal distribution. Note that the denominator of this expression is simply the standard
deviation of the portfolio’s P&L. The DelVaR vector will contain an element for each asset/risk
factor in the covariance matrix (the first element will contain information relating to the risk of
the first asset/risk factor, and so forth).

Example III.A.3.8 Approximate IVaR


We continue with the example first presented in Example III.A.3.3. The portfolio consists of an
exposure to the FTSE index and exposure to the exchange rate since the investor is US$ based.
Note that the standard deviation of the portfolio’s P&L over a one-day horizon is equal to
$0.043382m (being √0.001882). Hence, for the base position where the portfolio is unchanged,
equation (III.A.3.14) becomes:

Vp = (0.00009, 0.000036; 0.000036, 0.00016) (3, 2)′ = (0.000342, 0.000428)′,

DelVaR = (0.000342 × 2.33 / 0.043382, 0.000428 × 2.33 / 0.043382)′ = (0.018368, 0.022987)′.

Now suppose that we consider hedging half of the currency exposure so that the exposure to the
exchange rate is reduced to only $1m, rather than the current $2m. We can construct a vector of
portfolio changes where the first element (the change in the exposure to FTSE) is equal to zero,
and the second element (the change in the exposure to currency) is –1. The incremental VaR can
be approximated as follows:
IVaR ≈ (0, –1) (0.018368, 0.022987)′ = –0.022987.

In other words, the hedging decision will reduce 1% one-day VaR by approximately $22,987.

As the proposed change to the portfolio is quite large in this case, the approximation method is
unlikely to be accurate and the before and after method would be preferable. With the before and
after method, we know from Example III.A.3.3 that the one-day VaR of the original position is
2.33 × √0.001882 $m = $101,080.


With the currency hedge, the 1% one-day VaR is $80,241 because


§ 90 36 ·§ 3 · § 3·
pcVp = 3 1 ¨¨ ¸¸¨¨ ¸¸ u 10 6 306 268 ¨¨ ¸¸ u 10 6 = 0.001186,
© 36 160 ¹© 1 ¹ © 1¹

and 2.33 × √0.001186 $m = $0.080241m. Hence, according to the exact method, the IVaR is a
reduction of 101,080 – 80,241 = $20,839, which is smaller than the reduction given by the approximation. In other
cases the actual IVaR will exceed the approximation, depending on the level of correlation
between assets.
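
To make the two calculations concrete, the following is a minimal Python/NumPy sketch reproducing Example
III.A.3.8 and the before-and-after comparison. It uses the one-day covariance matrix of risk-factor returns and
the positions (in $m) from Example III.A.3.3; the function names are illustrative only, not part of any standard
library.

import numpy as np

Z_ALPHA = 2.33                                   # critical value for 1% one-day VaR

V = np.array([[0.00009, 0.000036],               # one-day covariance matrix of risk-factor returns
              [0.000036, 0.00016]])
p = np.array([3.0, 2.0])                         # positions: $3m FTSE exposure, $2m currency exposure

def delvar(V, p, z=Z_ALPHA):
    # DelVaR vector, equation (III.A.3.14): z * Vp / (p'Vp)^0.5
    return z * V @ p / np.sqrt(p @ V @ p)

def analytic_var(V, p, z=Z_ALPHA):
    # 1% one-day VaR of a linear portfolio
    return z * np.sqrt(p @ V @ p)

dp = np.array([0.0, -1.0])                       # proposed change: hedge $1m of the currency exposure

ivar_approx = dp @ delvar(V, p)                  # approximate IVaR: about -0.022987 ($m)
ivar_exact = analytic_var(V, p + dp) - analytic_var(V, p)   # before-and-after IVaR: about -0.020839 ($m)

print(ivar_approx, ivar_exact)

Both figures are reductions in VaR (hence the negative sign), matching the approximate $22,987 and the exact
$20,839 quoted above.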

III.A.3.6.3 Marginal Capital


Sometimes referred to as component VaR (CVaR), this method of decomposition is useful for
gaining a better understanding of the drivers of risk within a portfolio. Sometimes it is also used for
performance measurement. Unlike the previous methods, marginal capital is additive; the sum of
each of the component VaRs will be equal to the total VaR.27

We take the DelVaR vector discussed above and multiply each element by the corresponding
position for each asset/risk factor. The result is a set of CVaRs for each asset. The sum of these
CVaRs will be equal to portfolio VaR. Since the analysis is performed using the DelVaR vector
of partial derivatives with respect to asset weights, it is only relevant for the current portfolio
weightings. Significant changes to the portfolio will change the risk contributions of the various
assets, necessitating new analysis based on the revised DelVaR vector.

Example III.A.3.9: Marginal capital


We continue to analyse the portfolio first introduced in Example III.A.3.3. The DelVaR vector,
calculated in Example III.A.3.8 is used here:
    DelVaR = ( 0.018368 )
             ( 0.022987 ).

The CVaR for equities is equal to the position ($3m) multiplied by 0.018368, or $55,104. The
CVaR for foreign exchange is equal to the position ($2m) multiplied by 0.022987, or $45,974.
Note that they sum to $101,078, which is approximately equal to $101,080, the 1% one-day VaR
(the error is due to rounding in the DelVaR vector).
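
A minimal sketch of this decomposition in Python/NumPy, using the covariance matrix and positions from the
earlier examples (variable names are illustrative only):

import numpy as np

Z_ALPHA = 2.33
V = np.array([[0.00009, 0.000036], [0.000036, 0.00016]])   # as in Example III.A.3.3
p = np.array([3.0, 2.0])                                    # $3m equity, $2m foreign exchange

delvar = Z_ALPHA * V @ p / np.sqrt(p @ V @ p)   # DelVaR vector, equation (III.A.3.14)
cvar = p * delvar                               # component VaRs: approximately [0.055104, 0.045975] $m
print(cvar.sum())                               # approximately 0.101080 $m, the total 1% one-day VaR
print(cvar / cvar.sum())                        # risk shares: roughly 55% equity, 45% foreign exchange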

CVaR is used to assign a proportion of the total risk that can be attributed to each component. For
instance, in this example the equity exposure is contributing 55,104/101,078 or around 55% of

27 This relationship is assured for the analytical method and if the portfolio is comprised of simple linear positions. It
may not hold for other cases.


the risk in the portfolio, while currency contributes around 45%. This technique can be used to
evaluate the risk of an asset (or asset class) in the context of a diversified portfolio. In contrast, the
stand-alone capital (in Section III.A.3.6.1) measures the risk of an asset class in isolation. Either
can be used for performance measurement purposes, although stand-alone capital is most
commonly used as the risk measure in a risk-adjusted performance measure context.

For instance, we would use stand-alone capital to compare the performance of the equity and
foreign exchange trading desks because it is generally argued that the diversification benefit
should not enter into the analysis. That is, the team managing foreign exchange risks presumably
has no say in the way the portfolio of businesses is constructed (whether or not there is an equity
desk, and the relative sizes of those businesses). They should therefore be neither rewarded nor
penalised for the diversification of the overall portfolio of businesses. Instead, performance
measures should be centred only on the issues over which they have direct control, being foreign
exchange risks in this example.

III.A.3.7 Principal Component Analysis


Principal component analysis (PCA) is a statistical tool that decomposes a positive semi-definite
matrix into its principal components.28 For instance, Section II.D.5.5 shows how PCA on an n × n
covariance matrix is used to write the portfolio variance as a sum of n positive terms that become
progressively smaller, with the first principal component explaining the largest part of the
variation in the system represented by the covariance matrix and so on. The nth principal
component explains the least variation – indeed, the variation captured by the lower-order
principal components is commonly ignored, because it just picks up the ‘noisy’ variation that we
would prefer to ignore.

PCA applied to a covariance matrix or a correlation matrix has many applications to financial risk
management. It is particularly effective in highly correlated systems such as term structures, i.e.
yield curves, or futures prices or even implied volatilities. Here only a few components are needed
to explain almost all of the variation. In this respect PCA is a useful technique for reducing
dimensions. By retaining only the first few principal components – enough, say, to explain 95% of
the variation observed historically – it cuts out the ‘noise’ for the subsequent analysis.

The other great advantage of PCA is that the principal components are uncorrelated with each other.
This means, for instance, that one can perform a simple scenario analysis on each of the main
principal components separately, leaving the other components fixed. The change translates into

28 Positive definiteness is discussed in detail in Section II.D.5.4.


a meaningful scenario, i.e. one that could be observed historically. For instance, when applying
PCA to a covariance matrix of zero-coupon yields of different maturities,29 a scenario for the
change in the first principal component generally will mimic an almost parallel shift in the entire
yield curve.

Without PCA one would need to take care that only ‘correlated’ scenarios are used – for instance,
if the scenario specifies that the one-month interest rate increases by 100 bps, one could not have
the three-month interest rate decreasing by 200 bps in the same yield curve scenario. Correlated
scenarios are generated using the Cholesky decomposition of a covariance matrix – see Section
II.D.4.2. This problem is compounded if simulations are extended more than a short time into
the future using this method, as it is difficult to prevent generating implausible shapes in the
resulting yield curves.
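
As an illustration of the Cholesky route mentioned above, the sketch below (Python/NumPy) draws jointly
plausible yield-curve scenarios from a small covariance matrix; the matrix values, the three-point curve and the
number of draws are invented purely for illustration.

import numpy as np

# Hypothetical daily covariance matrix (in bps^2) for three points on a yield curve
cov = np.array([[25.0, 20.0, 15.0],
                [20.0, 22.0, 18.0],
                [15.0, 18.0, 20.0]])

L = np.linalg.cholesky(cov)             # lower-triangular factor, so that cov = L L'

rng = np.random.default_rng(0)
z = rng.standard_normal((1000, 3))      # independent standard normal draws
scenarios = z @ L.T                     # 1000 correlated scenarios of daily yield changes (bps)

# Each row is a jointly consistent set of shocks; applying independent shocks
# factor by factor would ignore the correlation structure discussed above.
print(np.corrcoef(scenarios, rowvar=False).round(2))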

III.A.3.7.1 PCA in Action


From 4 January 1993 until 20 November 2003 we have daily closing New York Mercantile
Exchange (NYMEX) futures prices, from 2 to 12 months out, on West Texas Intermediate
(WTI) light sweet crude oil and on natural gas. Some of these are shown in Figure III.A.3.6. The
system of futures returns is clearly very highly correlated – indeed there are perhaps just one or
two independent sources of information driving the whole system of futures prices.

The correlation matrix of the returns to futures from 2 to 12 months out, based on the entire
sample from 4 January 1993 until 20 November 2003, is shown in Table III.A.3.7. This exhibits
the pattern that is typical of term structures, with correlation decreasing as the maturity difference
increases. The correlations are so high that almost all the variation can be attributed to two or
perhaps three components. An analysis of eigenvalues tells us how many components we need.
The eigenvalues λ1, λ2, …, λn of an n × n correlation matrix sum to n, and the amount of variation
captured by the ith principal component is λi/n. So in our example, with n = 11, if the largest
eigenvalue is, say, 10, then the first principal component explains 10/11 = 90.9% of the variation.

29 More precisely, to the covariance matrix of the daily (or weekly or monthly) changes in yields.


Figure III.A.3.6: A highly collinear system
[Line chart, ‘Crude Oil Futures Prices’: NYMEX crude oil futures prices for the 2-, 4-, 6-, 8-, 10- and 12-month
contracts over the sample period, January 1993 to November 2003.]
Table III.A.3.7: Correlation matrix of returns

m2 m3 m4 m5 m6 m7 m8 m9 m10 m11 m12


m2 1
m3 0.993 1
m4 0.984 0.997 1
m5 0.974 0.990 0.998 1
m6 0.963 0.981 0.993 0.998 1
m7 0.951 0.972 0.986 0.994 0.998 1
m8 0.939 0.962 0.978 0.988 0.995 0.999 1
m9 0.927 0.951 0.969 0.981 0.990 0.996 0.999 1
m10 0.915 0.940 0.960 0.974 0.984 0.991 0.996 0.999 1
m11 0.901 0.928 0.949 0.965 0.977 0.986 0.992 0.996 0.999 1
m12 0.888 0.916 0.938 0.956 0.969 0.979 0.987 0.992 0.996 0.999 1

Table III.A.3.8 shows the eigenvalues and the first three eigenvectors of this matrix (see Section
II.D.5 for an explanation of these). The first three eigenvalues show that the first principal
component explains 97.5% of the total variation, the second component explains a further 2.2%
(since 0.245/11 = 0.022) and the third component only explains a tiny amount, 0.145% of the
movements in the futures prices in our sample.


Table III.A.3.8: Eigenvectors and eigenvalues

Eigenvalues Future 1st Eigenvector 2nd Eigenvector 3rd Eigenvector


10.732 m2 0.293 0.537 0.623
0.245 m3 0.299 0.412 0.078
0.016 m4 0.302 0.280 -0.225
0.004 m5 0.304 0.161 -0.348
0.002 m6 0.305 0.054 -0.353
0.001 m7 0.305 -0.044 -0.276
0.000 m8 0.304 -0.132 -0.162
0.000 m9 0.303 -0.211 -0.028
0.000 m10 0.302 -0.282 0.115
0.000 m11 0.300 -0.350 0.247
0.000 m12 0.298 -0.411 0.364

The first three eigenvectors in Table III.A.3.8 are used to compute the first three principal
components as:
PC1 = 0.293 m2 + 0.299 m3 + ….. + 0.298 m12, (III.A.3.12a)
PC2 = 0.537 m2 + 0.412 m3 + ….. – 0.411 m12, (III.A.3.12b)
PC3 = 0.623 m2 + 0.078 m3 + ….. + 0.364 m12, (III.A.3.12c)

where m2, …, m12 denote the (normalised) returns to the futures of different maturities from 2
to 12 months out.

Since the eigenvectors are always orthogonal (see Section II.D.5.2), equations (III.A.3.12) can be
rewritten as:
m2 = 0.293 PC1 + 0.537 PC2 + 0.623 PC3,
m3 = 0.299 PC1 + 0.412 PC2 + 0.078 PC3,
..
..
m12 = 0.298 PC1 – 0.411 PC2 + 0.364 PC3.

This is known as the principal component representation of the system. Since the coefficients on PC1
are all approximately the same, when PC1 moves (holding the other components fixed) the term
structure of futures prices will shift in (almost) parallel fashion. For this reason PC1 is often
called the ‘trend component’ when PCA is applied to term structures. When PC2 increases the
term structure of futures prices will shift up at the short end but down at the long end – so PC2
is called the ‘tilt component’. And when PC3 increases the term structure shifts up at both ends
but – looking again at Table III.A.3.8 – the medium term futures prices will go down. Hence PC3
is called the ‘curvature’ component.
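
The arithmetic behind these statements can be reproduced directly from the figures in Table III.A.3.8. The
Python/NumPy sketch below computes the proportion of variation explained by each component and builds a
simple ‘trend’ scenario by shifting PC1; the size of the shift (two units of the normalised first component) is
chosen arbitrarily for illustration.

import numpy as np

eigenvalues = np.array([10.732, 0.245, 0.016, 0.004, 0.002, 0.001, 0, 0, 0, 0, 0])
explained = eigenvalues / eigenvalues.sum()     # lambda_i / n, since the 11 eigenvalues sum to n = 11
print(explained[:3])                            # approximately 97.5%, 2.2% and 0.145%, as quoted above

# Loadings of the first ('trend') eigenvector on m2, ..., m12, from Table III.A.3.8
w1 = np.array([0.293, 0.299, 0.302, 0.304, 0.305, 0.305,
               0.304, 0.303, 0.302, 0.300, 0.298])

delta_pc1 = 2.0                                 # illustrative move in PC1, other components held fixed
scenario = w1 * delta_pc1                       # change in each (normalised) futures return:
print(scenario.round(3))                        # an almost parallel shift across the whole curve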


In this example the curvature component is not important – it corresponds to much less than 1%
of the movements normally found in futures prices. But in other examples – interest rates, for
instance – the curvature component can be more important. And when PCA is applied to other
types of systems, such as equities or currencies, we normally need more than three components
to model the system with sufficient accuracy. Of course, there will be no intuitive interpretation
of the components as there is for term structures – because we cannot ‘order’ a system of
equities or currencies in any sensible way – although the first principal component will normally
capture the common trend (if the system is sufficiently correlated that there is a common trend!).

III.A.3.7.2 VaR with PCA


PCA is commonly used in the analytic VaR method for cash flows, and the MC simulation VaR
method for interest-rate options portfolios. Perhaps the most important input in both these
approaches is the covariance matrix of risk factor returns – in this case, the covariance matrix of
changes in a zero-coupon yield curve. Typically this yield curve will have more than 10 different
maturities, so the dimension of the risk factor space is large. But, as we discussed earlier, large-
dimensional covariance matrices are very difficult to estimate using GARCH models. And
EWMA matrices normally assume the same value for the smoothing constant for all the risk
factors. This may seem fine if the risk factors are just one yield curve, but for international fixed
income portfolios the risk factors consist of many yield curves.

In order to take proper account of (less than) perfect correlation between these risk factors when
estimating the total VaR, we need to use a covariance matrix of the whole system – i.e. all yields
of all maturities in all countries. Then in the EWMA matrix it is very unrealistic to apply the same
value for the smoothing constant for all the risk factors. And direct estimation of the matrix
using a multivariate GARCH model will be out of the question because the likelihood surface will
not be well defined.

The solution is to apply PCA to the entire system of risk factors, retaining enough components in
the principal component representation to explain most (say, 95%) of the variation. Then,
because they are uncorrelated, it is only the variance of each component that matters – their
covariances are zero.30 So one can treat each component separately, estimating and forecasting
only its variance using either GARCH or EWMA. Note that with this method we do not have to
use the same smoothing constant in each EWMA; and even if we do use the same smoothing
constant, the final risk factor covariance matrix will not have the same effective smoothing
constant for all risk factors, as happens in the RiskMetrics methodology. Nor do we need to

30 Note that it is only the unconditional covariances that are zero, because PCA is based on an unconditional
covariance matrix, which does not have time-varying covariances. Hence the application of a time-varying (i.e.
EWMA or GARCH) model to variances, whilst assuming correlations are constant, does entail a strong assumption.


constrain the GARCH models in any particular way, as in most multivariate GARCH models.
The large risk factor covariance matrix that we obtain in the end will always be positive semi-
definite. Full details are given in Alexander (2001).
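
The following Python/NumPy sketch outlines this procedure for a generic T × n matrix of risk-factor returns. It
is a simplified illustration of the approach described in this section (and in Alexander, 2001), not a full
implementation: the 95% threshold and the single EWMA smoothing constant are placeholders, and in practice a
different smoothing constant could be used for each component.

import numpy as np

def pca_ewma_covariance(returns, threshold=0.95, lam=0.94):
    # returns: T x n matrix of risk-factor returns (e.g., daily changes in yields)
    returns = returns - returns.mean(axis=0)
    cov = np.cov(returns, rowvar=False)                 # unconditional covariance of the whole system
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]                   # largest eigenvalue first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    # retain just enough components to explain the chosen fraction of the variation
    k = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), threshold)) + 1
    W = eigvecs[:, :k]                                  # retained eigenvectors (n x k)
    pcs = returns @ W                                   # principal component series (T x k)

    # EWMA variance forecast for each retained component (they are uncorrelated,
    # so no covariances between components are needed)
    pc_var = np.empty(k)
    for j in range(k):
        v = pcs[0, j] ** 2
        for r in pcs[1:, j]:
            v = lam * v + (1 - lam) * r ** 2
        pc_var[j] = v

    # forecast of the full risk-factor covariance matrix: W diag(pc_var) W',
    # which by construction is always positive semi-definite
    return W @ np.diag(pc_var) @ W.T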

III.A.3.8 Summary
The crucial question for any VaR model is whether the VaR estimate provides a good indication
of the ‘true’ risk in financial markets. Any deviation between the VaR estimate and this truth
represents a model risk. Such a risk is potentially disastrous since it could cause a financial
institution to have insufficient capital given the risks taken, making it more vulnerable than it
should be to insolvency. Concern about this kind of model risk is well justified because financial
markets are prone to periods of high volatility in which extreme market movements can occur.
Many analysts have noted the heavy tails of empirical distributions relative to the standard i.i.d.
normal assumptions. These are a symptom of the clusters of volatility that occur periodically.

Simple VaR models as explained in the previous chapter are all flawed to some extent; either their
assumptions are too simple and/or they suffer from lack of data. This chapter has explored
various methods for improving on VaR models so that they are more consistent with the
behaviour observed empirically in financial markets.

We have argued that the most crucial issue for analysts to address in this regard is volatility
clustering. It is now well established that volatility in financial markets exhibits elevated values
over certain periods before reverting to lower levels. The variation in volatility explains most of
the extreme moves observed in financial markets. We focused on models that incorporate the
volatility clustering idea (EWMA and GARCH) and considered their practical application to VaR
estimation. We also examined some alternative approaches to the issue of extreme returns that
do not incorporate volatility clustering (Student’s t, EVT and normal mixtures).

Of course, it should always be remembered that model risk can never entirely be eliminated. The
advanced models presented here have their own problems, and these should be well understood
by the user. Every model is a simplification of the reality it represents, and should be used with
caution. The pervasiveness of model risk is a key reason why stress tests (covered in the next
chapter) are useful.

This chapter also completed the discussion of VaR for market risk by examining two other, more
advanced topics: risk decomposition and principal component analysis. We have described the
ways in which total VaR can be ‘disaggregated’ into different components, for capital allocation
on a ‘stand-alone’ and on a ‘marginal’ basis, and how traders can use an incremental VaR analysis
to evaluate the effect of a proposed trade on their VaR limit. Finally, we have explained how
PCA can greatly simplify VaR estimation when there are multiple risk factors that are highly
correlated. For instance, the VaR for any portfolio of bonds or loans that are mapped to risk-
equivalent cash flows (so that the risk factors are a term structure of interest rates) should be
calculated using PCA. Similarly, commodity portfolios with exposures to futures prices at
different maturities are better represented using VaR with PCA than by a direct VaR analysis. Not
only does this simplify the VaR estimation itself, but the use of PCA greatly facilitates the
application of stress tests and scenario analysis to these types of portfolios, as we shall see in the
next chapter.

References
Alexander, C (2001) Market Models: A Guide to Financial Data Analysis. Chichester: Wiley.
Bauwens, L, Laurent, S, and Rombouts, J (2003) ‘Multivariate GARCH models: a survey’. To
appear in Journal of Applied Econometrics. Available at
http://www.core.ucl.ac.be/econometrics/Bauwens/Papers/papers.htm
Berkowitz, J, and O’Brien, J (2002) ‘How accurate are value-at-risk models at commercial banks?’
The Journal of Finance, LVII, pp. 1093–1111.
Bollerslev, T (1990) ‘Modelling the coherence in short-run nominal exchange rates: A
multivariate Generalised ARCH model’, Review of Economics and Statistics, 72, pp. 498–505.
Dowd, K (2002) Measuring Market Risk. Chichester: Wiley.
McLachlan, G, and Peel, D (2000) Finite Mixture Models. New York: Wiley.


III.A.4 Stress Testing


Barry Schachter1

III.A.4.1 Introduction
The previous chapters have introduced value-at-risk (VaR) as a risk measure and discussed
methods to deal with some of its shortcomings. It is common belief that VaR does not provide a
complete picture of portfolio risk and that stress testing is a means of addressing that (at least in
part). While most of this chapter is devoted to questions related to the construction of stress
tests, it is important to step back and formulate some ideas about why we need to have a chapter
on stress testing.

By now the need for stress testing of portfolios of financial instruments is taken for granted. We
find stress testing on every list of risk management best practices. Stress testing has long since
been solemnised by regulators. If this chapter were to be simply a review of stress-testing
techniques, then I could immediately proceed by providing a typology and discussion of the
various techniques that have been presented in the literature on the topic. I think, however, that
an appreciation of the usefulness of stress testing requires that I first step back a bit and ask a
couple of basic questions. The first question is by what historical process we have arrived at this
general acceptance of the need for stress testing. The second question is by what conceptual
framework we have concluded that stress testing is needed.

My definition of stress testing is as follows:


Stress testing (strěs' těst'īng) n.
1. A method for the quantification of potential future extreme, adverse outcomes in a portfolio of financial
instruments.
2. A palliative for the anxiety that is experienced by managers with significant risk exposures.

The key words in this definition are quantification, potential future, extreme and palliative. I am taking a
very broad view of stress testing here to include just about any method for attaching a monetary
figure (i.e., a quantity) to potential losses from extreme events that is not specifically VaR. It is
important to note that quantitative estimates are not necessarily statistical estimates. In fact, most
stress-testing methods, unlike VaR, are not statistical measures of risk. That is, no probabilities
are attached to and no confidence intervals estimated for the adverse outcomes. It is a great

1 Chief Risk Officer, Balyasny Asset Management, LLC. Thanks go to Steve Allen and Carl Batlin for reviewing and
commenting on a draft of this chapter. Any remaining errors or omissions are my own. Parts of this chapter draw on
Schachter (1998a, 1998b, 2000a, 2000b).


challenge in stress testing, whatever the method employed, from historical scenario analysis
(reliving a past market crisis) to factor push (shifting a market rate to see the impact on a
portfolio), to effectively gauge potential future losses. A palliative is something that reduces pain,
but does not cure an underlying problem. To a large extent, this is the role that stress tests have
served to date, which is somewhat unsatisfying. The risk management profession is still seeking
consensus on theoretically and practically consistent ways to integrate stress testing into decision
making. Uses vary widely from a simple informational tool to a formal element of limit setting
and capital allocation.

The last point above deserves emphasis. Even if we agree what stress testing is, we still need to
agree (or at least understand) its purpose. Stress testing must fit into some wider context, or have
some point. We should be able to establish how stress testing provides some incremental value
as either a means to achieving some goal (e.g., the optimal allocation of assets in a portfolio) – in
other words, it is a direct input in some decision function – or a way of measuring the movement
towards some goal – in other words, it is a benchmark or feedback mechanism for modifying or
improving a decision. If we cannot show either of these things, then we cannot know how to
make rational economic decisions which use stress-test results, and we cannot articulate why we
should be performing stress tests.

It is common belief that stress testing has incremental value because VaR is a sufficient statistic
for risk under only a restricted set of conditions. For instance, in a world where all financial asset
returns are jointly normally distributed and portfolios are passive, the magnitude of a loss of any
probability is a scalar multiple of the VaR. The loss on a portfolio corresponding to a one
standard deviation move in the portfolio value is equal to the 99% VaR divided by 2.33. To
know the VaR is to know everything about risk, and stress tests are not needed. However, if
asset returns are nonlinear functions of underlying market risks, or market risks are not
distributed according to some distribution that is stable under addition, then to know VaR is to
have only an incomplete picture of portfolio risk. Stating it more colloquially, while VaR models
provide a notion of a ‘bad’ portfolio return, they do not convey just how bad ‘bad’ can get. Even
if we agree that stress testing has incremental value, then the context for extracting that value in
decision making still needs to be shown. This chapter, focusing mainly on methods for stress
testing, does not settle this issue.
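
The scaling argument sketched in the preceding paragraph can be written out explicitly. Under the joint
normality assumption, with $\sigma_P$ the standard deviation of the portfolio P&L and $z_q$ the standard normal
critical value at confidence level $q$:

\mathrm{VaR}_{99\%} = 2.33\,\sigma_P
\quad\Longrightarrow\quad
\sigma_P = \frac{\mathrm{VaR}_{99\%}}{2.33},
\qquad
L_q = z_q\,\sigma_P = \frac{z_q}{2.33}\,\mathrm{VaR}_{99\%},

so that in that idealised world the loss at any other confidence level is a known multiple of the 99% VaR and no
separate stress measure would add information.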


III.A.4.2 Historical Context


In the classic 1967 film, The Graduate, Benjamin Braddock (Dustin Hoffman) returns home after
graduation from university, and receives the following sage (and frightening) career advice from a
family friend:

MR MCGUIRE: I just want to say one word to you. Just one word.
BEN: Yes sir.
MR MCGUIRE: Are you listening?
BEN: Yes I am.
MR MCGUIRE: Plastics.
BEN: Exactly how do you mean?
MR MCGUIRE: There’s a great future in plastics. Think about it. Will you think about it?

Perhaps Mr McGuire should have allowed for a second word, too. Derivatives. Or perhaps he
should have substituted ‘derivatives’ and omitted ‘plastics’ entirely. By the end of 1967, work was
already well advanced in the development of a preference-free option pricing formula, based on
the work of Sprenkle, Boness, Samuelson, and Thorp. Merton, Black and Scholes were
eventually to become household names (well, almost) after their revolutionary results were
published in 1973. It is widely acknowledged that financial markets were forever changed by
wide adoption of derivatives as a fundamental tool of risk allocation, as recognition of the
importance of arbitrage-free option pricing spread; see MacKenzie’s (2003) history.

Alas, every boon is also a bane. In the crash of 1987, portfolio insurance strategies, based on
option replication arguments, were given much of the blame. Following the various studies of
the crash, regulatory authorities prescribed additional regulation over the function of equity
markets, in the form of ‘circuit breakers’, to mitigate future risk. Against this backdrop the Basel
Committee on Banking Supervision, in 1988, ushered in a new era of international cooperation in
international financial regulation. And the Committee’s attention immediately turned to
derivatives, motivated by a concern for the soundness and stability of the international financial
system. Also in 1988, the Chicago Mercantile Exchange adopted a system for setting daily
margin requirements, the Standard Portfolio Analysis System (SPAN®), based on a type of stress
test discussed below, scenario analysis. SPAN has since been adopted by many derivatives
exchanges.

In the 1990s a series of corporate financial collapses occurred which were associated (in one way
or another) with derivatives usage. Some of the names are familiar, such as Orange County (see
Jorion, 1995), Procter and Gamble, Gibson Greetings (see Overdahl and Schachter, 1995) and


Barings. Others are less familiar, such as Metallgesellschaft (see Culp and Miller, 1999),
Sumitomo (Gilbert, 1996), Daiwa Bank, and SK Securities (see Gay et al., 1999).

As these events unfolded, regulatory authorities sought ways to enhance internal risk controls. In
the series of reports and rule makings that resulted, recommendations for best practice frequently
included a reference to the importance of the role of stress testing for identifying otherwise
hidden risks. It is these recommendations that have pushed stress testing to a relatively
prominent position among risk management tools.

Recommendation 6 of the G-30 report (Global Derivatives Study Group, 1993) states:
‘Dealers should regularly perform simulations to determine how their portfolios would perform
under stress conditions. Simulations of improbable market environments are important in risk
analysis because many assumptions that are valid for normal markets may no longer hold true in
abnormal markets. These simulations should reflect both historical events and future possibilities.
Stress scenarios should include not only abnormally large market swings but also periods of
prolonged inactivity. The tests should consider the effect of price changes on the mid-market
value of the portfolio, as well as changes in the assumptions about the adjustments to mid-market
(such as the impact that decreased liquidity would have on close-out costs). Dealers should
evaluate the results of stress tests and develop contingency plans accordingly.’

The US Comptroller of the Currency in Banking Circular 277 (Comptroller of the Currency,
1993) states:
‘National banks’ ... systems also should facilitate stress testing and enable management to assess
the potential impact of various changes in market factors on earnings and capital. The bank
should evaluate risk exposures under various scenarios that represent a broad range of potential
market movements and corresponding price behaviors and that consider historical and recent
market trends.’

The Basel Committee on Banking Supervision (1994) states:


‘Analysing stress situations, including combinations of market events that could affect the
banking organisation, is also an important aspect of risk measurement. Sound risk measurement
practices include identifying possible events or changes in market behaviour that could have
unfavourable effects on the institution and assessing the ability of the institution to withstand
them. These analyses should consider not only the likelihood of adverse events, reflecting their
probability, but also ‘worst-case’ scenarios. Ideally, such worst-case analysis should be conducted
on an institution-wide basis by taking into account the effect of unusual changes in prices or


volatilities, market illiquidity or the default of a large counterparty across both the derivatives and
cash trading portfolios and the loan and funding portfolios.’

The Derivatives Policy Group (1995) included stress testing among the necessary risk
measurement tools of derivatives dealers. Specifically, they state, ‘Mechanisms should be in place
to measure market risk consistent with established risk measurement guidelines. These
procedures should ... provide the information necessary to conduct “stress testing”.’

Concurrent with the implementation of the use of VaR models for the calculation of regulatory
market risk capital as permitted by the EU Capital Adequacy Directive, banks were required to
‘conduct a routine and rigorous programme of stress testing’ (Securities and Futures Authority,
1995).

The Basel Committee on Banking Supervision (1996) eventually made stress testing a prerequisite
for banks to be eligible for the internal models approach to market risk capital. The requirement
had previously been laid out by the Committee in an earlier release (1995). More specifically, the
Basel Committee on Banking Supervision (1995) states:
‘Banks that use the internal models approach for meeting market risk capital requirements must
have in place a rigorous and comprehensive stress testing program. Stress testing to identify
events or influences that could greatly impact banks is a key component of a bank’s assessment
of its capital position. Banks’ stress scenarios need to cover a range of factors that can create
extraordinary losses or gains in trading portfolios, or make the control of risk in those portfolios
very difficult. These factors include low-probability events in all major types of risks, including
the various components of market, credit, and operational risks. Stress scenarios need to shed
light on the impact of such events on positions that display both linear and non-linear price
characteristics (i.e. options and instruments that have options-like characteristics). Banks’ stress
tests should be both of a quantitative and qualitative nature, incorporating both market risk and
liquidity aspects of market disturbances. Quantitative criteria should identify plausible stress
scenarios to which banks could be exposed. Qualitative criteria should emphasize that two major
goals of stress testing are to evaluate the capacity of the bank’s capital to absorb potential large
losses and to identify steps the bank can take to reduce its risk and conserve capital. This
assessment is integral to setting and evaluating the bank’s management strategy and the results of
stress testing should be routinely communicated to senior management and, periodically, to the
bank’s board of directors. Banks should combine the use of supervisory stress scenarios with
stress tests developed by banks themselves to reflect their specific risk characteristics. Specifically,
supervisory authorities may ask banks to provide information on stress testing in three broad
areas, which are discussed in turn below.


‘(a) Supervisory scenarios requiring no simulations by the bank


‘Banks should have information on the largest losses experienced during the reporting period
available for supervisory review. This loss information could be compared to the level of capital
that results from a bank’s internal measurement system. For example, it could provide
supervisory authorities with a picture of how many days of peak day losses would have been
covered by a given value-at-risk estimate.
‘(b) Scenarios requiring a simulation by the bank
‘Banks should subject their portfolios to a series of simulated stress scenarios and provide
supervisory authorities with the results. These scenarios could include testing the current
portfolio against past periods of significant disturbance, for example, the 1987 equity crash, the
ERM crises of 1992 and 1993 or the fall in bond markets in the first quarter of 1994,
incorporating both the large price movements and the sharp reduction in liquidity associated with
these events. A second type of scenario would evaluate the sensitivity of the bank’s market risk
exposure to changes in the assumptions about volatilities and correlations. Applying this test
would require an evaluation of the historical range of variation for volatilities and correlations
and evaluation of the bank’s current positions against the extreme values of the historical range.
Due consideration should be given to the sharp variation that at times has occurred in a matter of
days in periods of significant market disturbance. The 1987 equity crash, the suspension of the
ERM, or the fall in bond markets in the first quarter of 1994, for example, all involved
correlations within risk factors approaching the extreme values of 1 or –1 for several days at the
height of the disturbance.
‘(c) Scenarios developed by the bank itself to capture the specific characteristics of its portfolio
‘In addition to the scenarios prescribed by supervisory authorities under (a) and (b) above, a bank
should also develop its own stress tests which it identifies as most adverse based on the
characteristics of its portfolio (e.g. problems in a key region of the world combined with a sharp
move in oil prices). Banks should provide supervisory authorities with a description of the
methodology used to identify and carry out the scenarios, as well as with a description of the
results derived from these scenarios.
‘The results should be reviewed periodically by senior management and should be reflected in the
policies and limits set by management and the board of directors. Moreover, if the testing reveals
particular vulnerability to a given set of circumstances, the national authorities would expect the
bank to take prompt steps to manage those risks appropriately (e.g. by hedging against that
outcome or reducing the size of its exposures).’

Another round of calls for stress testing, including recommendations for stress tests of
counterparty credit exposures, followed the LTCM and liquidity crises of 1998. See for example,
the report of the President’s Working Group on Financial Markets (1999). The Basel Committee


provided further impetus to the application of stress testing to credit in 2002 as part of the new
Basel Capital Accord (‘Basel II’), stating (Basel Committee on Banking Supervision, 2002): ‘Banks
adopting an [internal ratings based] approach to credit risk will be required to perform a
meaningfully conservative credit risk stress test of their own design with the aim of estimating the
extent to which their IRB capital requirements could increase during such a stress scenario.
Banks and supervisors will use the results of such stress tests as a means of ensuring that banks
hold a sufficient capital buffer under Pillar Two of the new Accord.’ Credit stress testing is
discussed in the Credit Risk section of the PRM Handbook.

III.A.4.3 Conceptual Context


As noted in the Introduction, the apparent presumption in all this emphasis on stress testing is
that using stress tests will lead to better decisions with respect to risk taking, either as an integral
component of the decision maker’s objective function, or as a tool measuring the distance from
some goal. What is not at all clear, however, is what are the mechanisms at work here.2 To put
this in the form of a question, if we develop a system of stress testing, what are we then actually
supposed to do with the stress-test results anyway?

Berkowitz (1999) takes up this question.3 It is presumed that stress tests are needed because
there is something lacking in the VaR derived from the firm’s risk model. By ‘risk model’
Berkowitz means the computational process by which VaR is ultimately derived. Stress tests, if
they address this lack, could be thought of as a way of enhancing the assessment of portfolio risk,
specifically by providing a way of correcting VaR. To achieve this correction, stress tests should
be incorporated into the risk model. He argues that there is no reason to support the modelling
of stress tests as a thing separate and apart from the normal process used to evaluate risk, for
then there is no internally consistent way to put stress results to use. (Of course, one could still
view and use stress-test results in any arbitrary way.)

It is easiest to see how this approach to employing stress tests would work by thinking about the
historical simulation approach to VaR estimation (see Chapter III.A.2). In (the basic application
of) historical simulation, each day of history is assigned an equal probability in the forecast
portfolio return distribution. A set of stress scenarios may then be added to the history, each
scenario weighing equally with each day of history. The resulting ‘corrected’ simulation is then
used in evaluating the usual desired percentile for the calculation of VaR, now the ‘corrected’

2 See Shaw (1997) for a critique of stress tests.


3 See also Zangari (1997a, 1997b) and Cherubini and Della Longa (1999), who apply the Black–Litterman/Bayesian
approach to incorporate stress scenarios into the VaR framework.


VaR. The risk model employed by Algorithmics, the enterprise-wide risk management software
company, follows a similar approach.
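
A minimal sketch of this ‘corrected’ historical-simulation VaR in Python/NumPy is given below. The simulated
history, the three hypothetical stress-scenario P&L figures and the equal weighting are all illustrative; the
argument only requires that some probability weight be assigned to each scenario.

import numpy as np

def corrected_hs_var(historical_pnl, stress_pnl, confidence=0.99):
    # Historical-simulation VaR with stress scenarios appended to the history.
    # Each historical day and each stress scenario receives equal weight in the
    # forecast P&L distribution; VaR is the loss at the chosen percentile.
    combined = np.concatenate([historical_pnl, stress_pnl])
    return -np.percentile(combined, 100 * (1 - confidence))

rng = np.random.default_rng(1)
history = rng.normal(0.0, 1.0, 500)             # 500 days of (simulated) daily P&L, in $m
stress = np.array([-8.0, -12.0, -20.0])         # hypothetical crisis losses added to the history

print(corrected_hs_var(history, np.array([])))  # VaR from the unadjusted history
print(corrected_hs_var(history, stress))        # 'corrected' VaR including the stress scenarios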

This is a powerful and attractive argument for placing stress tests on a sound footing for use in
risk management. The key prerequisite for accepting this approach is a willingness to assign to
each stress-test scenario a subjective probability (it is not necessary to assume equal probabilities).
While putting stress testing on a solid foundation, this approach is not a panacea for the risk
manager. Even were this approach to be adopted, the risk manager would still be left to deal
with the anxiety and doubt over including inappropriate scenarios and excluding overlooked
scenarios. Interestingly, too, the Basel Committee’s risk-based capital rules may create a
disincentive for banks to adopt this integrated approach in favour of a fuzzier application of
stress testing. Integrating stress tests with the basic risk model will result in an increase in
measured VaR, which will result in an increase in regulatory capital. To the extent that capital
requirements are a binding constraint on the bank (or nearly so), the bank will incur some real
incremental economic costs using this approach.

Is that it, then? Is that the only application of stress testing that is sensible? Well, no, even if it is
the most theoretically comforting (i.e., internally consistent) application. As a risk yardstick,
stress tests also provide information to decision makers about risk taking in relation to risk
appetite. For this reason some institutions set stress-test limits at the enterprise level. Others use
stress-test results to measure capital usage or assess capital charges. Less formally, many
institutions rely on stress-tests results as a means for the decision maker to perform an intuitive
check on his or her comfort with the set of risks in the portfolio. In this sense, stress tests can
provide a ‘reality check’ or a way of assessing model risk (e.g., the sensitivity of valuation to
parameter estimates or inputs) where complex models are used in risk assessment. This use of
stress testing is long-established practice in the credit world. Regulators also view stress tests as a
way to assess their own comfort level with the risks being run individually by institutions for
which they are responsible, and more recently also to check their comfort level with the systemic
risks implied by the collective positions of the same institutions.

III.A.4.4 Stress Testing in Practice


Most of the information available on current practice was obtained from a survey of 43 large
financial institutions conducted by a task force of G-10 central banks established by the
Committee on the Global Financial System (2001). The results were also summarised by Fender
et al. (2001) and Fender and Gibson (2001a, 2001b). The survey asked risk managers to list the
most important stress scenarios used firm-wide.


The task force identified nine stress-scenario themes:


• four themes related to asset class (e.g., stress tests on commodity indices);
• four themes related to geographic region (e.g., stress tests on exposures to emerging markets).

The most common scenarios were the 1987 stock market crash, widening of credit spreads, and
various versions of a hypothetical stock market crash. Among interest-rate themed scenarios, the
bond market crash of 1994 was the most common. In the emerging markets theme, an Asian
crisis was the most common. US dollar weakness/strength scenarios were the most common
among the remaining geographic themed scenarios. Commodity themed scenarios focused most
often on a potential Middle East crisis. Stress tests, other than those centred on foreign
exchange, tended to reflect the predominance of long exposures in the respondents’ portfolios
(e.g., institutions conducted more spread widening than narrowing scenarios).

Banks are not terribly keen to disclose much about their stress-testing approach. Below are
produced extracts from the 2003 annual reports of the three largest US banks (by assets).

Citigroup’s report states: ‘Stress testing is performed on trading portfolios on a regular basis ... on
individual trading portfolios, as well as on aggregations of portfolios and businesses, as
appropriate. It is the responsibility of independent market risk management, in conjunction with
the businesses, to develop stress scenarios, review the output of periodic stress testing exercises,
and utilize the information to make judgments as to the ongoing appropriateness of exposure
levels and limits.’

Bank of America states: ‘[S]tress scenarios are run regularly against the trading portfolio to verify
that, even under extreme market moves, we will preserve our capital; to determine the effects of
significant historical events; and to determine the effects of specific, extreme hypothetical, but
plausible events. The results of the stress scenarios are calculated daily and reported to senior
management … .’

JP Morgan Chase, which typically provides the most comprehensive disclosures, states: ‘Stress
testing ... is used for monitoring limits, cross business risk measurement and economic capital
allocation. ... The Firm conducts economic value stress tests for both its trading and its non-
trading activities, using the same scenarios for both. The Firm stress tests its portfolios at least
once a month using multiple scenarios. Several macroeconomic event-related scenarios are
evaluated across the Firm, with shocks to roughly 10,000 market prices specified for each
scenario. Additional scenarios focus on the risks predominant in individual business segments


and include scenarios that focus on the potential for adverse moves in complex portfolios.
Scenarios are continually reviewed and updated to reflect changes in the Firm’s risk profile and
economic events. Stress-test results, trends and explanations are provided each month to the
Firm’s senior management and to the lines of business, to help them better measure and manage
risks and to understand event risk-sensitive positions. The Firm’s stress-test methodology
assumes that, during an actual stress event, no management action would be taken to change the
risk profile of portfolios. This assumption captures the decreased liquidity that often occurs with
abnormal markets and results, in the Firm’s view, in a conservative stress-test result. … [S]tress-
test losses are calculated at varying dates each month. … The following table represents the worst-
case potential economic value stress-test loss (pre-tax) in the Firm’s trading portfolio as predicted by
stress-test scenarios:

Trading Economic-Value Stress-Test Loss Results – Pre-Tax
(as of or for the year ended December 31; loss in USD $m)

                  2003                               2002(A)
     Ave.    Min.    Max.    Dec. 4       Ave.    Min.    Max.    Dec. 5
     508     255     888     436          405     103     715     219

(A) Amounts have been revised to reflect the reclassification of certain mortgage banking positions
from the trading portfolio to the non-trading portfolio.

‘The potential stress-test loss as of December 4, 2003, is the result of the “Equity Market
Collapse” stress scenario, which is broadly modeled on the events of October 1987. Under this
scenario, global equity markets suffer a sharp reversal after a long sustained rally; equity prices
decline globally; volatilities for equities, interest rates and credit products increase dramatically for
short maturities and less so for longer maturities; sovereign bond yields decline moderately; and
swap spreads and credit spreads widen moderately.’

III.A.4.5 Approaches to Stress Testing: An Overview


The definition and practice of stress testing encompass several different techniques (see Table
III.A.4.1 for a high level overview). The choice of which test to employ in a particular institution
and situation is driven by several factors. Regulatory requirements are important, of course.
Also important is the cost, in time and resources, needed to generate stress-test results. The
choice also reflects the specific needs of the users. These needs depend on the complexity of the


portfolio, the frequency with which it is traded, the liquidity of the instruments in the portfolio,
the volatility of the markets in which the instruments are traded, and the strategies employed.4

Table III.A.4.1: Typology of stress tests

Approach: Historical scenarios
   Description: Replay crisis event
   Pros: It actually happened that way
   Cons: Proxy shocks may be numerous; no probabilistic interpretation; no guarantee of ‘worst case’

Approach: Hypothetical scenarios
   Description: 1. Covariance matrix; 2. Create event; 3. Sensitivity analysis
   Pros: 1. Relatively easy; 2. Very flexible; 3. Can be detailed
   Cons: 1. Empirical support mixed; 2. No guarantee of ‘worst case’; 3. Limited risk information

Approach: Algorithmic
   Description: 1. Factor push; 2. Maximum loss
   Pros: 1. Minimal qualitative elements; 2. Identifies ‘worst case’ in feasible set (maybe)
   Cons: 1. No guarantee of ‘worst case’; ignores correlations; 2. Assumes data from normal periods are
         relevant; computationally intensive

Most of the regulatory attention has focused on stress testing at the portfolio level. For the
regulators it is the aggregated impact of stressed market environments that poses risks that
interest them. For some time international organisations have pursued the idea of aggregating
the results of standardised stress scenarios for individual financial firms into financial sector
stress tests to attempt to estimate the economy-wide impact of crisis events (e.g., the Committee
on the Global Financial System of the Bank for International Settlements, and the Financial
Sector Assessment Program of the IMF and World Bank). Stress testing of a sort is widely
practised at the individual trading desk, trader and position levels, too. To the extent that trading
desks already perform their own position-by-position stress testing, we have to ask why we
cannot merely aggregate those results for the purpose of examining exposure at the portfolio
level. Desk-level stress tests are (as they should be) highly focused on the specific risks being
run at the desk. Perhaps much of the information generated is not relevant at the aggregate
portfolio level. Equally important, risks that are deemed to be of ‘second order’ at the trading
desk, and hence are subject to little or no stress testing, may be important (at the margin at least)

4 The discussion that follows pertains most directly to stress testing for traded instruments. Stress testing either the
banking book or a non-financial firm’s exposures presents additional challenges and needs. Approaches specific to
those needs have been developed (e.g., net interest income stress tests and economic value added stress tests), but they
are outside the scope of this discussion.


when viewed in the context of risks being taken at other desks. Also, desk-level tests are used for
evaluating and actively managing risk position by position (or at least strategy by strategy),
possibly in near real time. Portfolio stress testing serves a more strategic purpose in the
identification and control of event-related risk, and the most useful metrics for the one purpose
need not also serve best for the other. Desk-level stress tests tend to be more of the factor-push
variety, whereas portfolio stress tests tend to employ the scenario approach.

Irrespective of whether the stress-testing programme is intended for desk level or portfolio level
risk management, useful stress tests require full revaluation for all positions with nonlinear or
discontinuous payouts.5 Perhaps it is foolish to state the obvious. However, full revaluation can
entail significant computational and time costs, and it is tempting to trade off accuracy for speed.
But, as the raison d’être of stress testing is to explore portfolio losses in crises where nonlinearities
and discontinuities may expose hidden risks, approximate revaluation approaches should be
strictly limited. It may be necessary to develop alternative revaluation strategies to overcome
technology constraints that limit the employment of full revaluation for other risk management
purposes. The only exception to this requirement is for some desk-level stress testing. Here
stress testing of sensitivity-based risk exposure information (deltas, vegas, etc.) is useful for
tactical portfolio management decisions as well as being perhaps the only practical way of
delivering stress results on demand or in real time.

III.A.4.6 Historical Scenarios


Historical scenarios, as a variety of stress testing, seek to quantify potential losses based on re-
enacting a particular historical market event of significance. Scenario shocks that determine the
impact on portfolio valuation are taken from observed historical events in the financial markets.
This is in contrast to stress tests where shocks are based, for example, on specified changes in the
covariance matrix of asset returns, or on shifting prices or rates by an arbitrary number of
standard deviations. We are all empiricists when we say, ‘it is reasonable, because it actually
happened’. Historical scenario stress testing is required by the Basel Committee, prescribing that
‘scenarios could6 include testing the current portfolio against past periods of significant
disturbance, for example, the 1987 equity crash, the ERM crises of 1992 and 1993 or the fall in
bond markets in the first quarter of 1994, incorporating both the large price movements and the
sharp reduction in liquidity associated with these events’ (Basel Committee on Banking
Supervision, 1996).

5 Discontinuities can present a challenge for stress tests of all types, except perhaps in principle ‘maximum loss’. It
may be prudent to construct stress tests with the points of discontinuity specifically in mind.
6 With regulators, to say ‘could’ is to say ‘do this unless you have some compelling reason not to’.


It sounds so easy. Designate a period in history with a suitable crisis environment and find out
what would happen to the current portfolio if that crisis were replicated today. However, as is
the case with so many things, it really is necessary to ‘sweat the details’. I will consider the
elements of historical stress testing in turn, namely, choice of a historical period and specifying
shocks.

III.A.4.6.1 Choosing Event Periods


The first question in choosing a historical period for stress testing is which periods to choose. A
historical event may be defined in one of two ways. In the first, the event is defined relative to a
well-known crisis period, such as the Asian crisis of 1997. In the second, the event is defined by
examining the historical record of moves in market risk factors relative to some user-defined
threshold level of shocks. The second approach will no doubt also turn up events that
correspond to most well-known crises, but may identify other event periods as well, depending
on the particular risk factors whose histories are being scanned for large movements. The former
approach is more prevalent.

Some potential scenarios were mentioned in previous sections. Common candidates for
historical stress tests include the following: the US stock market crash of 1987, the European
exchange-rate mechanism crisis of 1992, the US bond market sell-off of 1994, the Mexican peso
crisis of 1994, the so-called Asian crisis of 1997, the Russian default of 1998, and the LTCM and
liquidity crises of 1998. The attacks of 11 September 2001 also constitute a candidate.7 Davis
(2003) creates a typology of financial crises. Systematic characterisation of market crises may be
useful both for ensuring that the set of events employed in a stress-testing programme contains a
reasonable portion of the spectrum of possible crises, and for guiding the construction of
hypothetical scenarios (discussed later).

The next question is how many events should be part of the stress-testing programme. No
matter how many historical periods are selected, it is not possible to guarantee that the
prospective worst-case scenario is covered. It can be hoped, however, that a judicious selection
of scenarios will provide indicative information about areas of vulnerability, especially in
identifying risks that may not be obvious from other risk measurements, such as VaR. Ideally,
the risk manager will vary the number and variety of historical scenarios evaluated through time
depending on both the changing composition of the portfolio and the changing economic
environment.

7 See, for example, Malz and Mina (2001), Hickman and Jameson (2001) and Jorion (2002).


The first thing one realises when looking at a historical event candidate for a stress scenario, say
the liquidity crisis of 1998, is that the start and end dates of the event are not always obvious.
For any particular market rate, the period of interest is likely to be unambiguous. However, the
more complex and varied the instruments in the portfolio to be stressed, the more difficult is the
problem of identifying the start and end date for the stress event. Two approaches are possible.
• Define the event interval, making sure that the interval selected encompasses all (or
essentially all) of the significant moves in individual market rates. Then use as the shock
magnitude for each risk factor the greatest change in that factor (e.g., peak-to-trough)
found within the interval, regardless of the start or end date. The advantage of this is
that the scenario will entail the largest possible moves in each risk factor. The
disadvantage is that the shocks, when taken together, may make no economic sense.
• Define the event interval such that it comes as close as possible to capturing exactly
the greatest moves in the factors of most interest. That is, the peaks (or troughs) occur
at the start date and the troughs (or peaks) occur at the end date. Then use as the shock
magnitude for each risk factor the change in that factor from the start date to the end
date. The disadvantage is that the ideal event window cannot be achieved in practice.
The advantage is that the scenario has the potential to be economically meaningful.
The second approach is preferred, as a key element in establishing the plausibility of a scenario is
that the shocks, taken together, must be sensible.

Historical scenarios rarely play out within a single trading day. Given international market
linkages observed through contagion and feedback effects, even an event sharply focused in time
is likely to engender after-shocks that continue for a few days. More commonly, an event will
develop over a period of a few days or even weeks, as in the liquidity crisis of 1998. As a result
the specification of historical events raises questions about how the passage of time is affecting
the test results. Two areas of concern come to mind: trading or hedging out risks, and modelling
the effect of time passing on expiration, maturity and ‘carry’ (e.g., for fixed income or options
positions).
• No matter how illiquid a market, there is a price at which a portfolio manager can trade
out of a position. Since those costs may be prohibitive, it is common to assume in
historical stress scenarios that positions cannot be traded or hedged – no matter how
long the interval of calendar time spanned by the scenario. It is common to argue (at
least, I have argued) that this assumption, while extreme, approximates a worst-case
scenario in which illiquidity is extreme. Still, it stretches the plausibility requirement to
apply this assumption to a very extended stress period (as traders will readily argue).


• Another question that arises with historical scenarios is how to model the impact of the
passage of time on instruments in the portfolio whose values depend on time.
Instruments that are affected include futures and forwards, bonds and options. It is
possible to argue that the scenario telescopes the historical record into a single trading
day, thereby making it unnecessary to deal with the passage of time, and this is the
prevalent approach. However, by assuming that time is telescoped, the effects of
illiquidity on the portfolio are muddled somewhat, and the plausibility of such assumed
one-day moves in rates can be questioned. If the passage of time is allowed for explicitly,
the expiration of options and the cash flows from payouts, if any, need to be
incorporated into the scenario. Similarly, bond coupons and repo payments should be
considered. Allowance must be made, too, for rolling of forwards and futures.

III.A.4.6.2 Specifying Shock Factors


A fundamental element in specifying shocks for a historical scenario is the choice of relative (or
proportional) versus absolute (or additive) shocks. Consider the following example. Suppose
that the GBP/USD exchange rate is currently 1.8. Also assume that during the period of
historical interest the rate moved from 1.5 at the beginning of the event to 1.75 at the conclusion
of the event. The absolute change in the exchange rate during the event was 0.25, and the
relative change was 16.7% (appreciation of the GBP). A decision must be made whether to
calculate the shocked exchange rate as 1.8 + 0.25 = 2.05 or 1.8 × 1.167 = 2.1. This issue arises in
VaR modelling as well, but because the changes in market risk factors are generally larger in stress
testing, the effects can be more dramatic. If the levels of market rates are similar between the
beginning of the historical event and the current stress-test ‘as of’ date, then the choice is less
important. However, this will not generally be the case.
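
The GBP/USD example reduces to a couple of lines of arithmetic, reproduced below as a minimal sketch.

    # Relative versus absolute shocks for the GBP/USD example in the text.
    current = 1.80
    event_start, event_end = 1.50, 1.75

    absolute_shock = event_end - event_start            # 0.25
    relative_shock = event_end / event_start - 1.0      # about 16.7%

    shocked_absolute = current + absolute_shock         # 1.80 + 0.25 = 2.05
    shocked_relative = current * (1 + relative_shock)   # 1.80 x 1.167 = 2.10
    print(shocked_absolute, round(shocked_relative, 2))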

Relative shocks are generally preferred for a couple of reasons. Firstly, when applying a relative
shock one will not inadvertently cause a rate to change sign. Secondly, a relative shock (generally)
corresponds directly to the rate of return on a portfolio, which is (generally) thought to be the
parameter of interest in an individual’s utility function. Nevertheless, relative shocks are not
always appropriate. Some market spreads can be either positive or negative. To maintain that
property in a stress scenario, absolute shocks need to be applied. Applying relative shocks to
interest rates can be a problem as well. For example, a relative interest-rate shock that is derived
from a historical period when interest rates are very low may imply unrealistic moves in rates
when they are at higher levels. As a general rule, then, most shocks should be relative. However,
shocks to interest rates generally should be absolute. Shocks to volatility should generally be
relative, in part because volatilities cannot be negative. Exceptions to these rules should be made
on a rate-by-rate basis where it is appropriate to do so.


An issue arises when the portfolio’s fixed-income instruments are priced from both zero curves
and par curves, perhaps as a result of the way different trading desks prefer to view their risks, or
when different portfolios are marked using different back-office systems. In this case,
consistency between the historical shocks for par curves and zero curves must be imposed.

Applying historical interest-rate shocks to current yield curves requires special attention as well.
Years of research (and probably quite a few doctoral theses) have shown that typically three
factors can explain about 95% of the movements in yield curves, commonly interpreted as level,
slope and curvature. Ideally, it would be the shocks arising from those three key yield curve
factors that are used in scenario construction, perhaps through principal components analysis
(PCA) as described in Frye (1997). If historical (absolute) changes at various points on the yield
curve are applied point-by-point to the current yield curve, the problem that can arise is that the
‘shocked’ yield curve can take on some very implausible shapes. It is a good idea, at the very least,
to monitor the ‘look’ of shocked curves as part of the stress-test process.

PCA is useful for generating realistic ‘shocked’ yield curves in a tractable manner. Firstly,
dimensions are reduced so that only the three key risk factors need to be shocked. Secondly,
these factors are orthogonal, so they can be shocked independently and the shocked result will be
a realistic curve. This technique is equally important when the scenario specifies shocks to any
term structure, such as the term structure of volatility, a term structure of commodity futures of
different maturities and a term structure of foreign exchange rates. If PCA is not applied it may
be necessary to modify historical shocks that give rise to implausible curve shapes, perhaps by
applying the Cholesky matrix (derived from the historical covariance matrix) to the independent
shocks, thus making them correlated (see Section II.D.4).
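
The sketch below illustrates the idea: estimate the leading principal components from a history of daily yield-curve changes and build a shocked curve by pushing each component by a chosen number of its own standard deviations. The simulated history, the base curve and the shock multiples are illustrative assumptions, not recommended values.

    import numpy as np

    rng = np.random.default_rng(0)
    tenors = np.array([0.25, 0.5, 1, 2, 3, 5, 10, 30])

    # Simulated history of daily yield changes (in percentage points), with
    # nearby tenors more highly correlated, standing in for real data.
    kernel = 0.0004 * np.exp(-0.3 * np.abs(np.subtract.outer(tenors, tenors)))
    history = rng.multivariate_normal(np.zeros(len(tenors)), kernel, size=500)

    cov = np.cov(history, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]                  # sort by explained variance
    pcs, variances = eigvec[:, order[:3]], eigval[order[:3]]

    # Push the three leading components (roughly level, slope and curvature)
    # by illustrative multiples of their standard deviations.
    multiples = np.array([3.0, -2.0, 1.0])
    curve_shock = pcs @ (multiples * np.sqrt(variances))

    base_curve = np.array([4.00, 4.10, 4.30, 4.50, 4.60, 4.80, 5.00, 5.20])
    print(np.round(base_curve + curve_shock, 3))

Because the components are orthogonal, each can be shocked independently and the reconstructed curve retains a plausible shape.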

III.A.4.6.3 Missing Shock Factors


In many instances it is not possible to refer to the historical record for a particular instrument
when specifying stress shocks. The more distant in the past is the historical scenario, the more
likely this is to be a problem. In some cases instruments in the current portfolio simply were not
traded during the historical period. For example, default swaps were not traded at the time of the
Mexican peso crisis. In other cases, even if instruments were traded, there is no reliable source
for historical data; either the data are bad or there are no data. It is necessary to have a policy for
dealing with ‘missing’ shock factors.

A simple rule to follow is not to leave any current positions without an assigned historical shock
unless a zero shock is the best guess at what that shock would have been. If positions in
particular instruments are deemed to be immaterial at the time of the specification of the


scenario, the situation may subsequently change, and the specification of the scenario would then
need to be revisited. If stress scenarios become part of the risk limit structure or capital
allocation or performance evaluation processes, leaving positions without shocks not only results
in an inferior decision-making process but also creates incentives for strategic behaviour that may
not be desirable from the perspective of risk appetite.

Assumptions about correlations play a big role in following the leave-no-position-unshocked rule,
as the best guess is usually obtained from examining historical shocks of instruments thought to
be highly correlated with the position in need of a shock assignment. There are two basic
approaches to making such assignments: employing proxies or using interpolation.

• In the case of proxies, a shock is assigned from another instrument. Equities are a good
example. With equities, mergers, spin-offs and changes in business focus may make the
existing historical record irrelevant, at least if plausibility is to be maintained. In this case
it may be prudent to at least ensure that changes in a company's industry classification
are noted. Then a policy decision can be made whether to ignore the history and
instead proxy shocks for such equities to a historical industry-specific shock factor.
When assigning proxies, it is useful to employ more rather than fewer proxies (equity
sector proxies, rather than just a single market proxy), in order to obtain a better-
articulated result. More sophisticated data-filling approaches than those discussed here
are possible as well, of course.

• Interpolation (or extrapolation) may be appropriate in the case of fixed-income
instruments. For example, swap and forward foreign exchange markets tend both to
become more liquid and to extend over time as comfort increases in assessing the longer-
term risks. This filling in and out of the term structure is especially noticeable in
emerging markets. In part because of the correlation between instruments of different
tenors, and in part because of the structure of implied forward rates, it is reasonable to
interpolate rate shocks (usually linearly) from available historical shocks at adjacent
points in the term structure, as sketched after this list.
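
A minimal sketch of the linear interpolation mentioned in the last bullet, using purely illustrative shock values at the available tenors:

    import numpy as np

    # Historical (absolute) shocks, in basis points, available only at some tenors.
    known_tenors = np.array([1.0, 2.0, 5.0, 10.0])
    known_shocks = np.array([45.0, 60.0, 80.0, 70.0])

    # Tenors held in the current portfolio that lack a historical shock.
    needed_tenors = np.array([3.0, 7.0])
    interpolated = np.interp(needed_tenors, known_tenors, known_shocks)
    print(dict(zip(needed_tenors.tolist(), interpolated.tolist())))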

It should be clear that even when using historical scenarios for stress testing, scenario creation is
not a once-only event. Not only should new scenarios be constantly under consideration for
development, but existing scenarios also need to be re-evaluated and sometimes tweaked to
maintain their usefulness. This can be tedious and unexciting, so it is a good idea to establish a
policy of formally reviewing stress scenarios periodically, to help maintain a good discipline.


III.A.4.7 Hypothetical Scenarios


History does not conveniently present the risk manager with a template for every plausible future
market crisis (though the sample size of crises does keep increasing with time). For this reason, it
may be desirable to create a hypothetical economic scenario as a stress test. Ideally, a
hypothetical scenario is based on a structural model of the global financial markets (perhaps with
a ‘real’ or physical goods and services component, too), in which the specification of a
parsimonious set of market shocks provided as inputs to the model will result in a complete
specification of responses in all markets. Well, in most cases that is not going to happen.8 Still, it
is good to keep that ideal in mind when constructing an economic scenario, because it is very
easy to make a bad scenario by ignoring cause, effect and co-determination in economic
relationships.

III.A.4.7.1 Modifying the Covariance Matrix


Some argue that the key feature of a stress event is embedded in the behaviour of asset
correlations. The intuition is strong. In a crisis, investors may make fewer distinctions among
assets and issue blanket buy or sell orders in a flight to safety that tends to drive whole classes of
assets in the same direction. Or interconnections between market participants may manifest in a
crisis where the actions of one agent create the need for other agents to take similar actions. Or
interconnections between markets may manifest when an agent, faced with a liquidity crisis in
one market, attempts to liquidate positions in other markets, precipitating a liquidity crisis
throughout the system. A stress test can be constructed from a modified covariance matrix in
several ways. For example, if it is assumed that asset returns are jointly normally distributed, then
the stress portfolio valuation can be obtained by computing the monetary value of a one standard
deviation change in the portfolio value (using the modified covariance matrix) and scaling up the
result by the desired number of standard deviations (a common multiplier is 4).
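
As a sketch of that calculation, with hypothetical monetary positions, a hypothetical stressed covariance matrix and the multiplier of 4 mentioned above:

    import numpy as np

    # Monetary positions in three assets and a modified ('stressed') annualised
    # covariance matrix of their returns; all figures are hypothetical.
    positions = np.array([2_000_000.0, -1_500_000.0, 500_000.0])
    stressed_cov = np.array([[0.040, 0.030, 0.020],
                             [0.030, 0.045, 0.025],
                             [0.020, 0.025, 0.055]])

    portfolio_sigma = np.sqrt(positions @ stressed_cov @ positions)  # one-sigma P&L
    multiplier = 4
    print(round(multiplier * portfolio_sigma))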

As with any intuition, it is prudent to test it against the available data. Boyer et al. (1999) show
that careless data mining can lead one to conclude incorrectly that correlations are different in
stressful markets. However, they acknowledge that there is evidence that correlations do change.
Taking Boyer et al.'s points into account, Kim and Finger (2000) conduct an empirical study from
which they conclude that the data provide considerable support for the existence of a separate
stressful market environment with distinct asset correlations. In particular, they propose that
observed asset returns are generated by a mixture of normal return-generating processes, one for
an ordinary market environment and one for a stressful market environment. Nevertheless,

8 However, that is exactly the approach taken by UK regulatory authorities who employed a macroeconomic model in
their experimentation with macroeconomic stress tests. Hoggarth and Whitley (2003) present a very interesting
discussion of the issues involved.


Loretan and English (2001) argue that the empirical evidence might not support correlation
breakdown, but rather be the residual of time-varying volatility. Similarly, Forbes and Rigobon
(2002) argue that, taking into account the apparent relation between correlation and volatility,
they are unable to find evidence of changes in correlations during the 1997 Asian crisis, the 1994
Mexican peso crisis, or the 1987 US stock market crash. Then again, Dungey and Zhumabekova
(2001) and Corsetti et al. (2002) say that the results in Forbes and Rigobon (2002) may be
overstated, first because the number of crisis periods in the sample is small, and second because
their econometric specification is too restrictive.

In sum, despite the intuition, the empirical evidence is not uniformly supportive of the notion
that correlations increase in crisis situations. Still, the force of intuition is strong, and it is
common to construct stress scenarios with increases in correlation. Note, however, that
increased correlation does not by itself guarantee that a portfolio will be stressed. It is simple to
construct thought experiments in which the VaR of a portfolio declines as correlations increase.

It is not advisable to change correlations by arbitrarily setting selected correlations to 0, 1 or –1,
because implausible stressed portfolio returns can result (specifically, covariance matrices that are
not positive semi-definite, meaning that some portfolios could have negative variance!). As a
result, any stress scenario that involves causing correlations to differ from the relationships
embedded in the historical data that were used to estimate them must follow certain rules.

When changing the correlation between two risk factors, it is important to understand that the
correlations can only change in reality if the underlying returns on the two factors change relative
to each other (possibly in such a way that the average return and the variance of the two do not
change). If those underlying returns change, then by implication every correlation between each
of those two risk factors and the remaining risk factors may change, too. Thinking in terms of
the correlation matrix, if we want to change the value of one correlation, which means changing
the value in two cells of the matrix (from the symmetry property), then all the correlations in the
rows and columns intersecting those two cells can change as well. Generally, many possible pairs
of altered return vectors will yield any desired correlation (given average returns and variances).
By specifying the method to (explicitly or implicitly) adjust the return vectors, it will be possible
to determine the corresponding induced changes to the other affected correlations that are
necessary to maintain consistency.

Finger (1997) proposes modifying the return vectors of the risk factors whose correlations are to
be modified such that the return on each factor on any day, t, is a linear combination of the
historically observed return of that factor on day t and the average of the returns on the affected


market factors on day t. The transformed return vectors will need to be rescaled if the original
individual variances are to be unchanged. Note that shocking individual asset variances can be
incorporated into this method as well. Further, an adjustment may also be made to ensure the
mean returns are unchanged, but this is not necessary if the mean returns are ignored (i.e.,
assumed to equal zero) for purposes of the risk calculations.

Table III.A.4.2 contains statistics derived from the logarithms of the daily changes in closing
prices (in New York) for IBM, GE and MSFT for the year ending on 20 April 2004. The entries
above the diagonal are correlations;9 the remaining entries are variances and covariances (all
annualised from the daily data).

Table III.A.4.2: Estimated correlations and annualised variances and covariances


IBM GE MSFT
IBM 0.038 0.411 0.558
GE 0.016 0.041 0.474
MSFT 0.025 0.022 0.054

In this example, the correlation between IBM and MSFT will be increased to 0.850 (from 0.558)
using Finger’s methodology. Table III.A.4.3 illustrates the operations that are performed on the
IBM and MSFT return vectors.

Table III.A.4.3: Modification of returns on a representative date (4 June 2003)

Observed returns:      R(IBM) = 0.005117                                    R(MSFT) = 0.0004
Modified returns:      R(IBM_mod)  = λ × (0.005117 + 0.0004)/2 + (1 − λ) × 0.005117
                       R(MSFT_mod) = λ × (0.005117 + 0.0004)/2 + (1 − λ) × 0.0004
Normalised returns:    R(IBM_mod) × σ_IBM/σ_IBM_mod                         R(MSFT_mod) × σ_MSFT/σ_MSFT_mod

The parameter λ determines the modified correlation between the two return series. For any
choice of λ, returns for every date are modified as illustrated for the representative date in the
table. If λ = 1, then the two time series will be perfectly correlated. A simple numerical search
(e.g., with Solver in MS Excel) can be used to identify the value of λ that results in a correlation
equal to the target level of 0.850. For the data used here, the desired correlation is obtained with
λ = 0.4625. Table III.A.4.4 shows the final modified correlations and covariances for these three
equities.

9 For example, Corr(IBM, GE) = Cov(IBM, GE)/{Var(IBM) × Var(GE)}^1/2 = 0.016/{0.038 × 0.041}^1/2 = 0.411.


As before, the entries above the diagonal are correlations, and the remaining entries are variances
and covariances. As should be expected, given the methodology, the correlations between GE
and both IBM and MSFT have been affected by changing the correlation between IBM and
MSFT.

Table III.A.4.4: Modified correlations and annualised variances and covariances


IBM GE MSFT
IBM 0.038 0.470 0.850
GE 0.018 0.041 0.498
MSFT 0.038 0.023 0.054

When several instrument pairs are chosen to have their correlations fixed at a level different from
the historical correlation, this approach in general requires computing a separate λ for each of
those pairs of instruments. In this case, because of the interdependencies among the risk factors,
it will be necessary, in general again, to solve simultaneously for the set of λs that together yield
the desired set of correlations.
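
For a single pair, the numerical search can be sketched as follows; the two return series are simulated stand-ins for the actual IBM and MSFT data, and lam plays the role of the λ used above.

    import numpy as np
    from scipy.optimize import brentq

    def blend(r1, r2, lam):
        # Mix each return with the average of the pair, then rescale so that
        # the original variances are preserved (the transformation described above).
        avg = 0.5 * (r1 + r2)
        m1 = lam * avg + (1 - lam) * r1
        m2 = lam * avg + (1 - lam) * r2
        return m1 * r1.std() / m1.std(), m2 * r2.std() / m2.std()

    def corr_after(lam, r1, r2):
        m1, m2 = blend(r1, r2, lam)
        return np.corrcoef(m1, m2)[0, 1]

    # Simulated daily log returns standing in for the actual price histories.
    rng = np.random.default_rng(0)
    r1 = rng.normal(0.0, 0.012, 250)
    r2 = 0.55 * r1 + rng.normal(0.0, 0.010, 250)

    target = 0.85
    lam = brentq(lambda l: corr_after(l, r1, r2) - target, 0.0, 1.0)
    print(lam, corr_after(lam, r1, r2))

With several pairs, as noted above, the corresponding values of λ would in general have to be found simultaneously.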

More general techniques for modifying correlations in a consistent manner have been suggested;
see, for example, Kupiec (1998), Rebonato and Jäckel (1999), Higham (2002), or Turkay et al.
(2003). These approaches focus explicitly on eliminating the negative eigenvalues of the stressed
correlation matrix, and seek to identify the consistent matrix that is ‘closest’ to the stressed matrix
(according to some metric).
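
A cruder but easily implemented repair simply clips negative eigenvalues and rescales the diagonal back to unity; it does not produce the 'nearest' matrix in the sense of the papers just cited, but it illustrates the requirement. The stressed matrix below is an illustrative example of an inconsistent set of correlations.

    import numpy as np

    def repair_correlation(corr, floor=1e-8):
        vals, vecs = np.linalg.eigh(corr)
        vals = np.clip(vals, floor, None)          # eliminate negative eigenvalues
        fixed = vecs @ np.diag(vals) @ vecs.T
        d = np.sqrt(np.diag(fixed))
        return fixed / np.outer(d, d)              # rescale to unit diagonal

    stressed = np.array([[ 1.0, 0.9, -0.9],
                         [ 0.9, 1.0,  0.9],
                         [-0.9, 0.9,  1.0]])       # not positive semi-definite
    print(np.linalg.eigvalsh(stressed))            # one negative eigenvalue
    print(np.round(repair_correlation(stressed), 3))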

III.A.4.7.2 Specifying Factor Shocks (to ‘create’ an event)


Rather than creating a stress test through modification of the covariance matrix, it is possible to
create a hypothetical scenario simply by specifying hypothetical shocks to the market factors.
Without an economic model, it is a daunting task to attempt to describe a coherent set of
hypothetical shocks encompassing every market risk factor. Thus, even for a hypothetical
economic scenario, actual historical behaviour of market prices can provide useful guidance for
the specification of plausible shocks.

Another element in specifying shocks is to specify which no-arbitrage relationships are to hold in
the scenario. ‘Arbitrage’ is a term that is used very loosely, so much so that a variation has come
into use, pure arbitrage. A pure arbitrage is an opportunity for a riskless and certain profit. Pure
arbitrage is achieved through the implementation of a self-financing ‘replicating portfolio’ (or
exact hedging strategy). Speaking somewhat loosely, instruments that are (in principle) tied by


this type of arbitrage relationship include simple European options and their underlying assets,
forwards/futures and the corresponding cash market instrument, and relative exchange rates.
The scenario designer must decide which relationships are to be fixed in the scenario. In part,
this decision is helped by observing what happened in the historical record. In the case of the
relationship between futures and cash equities, it has been observed that the parity relationship
did not hold continuously through the 1987 market crash. Nevertheless, if desired, the futures–
cash relationship can be enforced in scenario construction simply by defining the futures price to
be fairly valued relative to the shocked index level, forgoing the implementation of a separate
historical shock for the futures.
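
Enforcing the futures–cash relationship in this way amounts to repricing the future off the shocked index by cost of carry; a minimal sketch, with illustrative inputs:

    import math

    spot, equity_shock = 1000.0, -0.20     # index level and assumed index shock
    r, q, tau = 0.03, 0.02, 0.25           # financing rate, dividend yield, time to expiry (years)

    shocked_spot = spot * (1 + equity_shock)
    shocked_future = shocked_spot * math.exp((r - q) * tau)   # fair value vs. shocked index
    print(round(shocked_future, 2))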

Currency traders sometimes will make large bets on the actions of governments that either have
pegged or are managing the float of their exchange rate. For this reason, it is important when
constructing a hypothetical economic scenario to make a conscious decision about what to do
with pegged currencies. Depending on the scenario, it may be appropriate to assume that certain
pegs are broken. The shocks that are assumed (including spill-over effects), then, may depend on
historical cases when other currency pegs were broken, or on expectations of the future exchange
rate implied in non-deliverable currency forwards.

An example of a hypothetical economic scenario is the ‘commodity themed’ Middle East crisis
scenario common among banks, as noted in Section III.A.4.2. It may be assumed in such a
scenario that the outbreak of war results in a disruption of oil production. Some spike in crude
prices and volatility must then be assumed. The impact on related energy products prices is then
estimated. However, the impacts do not end there, as such an event is likely to modify investor
expectations of future inflationary impacts, lead to possible shortages in certain markets, and
bring about pre-emptive central bank responses, etc., all of which will affect asset prices and
volatilities. For all of these effects factor shocks must be assumed in a way that creates a
plausible and coherent picture of the impacts on various markets. These shocks to market prices
and rates are then applied to portfolio positions to evaluate potential exposure to the hypothetical
event.

III.A.4.7.3 Systemic Events and Stress-Testing Liquidity


Another hypothetical economic scenario that is of great interest is a systemic liquidity event. The
stress tests discussed above do not probe vulnerabilities arising from the interrelationships
among institutions. The voluntary withdrawal of a derivatives dealer from the market is a
commonly cited event of this type; see, for example, Greenspan (2003) and Jeffery (2003). Since
the LTCM and liquidity crises of 1998, regulators and some money centre banks have shown
increasing interest in the both immediate and the follow-on effects of such an event. The focus

Copyright © 2004 B. Schachter and The Professional Risk Managers’ International Association. 22
The PRM Handbook – III.A.4 Stress Testing

of this interest is on the mechanisms that tie together institutions and provide channels for
contagion and feedback. Borio (2000) writes that, ‘for a proper understanding of liquidity under
severe stress, the interaction of basic order imbalances with cash liquidity constraints and
counterparty risk needs to be explained. Leverage and risk management play a key role. It also
suggests that some factors that may contribute to liquidity in normal times can actually make it
more vulnerable under stress.’

Consider the following features of the financial system, all of which are generally regarded as
improving stability and efficiency in normal markets. Firstly, collateral requirements for over-the-
counter derivatives and performance bonds on exchange-traded derivative transactions reduce
the likelihood of default in normal times. However, an organisation that trades these may plan to
have liquidity sufficient for ‘normal’ daily mark-to-market contingencies and long-term average
liquidity needs, but may still find itself unable to post the collateral as required in a significant
market event. Failure to do so forces the counterparty to absorb a portion of the loss. This loss
may, in turn, force the counterparty to fail to make required payments on its obligations, creating
a contagion of defaults.

Secondly, risk management policies, such as risk limits, will moderate risk taking to tolerable
levels in normal times. However, where a market event causes a sharp increase in measured risk,
traders may choose (or be directed) to exit positions to avoid triggering risk limits. The resulting
trades may add to volatility and create further knock-on effects. This likelihood is greater
if large market participants measure and limit risk taking in similar ways, or if risk managers at
large firms respond to market events in similar ways. Then these individual responses will be
amplified at the level of the financial system as a whole.

Thirdly, position transparency and risk disclosures, as well as advances in information technology,
generally promote efficient price discovery and fair market pricing. However, they may also
contribute to ‘herd’ behaviour, in which large market players have similar trades, and in which
market participants react similarly and simultaneously in response to an event. This behaviour
can exacerbate the initial effects of an event.

Fourthly, a linchpin of efficient functioning of the over-the-counter derivatives market is the
ability of dealers to control portfolio risk exposures through the construction of synthetic hedges
(replicating the desired set of risk characteristics through another portfolio of instruments).
Illiquid sectors of the market rely on the instruments traded in the more liquid sectors to create
hedges cheaply. A reduction in liquidity in the liquid sectors can significantly affect the prices in
the illiquid sectors. Thus pricing is interrelated across the range of derivatives products. If a


large dealer were to pull back from trading, the resulting reduction in market liquidity would
create losses at other institutions, as markets repriced to reflect the reduced liquidity. In some
instances, less liquid instruments might become uneconomical to trade, forcing institutions to
exit those positions, causing further price impacts in a cascading effect.

The data needs for an institution to construct such a stress test are great. But, as with any stress-
testing method, value-added information is still possible even though the reality falls short of the
ideal. Firstly, an institution must incorporate counterparty information in its risk database.
Secondly, an institution should be able to identify the impact on required collateral of a proposed
set of systemic shocks. This may be especially difficult for institutions with many types of
instruments held with a prime broker. Prime brokers will take a portfolio view of their
counterparty risk and the algorithm they apply for determining collateral may not be transparent.
Thirdly, an institution should estimate the distribution of positions across the financial system.
For example, the institution should ask whether dealer A is the only dealer making a market in
certain instruments in the institution’s portfolio; who are the others and what is the market share
of each; and whether the market is one-sided (e.g., the dealer is long in comparison to most
counterparties and is primarily relying on hedges to control overall risk) or two-way (the dealer’s
book has a balance of long and short positions). For the instruments that the institution uses in
its own hedging, it should identify the other major institutions that either make markets in those
instruments or heavily employ those instruments and estimate market share. Since much of this
information is not internal to the institution, some estimation will be necessary. Some potentially
useful sources of data for this type of stress test are BIS statistical summaries, reports of
derivatives activities of banks published by the US Comptroller of the Currency, and the
Commodity Futures Trading Commission's Commitments of Traders reports.

As is the case with data, specifying shocks can be a significant challenge as well. Some historical
guidance is available by reviewing the behaviour of markets and spreads in periods of illiquidity,
such as October 1998. It may be possible to use the institution’s own trading data to estimate
market impact functions in certain instruments. An impact function measures the cost of trading
as a function of position size, given other parameters that describe the trading environment (e.g.,
volume and volatility). The institution’s own pricing models may be used to estimate the price
impact on positions of a wider bid–ask spread. Ultimately, it will be necessary to rely heavily on
intelligent guesstimates of possible shocks.

Supranational organisations that have an interest in the financial stability of the global economy
are embracing a similar approach to stress testing, the most prominent effort being the Financial
Sector Assessment Program (FSAP). The umbrella organisation for these efforts is the Financial
Stability Forum (www.fsforum.org), with the actual work being undertaken by the International
Monetary Fund (www.imf.org) and the World Bank (www.worldbank.org) in conjunction with
local country supervisory authorities. See, for example, Blaschke et al. (2001) and International
Monetary Fund (2003) for an overview of this effort.

III.A.4.7.4 Sensitivity Analysis


Sometimes it may be desirable to create simple, somewhat artificial portfolio shocks, in a method
referred to as sensitivity analysis. In this type of stress test at most a few risk factors are shocked
and correlation is typically ignored. These are easy to implement, but they only provide a partial
picture, and must be accompanied by a lot of judgement on the part of the risk manager.
Examples of sensitivity analysis are a parallel shift of the yield curve, or a 10% drop in equity
prices. The Derivatives Policy Group (1995) report contains recommendations for a parallel
yield curve shift of 100 basis points, and for curve steepening and flattening of 25 basis points.10

Since the biggest losses do not always correspond to the largest moves in factors, it is common in
scenario analysis to create a ‘ladder’ of shocks, in which price impacts are calculated for
intermediate values of the risk factors. Design of these ‘sensitivity ladders’ requires attention to
two issues: granularity and range.

The range of shocks should be wide enough to encompass both likely and unlikely (but plausible)
moves in the market factors. The recommendations of the Derivatives Policy Group noted
above might have represented an adequate range in 1995, but would probably be considered
inadequate in 2003–2004, when interest-rate volatility was very high. Similarly, the range selected
should take into account the time horizon for the analysis, with the range increasing with the
horizon (except, perhaps, for very strongly mean-reverting risk factors). For these reasons, it
makes sense to use the volatility (or perhaps, empirical percentiles of the percentage change) of
the factor to determine the range.

The granularity of the analysis refers to the distance between the rungs of the ladder, or more
exactly, the increment chosen for the change in the risk factor. Granularity should reflect the
nature of the portfolio. A portfolio whose valuation function is linear in the risk factor can have

10 Implementing even these simplistic hypothetical market moves contains some hidden issues that can have a big

impact on the results. For example, to implement changes in curve slope, a point on the curve must be chosen around
which to rotate the curve, and a second point on the curve must be chosen from which to measure the amount of
steepening or flattening. These points should reflect the market environment and the manner in which the curve is
traded by market participants. A fair place to start might be to take the two-year point as the point of rotation and the
10-year point as the point to measure the change in basis points.


a larger increment than a highly nonlinear portfolio with some instruments whose payouts might
be discontinuous in the risk factor.
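
A sketch of such a ladder for a single equity factor follows; the valuation function, the volatility and the rung spacing are hypothetical placeholders.

    import numpy as np

    def portfolio_value(equity_shock):
        # Hypothetical valuation: a linear holding plus a short call-like kink.
        return 1_000_000 * (1 + equity_shock) - 300_000 * max(equity_shock - 0.05, 0.0)

    vol = 0.20                              # annualised volatility of the factor
    horizon = 10 / 252                      # ten trading days
    rungs = np.linspace(-4, 4, 17)          # -4 to +4 sigma in half-sigma steps

    base = portfolio_value(0.0)
    for s in rungs:
        shock = s * vol * np.sqrt(horizon)
        print(f"{s:+.1f} sigma: P&L = {portfolio_value(shock) - base:,.0f}")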

III.A.4.7.5 Hybrid Methods


Kupiec (1998) proposes a methodology that is a particular hybrid of covariance matrix
manipulation and economic scenarios. His approach can also be applied to the problem of
missing historical data in specifying shocks to be used in a historical scenario. In his approach,
which he calls ‘stress VaR’, the risk manager ‘can specify partial “what if” scenarios and use the
VaR structure to specify the most likely values for the remaining factors in the system’.

Assume that the risk manager wishes to specify the shocks to a subset of the market risk factors.
Using the covariance matrix of factor returns and this subset of fixed shocks, it is then possible
to compute a conditional mean vector and a conditional covariance matrix for the remaining risk
factors. Using the resulting conditional distribution of factor returns (the factors with fixed
shocks have means given by the shocks, zero variance, and hence zero correlations), a
conditional, ‘stress’ VaR can then be calculated. Kupiec goes on to demonstrate how this
approach can be generalised to the case in which selected variances and covariances are given
prespecified shocks as well. Assuming that market risk factors are jointly normally distributed
makes the approach very tractable.
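
The conditioning step can be sketched as follows under the joint normality assumption; the covariance matrix, the choice of which factor is fixed and the size of its shock are illustrative.

    import numpy as np

    def conditional_normal(mu, cov, fixed_idx, fixed_values):
        # Mean and covariance of the remaining factors, conditional on the factors
        # in fixed_idx taking the values in fixed_values (jointly normal returns).
        free_idx = np.setdiff1d(np.arange(len(mu)), fixed_idx)
        s11 = cov[np.ix_(fixed_idx, fixed_idx)]
        s12 = cov[np.ix_(fixed_idx, free_idx)]
        s21 = cov[np.ix_(free_idx, fixed_idx)]
        s22 = cov[np.ix_(free_idx, free_idx)]
        w = s21 @ np.linalg.inv(s11)
        cond_mu = mu[free_idx] + w @ (np.asarray(fixed_values) - mu[fixed_idx])
        cond_cov = s22 - w @ s12
        return free_idx, cond_mu, cond_cov

    mu = np.zeros(3)
    cov = np.array([[0.04, 0.02, 0.01],
                    [0.02, 0.09, 0.03],
                    [0.01, 0.03, 0.16]])
    # Fix a -10% shock to the first factor and condition the other two on it.
    print(conditional_normal(mu, cov, np.array([0]), [-0.10]))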

Using the unconditional covariance matrix as the starting point for this approach is potentially
very limiting, if it is true that in crisis periods there is a structural change in the relationships
among market risk factors. However, in applying this approach, it is not necessary to create the
conditional return distribution from the same covariances as are used in an unstressed VaR. Any
covariance matrix may be taken as a starting point, such as the historical covariance matrix
modified in the manner of Finger (1997) discussed above. Alternatively, Kim and Finger (2000)
employ Kupiec’s method after first estimating a stress-environment covariance matrix. They
assume that returns are drawn from a mixture of two normals, one the stress environment and
the other a normal environment, calculating the ‘stress VaR’ using the estimated parameters of
the stress-environment return distribution. Perhaps more simply, the stress VaR could be
calculated using the covariance matrix estimated from a particular crisis period.

III.A.4.8 Algorithmic Approaches to Stress Testing


One of the deficiencies of historical and hypothetical stress scenarios is that the user has only a
fuzzy level of confidence that the full extent of the potential badness for the portfolio has been
exposed. A more systematic approach might yield a greater level of comfort. The goal is to


create a search algorithm to identify the worst outcome for the portfolio within some defined
feasible set; that is to say, an optimisation is performed. The key issues with such approaches are
the following:
• Are the relationships (e.g., correlations or other measures of interdependence) used in the
optimisation relevant for identifying worst-case scenarios?
• Is the algorithm capable of identifying the globally worst-case outcome in the feasible set?
• Does the feasible set include implausible outcomes, and are those outcomes represented
disproportionately in the optimal results?

Two approaches are discussed below, namely, factor-push and maximum loss.

III.A.4.8.1 Factor-Push Stress Tests


This type of stress test is so named because it involves ‘pushing’ each individual market risk
factor in the direction that results in a loss for the portfolio. Construction is straightforward.
1. A push magnitude, m, is selected; it can be stated as a number of standard deviations.
For example, each market risk factor, r, may be pushed four standard deviations. The
magnitude chosen is, ultimately, subjective. It may be chosen with reference to (an
average of) observed movements in market prices during some significant historical
event. Or it may be chosen to correspond to some quantile of returns based on an
assumed distribution (or perhaps an empirical distribution quantile). If you are setting
the push magnitude using standard deviations or quantiles, the period over which these
are estimated must be chosen as well. If you are using unconditional estimates, then one
year of daily return history is good, if available. When using a methodology for
estimating volatility conditionally, such as GARCH, then it is good to use as much return
history as there is reliable data.
2. The portfolio, P, is revalued twice by applying shocks, s, to a single market risk factor:
once applying a shock s+ = (+1 × m), and once applying a shock s– = (–1 × m).
3. The two portfolio revaluations are compared and the shock resulting in the lower
portfolio value is adopted for the stress test.
4. Steps 2 and 3 are repeated for each of the N market risk factors affecting the portfolio.
5. The portfolio is revalued once more, this time simultaneously applying the shocks
selected in the prior steps for each risk factor.

Consider the following example. A portfolio consists of a long position of 1000 shares in IBM
and a short position of 1700 shares in GM. On 23 February 2004 the closing prices of the two
equities were $95.96 and $47.51, respectively. Their respective daily standard deviations of return
(based on one year of closing prices) were 0.005955 and 0.006972, respectively. Set the push
magnitude to 6 (more on this below). The shock adopted for IBM will be –1, because the position is long
and the return is linear in the price change, so the loss comes from a fall in the price. The shock
adopted for GM will be +1, by analogous reasoning for the short position. The factor-push
portfolio stress loss is then $6807.09.
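
The arithmetic of the two-stock example is a one-line sum per position; the sketch below reproduces it, with small differences from the quoted figure attributable to rounding of the inputs.

    # Factor-push loss for the two-stock example above.
    positions = {"IBM": 1000, "GM": -1700}          # shares (negative = short)
    prices    = {"IBM": 95.96, "GM": 47.51}         # closing prices, 23 Feb 2004
    stdevs    = {"IBM": 0.005955, "GM": 0.006972}   # daily return standard deviations
    m = 6                                           # push magnitude

    loss = 0.0
    for name, qty in positions.items():
        value = qty * prices[name]
        adverse = -1 if value > 0 else +1           # push each factor against the position
        loss += value * adverse * m * stdevs[name]
    print(round(loss, 2))                           # approximately -6807 (a loss)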

The selection of the push magnitude is subjective and ignores correlations. Note that the largest
one-day move downward in IBM in the five years ending 23 February 2004 is –9.31%, or about
15.63 times the estimated standard deviation of IBM. The third largest move downward is still
7.78 times the standard deviation. The largest one-day move downward in GM in the same
period is –6.31%, or 9.06 times the estimated standard deviation of GM. The third largest move
down in GM is 5.35 times the standard deviation.

If we assume, as in a historical simulation, that any of the one-day IBM–GM return pairs
observed over that five-year period were equally possible when looking forward one day, then the
potential worst-case portfolio loss would be $22,175.00, much bigger than the factor-push loss.
Of course, the factor-push loss can be made arbitrarily large by increasing the push magnitude.
However, the goal is not to create an arbitrarily large loss, but rather to estimate a plausible, if
unlikely, loss.

Small perturbations of the push magnitudes, if applied to individual positions, can result in
greater estimated losses. For example, if the magnitude applied to IBM is reduced to 5.99 and
the magnitude applied to GM is increased to 6.01, the factor-push loss will increase (slightly),
because GM has the greater estimated standard deviation. This fact highlights the implicit
assumption that the marginal return distributions have the same shape and thus that the marginal
probabilities of the moves are equal. It may be better to select push magnitudes position by
position, based on the volatility of each factor.

A simple thought experiment also illustrates that the factor-push loss need not be greater than
the losses from all unambiguously smaller moves, if some of the portfolio positions have returns
that are nonlinear functions of the market risk factors. A long option straddle has its least payout
associated with small moves in the underlying market factor. Since the factor-push method only
‘searches’ among large moves for the worst outcomes, the factor-push method will be less useful
in such cases.

This type of stress test has a variety of other drawbacks as well. Most importantly, these stress
tests ignore correlations among risk factors. As a result, the specification of the factor changes
employed in the stress test may imply highly implausible market dynamics. For example,
neighbouring tenor points of a given yield curve are generally very highly positively correlated.
Consider a situation where the push factor is 4 standard deviations, and the stress shock at the


three-month point on a given yield curve is –1, while at the adjacent six-month point the shock is
+1. Such a scenario fails the ‘laugh test’. Also, cross effects in instruments whose values are
affected by multiple risk factors are ignored. For example, for an option on an equity, the option
delta is itself related to the volatility of the equity. Thus the correct choice of shock, +1 or –1,
for the equity price is dependent on the choice of shock for the volatility. Choosing the push
factor based on the standard deviation, or even the VaR, may defeat the purpose of the test. If it
is the case that extreme events are, in effect, observations from a distribution that is distinct from
that which is experienced most of the time, there may be no obvious choice of push factor for
the portfolio that ‘maps’ from the estimated standard deviations to a stress environment. The
particular set of shocks that are chosen can change with every run of the stress test. This makes
it more difficult to communicate intuition about the nature of the shocks that generated the
observed stress-test result.

III.A.4.8.2 Maximum Loss


A maximum loss scenario is defined by the set of changes in market risk factors that results in the
greatest portfolio loss, subject to some feasibility constraint on the allowable changes in market
risk factors (see Studer, 1995, 1997). The constraint is necessary because potential portfolio
losses need not be bounded, and thus, absent a constraint, the result may lack plausibility. Note
that the maximum loss scenario is dependent on the structure of the portfolio, and not related to
any particular economic environment, except as an environment is reflected in the statistical
description of market factor returns employed in the analysis. Thus it is well adapted to answer a
question such as 'just how bad can things plausibly get?'11

One natural method of constraining the set of feasible risk factors is to consider only those
scenarios that have a likelihood in excess of some small probability. Being able to associate a
probability with a stress-test loss is potentially very useful. To proceed down this path it is
necessary to be able to state joint probabilities of specific sets of risk factor changes, which in
turn requires that some statements be made about the co-movements of risk factors. Those
statements will be based on either historical or simulated data and can be conditional (e.g.,
stressed) or unconditional (‘normal’).

Note that, in general, to apply this approach, it is necessary to search through the entire space of
risk factor changes with joint probability less than or equal to the plausibility constraint. With

11 Boudoukh et al. (1995) propose a risk measure they call the worst-case scenario (WCS). More akin to conditional
VaR than stress testing, WCS is the expected value of the distribution of maximum loss over some time horizon, for
example, the expected worst-day portfolio P&L over a month. It is also related to the extreme-value theory method
discussed below.


positions whose values are nonlinear functions of risk factors in the portfolio, the maximum loss
need not occur at the limits of the allowable risk factor changes. Breuer and Krenn (2000)
suggest Monte Carlo simulation (or some quasi-random method) as a natural method to use in
this regard. A naïve approach would be to run the Monte Carlo for a predetermined sample size,
throw out any draw of (an n-tuple of) factor returns with a corresponding theoretical probability
less than the plausibility constraint, and pick from the remaining sampled (n-tuples of) factor
returns the one with the largest associated portfolio loss.

Practically, however, too many simulated risk factor vectors would be required to attain a high
level of confidence that the maximum loss had been identified using this approach. Quasi-
random methods may ensure a more efficient sampling of the feasible return space. However,
since a complex portfolio may have many local return minima and possibly many discontinuities
in the portfolio return function, even quasi-random methods may not make the search for that
maximum loss tractable. To employ this approach in a practical manner, it may be necessary to
develop a ‘smart’ search algorithm that, based on the qualities of the positions in the portfolio,
makes guesses as to which parts of the return space can be ignored.12
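
A naive version of the Monte Carlo search can be sketched as follows; the two-factor covariance matrix, the plausibility radius and the nonlinear P&L function are all illustrative placeholders rather than a recommended specification.

    import numpy as np

    rng = np.random.default_rng(1)
    cov = np.array([[0.0004, 0.0002],
                    [0.0002, 0.0009]])
    chol = np.linalg.cholesky(cov)
    inv_cov = np.linalg.inv(cov)

    def pnl(shock):
        # Hypothetical nonlinear portfolio P&L in two risk factors.
        return -50_000 * shock[0] + 20_000 * shock[1] - 400_000 * shock[0] ** 2

    radius = 3.0                       # plausibility constraint, in 'standard deviations'
    worst_loss, worst_shock = 0.0, None
    for _ in range(50_000):
        shock = chol @ rng.standard_normal(2)
        if shock @ inv_cov @ shock > radius ** 2:   # discard implausible draws
            continue
        p = pnl(shock)
        if p < worst_loss:
            worst_loss, worst_shock = p, shock
    print(worst_loss, worst_shock)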

III.A.4.9 Extreme-Value Theory as a Stress-Testing Method


Extreme-value theory (EVT) is based on limit laws which apply to the extreme observations in a
sample. These laws allow parametric estimation of high quantiles of loss (negative return)
distributions without making any (substantial) assumptions about the shape of the return
distribution as a whole. The application to stress testing is immediate (sort of). Firstly, it must be
assumed that the historical data employed in the estimation of the tail are representative (i.e., are
drawn from the distribution relevant for examining the stress event). Secondly, the size of the
historical data sample must be large enough to yield enough ‘tail’ observations to get good
estimates of the tail distribution’s parameters.

Two flavours of EVT have been employed in looking at risk measurement in finance. The first is
called the ‘block maxima’ approach. An example is the set of yearly maxima of negative daily
returns on the S&P500 over some period. The second method is the ‘peak-over-threshold’
approach, illustrated by the greatest z (i.e., some natural number of) negative daily returns on the
S&P500 over the same period. It is apparent that the two methods have somewhat different
definitions of extreme events. The block maxima approach may be better suited to estimation of
stress losses and the peak-over-threshold approach may be better suited to VaR estimation. If we
take the block to be a year, then we can use this approach to find the loss that is expected to be

12 Gonzalez-Rivera (2003) develops an interesting related approach.


exceeded once in every k years. The details of this approach are beyond the scope of this
chapter. For an application of the approach to stress testing, see Cotter (2000).
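
A sketch of the block-maxima calculation is given below: fit a generalised extreme-value (GEV) distribution to yearly maxima of daily losses and read off the loss expected to be exceeded once every k years. The loss series is simulated purely for illustration.

    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(2)
    daily_losses = rng.standard_t(df=4, size=(25, 252)) * 0.01   # 25 simulated 'years'
    yearly_maxima = daily_losses.max(axis=1)                     # block maxima

    shape, loc, scale = genextreme.fit(yearly_maxima)
    k = 10
    return_level = genextreme.ppf(1 - 1 / k, shape, loc=loc, scale=scale)
    print(return_level)     # loss exceeded, on average, once every 10 years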

III.A.4.10 Summary and Conclusions


The main points of this chapter are as follows:
• Stress testing is perceived as being a useful supplement to value-at-risk because value-at-risk
does not convey complete information about the risk in a portfolio.
• Stress tests should be part of a rational economic approach to decision making. However,
in practice, most uses of stress results are somewhat ad hoc.
• Useful stress tests must represent plausible, if unlikely, factor changes. Creating useful stress
tests requires detailed consideration of the potential behaviour of the market risk factors,
individually and jointly, and an understanding of structural relationships among risk factors
(e.g., no-arbitrage requirements and spread relationships).
• Stress-test methods generally fall into three categories: historical scenarios, hypothetical
scenarios and algorithms.
  – Historical scenarios seek to re-create a particular economic environment from the past.
  – Hypothetical scenarios can represent a complete, but not yet experienced, economic
    story or simply a set of ad hoc factor movements.
  – Algorithms attempt to systematically identify the set of factor changes (within some
    bounds) that gives the worst-case portfolio loss. In the case of the factor-push method,
    the result may not represent a plausible economic story.
• There is no best or right type of stress test. The context in which the results will be used
should determine the approach to be taken in stress testing.

Further Reading
Most of the papers listed here and in the References following may be found on
www.GloriaMundi.org

Basel Committee on Banking Supervision (1999) Recommendations for public disclosure of
trading and derivatives activities of banks and securities firms. Mimeo (October).

Bouyé, E (2002) Multivariate extremes at work for portfolio risk management. Working Paper
(January).

Bouyé, E, Durrleman, V, Nikeghbali, A, Riboulet, G, and Roncalli, T (2000) Copulas for finance:
A reading guide and some applications. Working Paper (July).

Bouyé, E, Durrleman, V, Nikeghbali, A, Riboulet, G, and Roncalli, T (2001) Copulas: An open
field for risk management. Working Paper (March).


Čihák, Martin (2004a) Designing stress tests for the Czech banking system. Czech National
Bank, mimeo (March).

Čihák, Martin (2004b) Stress testing: A review of key concepts. Czech National Bank, mimeo
(February).

Costinot, A, Riboulet, G, and Roncalli, T (2000) Stress testing et théorie des valeurs extrêmes:
Une vision quantifiée du risque extrême. Working paper, Credit Lyonnais (September).

Hong Kong Monetary Authority (2003) Stress testing. Supervisory Policy Manual IC-5
(February).

International Association of Insurance Supervisors (2003) Stress testing by insurers. Guidance
paper (October).

Jouanin, J-F, Riboulet, G, and Roncalli, T (2004) Financial applications of copula functions. In G
Szego (ed.), Risk Measures for the 21st Century. New York: Wiley.

Oesterreichische Nationalbank (1999) Guidelines on market risk, Volume 5: Stress testing.
Mimeo.

Wee, L-S, and Lee, J (1999) Integrating stress testing with risk management. Bank Accounting &
Finance (Spring), pp. 7–19.

References
Basel Committee on Banking Supervision (1994) Risk management guidelines for derivatives.
Mimeo (July).

Basel Committee on Banking Supervision (1995) An internal model-based approach to market
risk capital requirements. Mimeo (April).

Basel Committee on Banking Supervision (1996) Amendment to the capital accord to
incorporate market risks. Mimeo (January).

Basel Committee on Banking Supervision (2002) Basel Committee reaches agreement on new
capital accord issues. Press release (10 July).

Berkowitz, J (1999) A coherent framework for stress testing. Journal of Risk, 2 (Winter), pp. 1–
11.

Blaschke, W, Jones, M T, Majnoni, G, and Martinez Peria, S (2001) Stress testing of financial
systems: An overview of issues, methodologies and FSAP experiences. Mimeo (June).

Borio, C (2000) Market liquidity and stress: Selected issues and policy implications. BIS Quarterly
Review (November), pp. 38–51.

Boudoukh, J, Richardson, M, and Whitelaw, R (1995) Expect the worst. Risk, 8(9), pp. 100–101.

Boyer, B, Gibson, M, and Loretan, M (1999) Pitfalls in tests for changes in correlations. Working
paper, Federal Reserve Board.

Breuer, T, and Krenn, G (2000) Identifying stress test scenarios. Working paper.


Cherubini, U, and Della Longa, G (1999) Stress testing techniques and value-at-risk measures: A
unified approach. Rivista di Matematica per le Scienze Economiche e Sociali, 22(1/2), pp. 77–99.

Committee on the Global Financial System (2001) A survey of stress tests and current practice at
major financial institutions. Mimeo (April).

Comptroller of the Currency (1993) Banking Circular 277: Risk management of financial
derivatives. Mimeo (October).

Corsetti, G, Pericoli, M, and Sbracia, M (2002) Some contagion, some interdependence. Working
paper, Yale University.

Cotter, J (2000) Crash and boom statistics for global equity markets. Working paper, University
College Dublin.

Culp, C and Miller, M (eds) (1999) Corporate Hedging in Theory and Practice: Lessons from
Metallgesellschaft. London: Risk Publications.

Davis, E P (2003) Towards a typology for systematic financial instability. Working paper, Brunel
University (November).

Derivatives Policy Group (1995) Framework for voluntary oversight. Mimeo (March).

Dungey, M, and Zhumabekova, D (2001) Testing for contagion using correlations. Working
paper, Australian National University.

Fender, I, and Gibson, M (2001a) The BIS census on stress tests. Risk, 14(5), pp. 50–52.

Fender, I, and Gibson, M (2001b) Stress testing in practice: A survey of 43 major financial
institutions. BIS Quarterly Review (June), pp. 58–62.

Fender, I, Gibson, M, and Mosser, P (2001) An international survey of stress tests. Current Issues
in Economics and Finance, 7(10), pp. 1–6.

Finger, C (1997) A methodology to stress correlations. RiskMetrics Monitor, 3-12.

Forbes, K, and Rigobon, R (2002) No contagion, only interdependence. Journal of Finance, 43.

Frye, J (1997) Principals of risk: Finding value-at-risk through factor-based interest rate scenarios.
In S Grayling (ed.), Understanding and Applying Value at Risk. London: Risk Books.

Gay, G, Kim, J, and Nam, J (1999) The case of the SK Securities and J.P. Morgan swap: Lessons
in VaR frailty. Derivatives Quarterly, 5(Spring), 13–26.

Gilbert, C L (1996) Manipulation of metals futures: Lessons from Sumitomo. Working paper,
University of London (November).

Global Derivatives Study Group (1993) Derivatives: Practices and Principles. Washington, DC:
Group of Thirty

Gonzalez-Rivera, G (2003) Value in stress: A coherent approach to stress testing. Journal of Fixed
Income (September), pp. 7–18.

Greenspan, A (2003) Corporate governance. Speech at the Conference on Bank Structure and
Competition (May).

Copyright © 2004 B. Schachter and The Professional Risk Managers’ International Association. 33
The PRM Handbook – III.A.4 Stress Testing

Hickman, A, and Jameson, R (2001) Benchmarking the US attack crisis. ERisk.com


(September).

Higham, N J (2002) Computing the nearest correlation matrix – a problem from finance. IMA
Journal of Numerical Analysis, 22, pp. 329–343.

Hoggarth, G, and Whitley, J (2003) Assessing the strength of UK banks through macroeconomic
stress tests. Financial Stability Review, 3(6), pp. 91–103.

International Monetary Fund (2003) Analytical tools of the FSAP. Mimeo (February).

Jeffery, C (2003) The ultimate stress test: Modeling the next liquidity crisis. Risk (November).

Jorion, P (1995) Big Bets Gone Bad. Amsterdam: Elsevier Science.

Jorion, P (2002) Risk management in the aftermath of September 11. Working paper, University
of California-Irvine (April).

Kim, J, and Finger, C (2000) A stress test to incorporate correlation breakdown. Journal of Risk,
2(1).

Kupiec, Paul (1998) Stress testing in a value at risk framework. Journal of Derivatives, 6, pp. 7–24.

Loretan, M, and English, W (2001) Evaluating correlation breakdown during periods of market
volatility. Working paper.

Malz, A, and Mina, J (2001) Risk management in the aftermath of the terrorist attack. Working
paper, RiskMetrics Group (September).

MacKenzie, D (2003) An equation and its words: Bricolage, exemplars, disunity and
perfomativity in financial economics. Working paper.

Overdahl, J and Schachter, B (1995) Derivatives regulation and financial management: Lessons
from Gibsons Greetings. Financial Management, 24, pp. 68–78.

President’s Working Group on Financial Markets (1999) Hedge funds, leverage, and the lessons
of Long Term Capital Management. Mimeo.

Rebonato, R, and Jäckel, P (1999) The most general methodology to create a valid correlation
matrix for risk management and option pricing purposes. Journal of Risk, 2 (2).

Schachter, B (1998a) The value of stress testing in market risk management. Derivatives Risk
Management (March).

Schachter, B (1998b) Move over VaR. The Financial Survey (May), 12–14.

Schachter, B (2000a) How well can stress testing complement VaR? In, T. Haight, Derivatives
Risk Management. (Arlington, Virginia) A. S. Pratt & Sons.

Schachter, B (2000b) Stringent stress tests. Risk, 13, (December), pp. S22–S24.

Securities and Futures Authority Limited (1995) Value-at-risk models. Board Notice 254, 26
May.

Shaw, J (1997) Beyond VaR and stress testing. In VaR: Understanding and Applying Value-at-Risk.
New York: Risk Publications, pp. 211–224.

Copyright © 2004 B. Schachter and The Professional Risk Managers’ International Association. 34
The PRM Handbook – III.A.4 Stress Testing

Studer, G (1995) Value at risk and maximum loss optimization. Working paper, ETHZ
(December).

Studer, G (1997) Maximum loss for measurement of market risk. Doctoral thesis, Swiss Federal
Institute of Technology.

Turkay, S, Epperlein, E, and Nicos, C (2003) Correlation stress testing for value-at-risk. Journal of
Risk, 5(4), pp. 75–89.

Zangari, P (1997a) Exploratory stress-scenario analysis with applications to EMU. RiskMetrics


Monitor, Special Edition, pp. 31–54.

Zangari, P (1997b) Catering for an event. Risk (July), pp. 34–36.

Copyright © 2004 B. Schachter and The Professional Risk Managers’ International Association. 35
The PRM Handbook – III.B.2 Foundations of Credit Risk Modelling

III.B.2 Foundations of Credit Risk Modelling


Philipp Schönbucher1

1 D-MATH, ETH Zürich, Rämistrasse 101, 8092 Zürich

III.B.2.1 Introduction
This chapter introduces the three basic components of a credit loss: the exposure, the default
probability and the recovery rate. We define each of these as random processes, that is to say, their
future values are not known. Instead, their values are represented as a probability distribution of
a random variable. The credit loss is then defined as the product of these three random
quantities. The credit loss distribution for a large portfolio of credits can become quite
complex and we shall see in Chapter III.B.5 that many advanced techniques are available for
portfolio modelling. In this chapter we merely aim to provide an introduction to this distribution.

Section III.B.2.2 makes the definition of default risk precise, and Sections III.B.2.3 and III.B.2.4
introduce the three basic processes and the credit loss distribution. Section III.B.2.5 makes a
careful distinction between expected and unexpected loss, and Section III.B.2.6 gives a detailed
discussion of recovery rates for a portfolio of credits. Section III.B.2.7 summarizes and
concludes.

III.B.2.2 What is Default Risk?


Default risk is the risk that a counterparty does not honour his obligations. Such an obligation may
be a payment obligation, but it is also a default if a supplier does not deliver the parts he
promised to deliver, or if a contractor does not render the services that he promised. In this wide
context, default risk is everywhere, and there is no transaction that does not involve the risk that
one of the two parties involved does not deliver.

For a financial institution the largest and most important component of default risk refers to
payment obligations such as loans, bonds, and payments arising from over-the-counter
derivatives transactions. This risk of a payment default, in particular when it refers to loans and
bonds, is called credit risk. We distinguish:

• Default: An obligation is not honoured.
• Payment default: An obligor does not make a payment when it is due. This can be:
   o Repudiation: Refusal to accept a claim as valid.
   o Moratorium: Declaration to stop all payments for some period of time. Usually
     only sovereigns can afford to do this.
   o Credit default: Payment default on borrowed money (loans and bonds).
• Insolvency: Inability to pay (even if only temporary).
• Bankruptcy: The start of a formal legal procedure to ensure fair treatment of all creditors
  of a defaulted obligor.

For instance, an obligor may be:


• in default but not in payment default (e.g. if he defaults on a non-financial obligation),
• in payment default but not insolvent (e.g. if he could pay, but chooses not to), and
• in default but not in bankruptcy (e.g. if the bankruptcy procedures have not been started
  (yet) or if there are no bankruptcy procedures, e.g. if the obligor is a sovereign).

Strictly speaking, default risk is tied to the obligation it refers to, and a priori there is no reason
why an obligor should not default on one obligation and honour another. Fortunately, in most
cases we can rely on the existence of a functioning legal system which ensures that such selective
defaults are not possible. The creditor of the defaulted obligation has the right to go to a court
which (eventually) will force the obligor to honour his obligation (if this is possible) or – if the
obligor is generally unable to do so – instigate a formal bankruptcy procedure against him. The
aim of this procedure is an orderly and fair settlement of the creditors’ claims and possibly also
other social priorities such as the preservation of jobs. The details of the bankruptcy procedure
depend on the applicable local bankruptcy law and will vary across countries. Thus, the existence
of a bankruptcy code allows us to speak of the credit risk of an obligor and not just of an
individual obligation, which we will do in the following.

III.B.2.3 Exposure, Default and Recovery Processes


To analyse the components of default risk in more detail we now need to introduce some
notation. We consider a set of I obligors indexed with i = 1, … , I, and we call τi the time of
default of the ith obligor. The following three processes describe the credit risk of obligor i:

• Ni(t): The default indicator process. At time t, Ni(t) takes the value one if the default of
  obligor i has occurred by that time, and zero if the obligor is still alive at time t.
  Obviously knowledge of the full path of the default indicator process is equivalent to
  knowledge of the exact time of default of the obligor, but frequently we only have partial
  knowledge (i.e. the obligor has not defaulted so far), or we are only interested in a partial
  event (i.e. default before the maturity of a loan).

• Ei(t): The exposure process. The exposure at default (EAD) to obligor i at time t is the total
  amount of the payment obligations of obligor i at time t which would enter the
  bankruptcy proceedings if a default occurred at time t. The exposure process will be
  covered in detail in Chapter III.B.3.

• Li(t): The loss given default (LGD) of obligor i, given default at time t. The LGD usually
  takes values between zero and one, and Ri(t) = 1 – Li(t) is known as the recovery rate of
  the obligor. The LGD is often less than one, reflecting the fact that some proportion of
  the exposure at default may be recovered in bankruptcy proceedings. We will discuss
  recovery rates in more detail in Section III.B.2.6.

Let pi(T) be the individual probability of default (PD) of obligor i until some time horizon T. For
instance, if the one-year probability of default is pi(1) = 0.01%, this means that there is a 1/10,000
chance that the obligor will default at some point in time over the next year.

Using these processes, the losses due to defaults of obligor i before time T can be represented as
follows:

Default loss = Default arrival × EAD × LGD

or, in our mathematical notation where τ denotes the time of default of the obligor,

Di(T) = Ni(T) × Ei(τ) × Li(τ).   (III.B.2.1)

For a fixed time-horizon T, the default indicator Ni(T) is a binary (0/1) variable which allows one
to capture the default arrival risk, that is, the risk of whether a default occurs at all. A fixed time-
horizon (typically one year) is a common point of view in credit risk management, but in many
cases this is not sufficient and the timing risk of defaults must also be considered.
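
A minimal simulation sketch may help to fix ideas. The parameter values below are purely illustrative (they are not taken from the text); the code simply draws the default indicator and applies equation (III.B.2.1) with a constant exposure and LGD:

```python
import numpy as np

# Illustrative single-obligor parameters (assumed values, not from the text)
p = 0.02            # one-year default probability p_i(1)
ead = 10_000_000    # exposure at default E_i, taken as constant
lgd = 0.60          # loss given default L_i (i.e. a 40% recovery rate)

rng = np.random.default_rng(42)
n_scenarios = 100_000

# Default indicator N_i(1): 1 with probability p, 0 otherwise
default_indicator = rng.random(n_scenarios) < p

# Default loss per scenario, as in D_i(T) = N_i(T) * E_i * L_i
loss = default_indicator * ead * lgd

print("Average simulated loss:", loss.mean())
print("p * EAD * LGD         :", p * ead * lgd)
```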

III.B.2.4 The Credit Loss Distribution


For a bank, individual credit defaults of obligors are not unusual events and – although painful
and inconvenient – these events are part of the normal course of business. If the exposure is not
too large it can be buffered using normal operating cash flows. But when multiple defaults occur
simultaneously (or within a short time span) this can threaten the existence of a financial
institution. Thus, a major task of the credit risk manager is to measure and control the risk of
losses from a whole portfolio of credits. For instance, suppose the credit losses are correlated, so
that a default from one obligor makes default of other obligors more likely. Then there is a
concentration risk in the portfolio that needs to be managed.


Using the definition of individual default losses (III.B.2.1), the portfolio loss D(T) at time T is
defined as the sum of the individual credit losses Di(T), summed over all obligors i = 1, … , I:
D(T) = Σi Di(T) = Σi Ni(T) · Ei(τi) · Li(τi).   (III.B.2.2)

The individual default losses Di(T) are unknown in advance. Thus the full portfolio loss D(T) is a
random variable. The portfolio’s credit loss distribution is the probability distribution of this random
variable (see Chapter II.E).
F(x) := P[D(T) ≤ x].   (III.B.2.3)

An important ingredient of the distribution of D(T) is the dependency between the individual
losses. In Chapter III.B.4 some popular models are presented which show how default
correlation can be modelled. Here we only mention two important points:

(a) Assuming independence between individual default losses almost always leads to a
gross underestimation of the portfolio’s credit risk.

(b) The default correlation parameters typically have a strong influence on the tail of the
loss distribution – and thus on the value at risk (VaR).

Figure III.B.2.1: Density function of the portfolio loss of a typical loan portfolio. Mean
loss (expected loss), 99% VaR, and 99.9% VaR are shown as vertical lines
[Figure: probability (vertical axis, 0–14%) plotted against portfolio loss (horizontal axis, 0–400m).]


Figure III.B.2.1 shows the density function of the credit loss distribution of a typical credit
portfolio using a hypothetical portfolio of 100 obligors with 10m exposure each (i.e. 1bn total
portfolio volume). We have used a CreditMetrics type of model (see Section III.B.5.3) with
individually varying unconditional default probabilities, 50% assumed recovery rate and a
constant asset correlation of 20%.
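
Figure III.B.2.1 was generated with a CreditMetrics-type model; a stripped-down, one-factor Gaussian simulation in the same spirit is sketched below. The portfolio size, exposure, recovery rate and asset correlation follow the description above, but the flat 2% default probability is a simplifying assumption of the sketch (the figure uses individually varying PDs), so the resulting numbers will differ from those quoted for the figure:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

n_obligors = 100
exposure = 10.0          # 10m per obligor, 1bn total portfolio volume
recovery = 0.50          # assumed recovery rate, so LGD = 0.5
rho = 0.20               # asset correlation
pd_ = np.full(n_obligors, 0.02)    # assumption: flat 2% PD for all obligors
threshold = norm.ppf(pd_)          # default thresholds for the asset returns

n_sims = 50_000
# One-factor model: asset return A_i = sqrt(rho)*M + sqrt(1-rho)*eps_i
M = rng.standard_normal((n_sims, 1))
eps = rng.standard_normal((n_sims, n_obligors))
asset = np.sqrt(rho) * M + np.sqrt(1.0 - rho) * eps

defaults = asset < threshold                                   # default indicators N_i(T)
losses = defaults.sum(axis=1) * exposure * (1.0 - recovery)    # portfolio loss D(T), in m

print("Expected loss      :", losses.mean())
print("99%   VaR quantile :", np.quantile(losses, 0.99))
print("99.9% VaR quantile :", np.quantile(losses, 0.999))
```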

Credit portfolio loss distributions have several features that distinguish them from the ‘profit and
loss’ distributions (or returns distributions) from market variables such as equities, interest rates
or FX markets:

• The distribution is not symmetrical. The 'upside' is limited (the best possible case is a credit
  loss of zero), while the downside can become extremely large.
• The distribution is highly skewed. Most of the probability mass is concentrated around the
  low loss events (e.g. losses between 0 and 50 in Figure III.B.2.1). These are also the
  events that we are most likely to observe in historical data sets. In the example of Figure
  III.B.2.1, losses will be less than 60m in 80% of the cases.
• The distribution has heavy tails. This means that the probability of large losses decreases
  very slowly, and VaR quantiles are quite far out in the tail of the distribution. This can
  also be seen from Figure III.B.2.1.

III.B.2.5 Expected and Unexpected Loss


The ‘standard scenario’ which people intuitively expect to happen when they consider the default
risk of an obligor is ‘no default’, that is, no loss. This is indeed the most likely scenario if we
consider each obligor individually (unless we consider an obligor of extremely low credit quality).
This scenario is also still used quite frequently for accounting purposes: a loan or bond is booked
at its notional value (essentially assuming zero loss), and only if it gets into distress is it
depreciated. In some institutions return on capital is still (incorrectly) calculated this way.

Unfortunately, this is one of the cases where naive intuition can lead us astray: the 'standard
scenario’ is not the mathematical expectation of the loss on the individual obligor. If we assume
that exposure Ei and loss given default Li are known and constant, the expected loss is

E[Di(T)] = pi(T) · Ei · Li ≠ 0,   (III.B.2.4)

where pi(T) is the default probability of the obligor.


At the level of individual obligors in isolation, the concept of expected loss may be counter-
intuitive at first because we will never observe a realisation of the expected loss: either the obligor
survives (then the realised loss will be zero) or the obligor defaults (then we will have a realised
loss which is much larger than the expected loss).

There is a related trick question. Next time you go out, offer to buy your friend a drink if the next
person entering the bar does not have an above average number of legs. Of course your friend
will have to buy you a drink if the person does indeed have an above average number of legs.
You will win the bet if the next person has two legs: the average number of legs per person in the
population must be slightly less than two because there are some unfortunate people who have
lost one or both legs (but there are no people with more than two legs).

The same idea applies to credit obligors: most obligors will perform better than expected (they
will not default), but there are some who perform significantly worse than expected. But nobody
will perform exactly as expected.

Typically, the expected loss is small (because pi will be small) but it is positive. These small errors
will accumulate when we consider a portfolio of many obligors. In a portfolio of 1000 obligors
we may no longer assume that none of these obligors will default. Even if each of the obligors
has a default probability of only 1%, we will have to expect 10 defaults. In Figure III.B.2.1 the
level of the expected loss of the portfolio is shown by the first (leftmost) vertical line; it is at a
level of about 35m, that is, 3.5% of the portfolio’s notional amount.

Expected loss is an important concept when it comes to performance measurement, in particular
in connection with risk-adjusted return on capital (RAROC) calculations (see Chapter III.0).
When a loan’s expected gain (in terms of excess earnings over funding and administration costs)
is not sufficient to cover the expected loss on this loan, then the transaction should not be
undertaken. Suffering the expected loss (in particular, on a portfolio) is not bad luck: it is what
you should expect to happen. Consequently, the expected loss should be covered from the
portfolio’s earnings. It should not require capital reserves or the intervention of risk management.

The expected loss on a portfolio is the sum of the expected losses of the individual obligors:2
E[D(T)] = Σi E[Di(T)].   (III.B.2.5)

2 This follows from the property of the expectation operator: E(X + Y) = E(X) + E(Y) – see Chapter II.E.


However, this simple summation property will not hold for the unexpected loss! Unexpected loss is
usually defined with respect to a VaR quantile and the probability distribution of the portfolio’s
loss. Let us assume that D99% is the portfolio’s 99% VaR quantile, that is,

P[D(T) ≤ D99%] = 99%.

Then the unexpected loss of the portfolio at a VaR quantile of 99% is defined as the difference
between the 99% quantile level and the expected loss of the portfolio:

UEL = D99% − E[D(T)].   (III.B.2.6)

If another risk measure such as conditional VaR is used in place of VaR, then (III.B.2.6) is easily
extended. We define unexpected loss in these situations by replacing D99% with the general risk
measure. More details on some alternative risk measures are given in Section III.A.3.5.2.

The term ‘unexpected loss’ may be confusing at first, because it does not concern losses that
were unexpected, but only something like a worst-case scenario. Intuitively, one might define
unexpected loss as the amount by which the portfolio’s credit loss turns out, in the end, to
exceed the originally expected loss:
max{D(T) − E[D(T)], 0}.   (III.B.2.7)

Here, we will use unexpected loss in the sense of equation (III.B.2.6), and not in the sense of
(III.B.2.7).

In Figure III.B.2.1, the 99% and 99.9% VaR loss quantiles were shown with two vertical lines
intersecting the tail of the loss distribution. The 99% VaR level is at approximately 160m, which
yields an unexpected loss of 160m − 35m = 125m. The 99.9% VaR level is at approximately
220m, with an unexpected loss of 185m. It is no coincidence that, even at a very high VaR
quantile of 99.9%, the unexpected loss is still much less than the maximum possible loss of 1bn
that is suffered when the total portfolio defaults with zero recovery. This effect stems from the
partial diversification, which is still present in the portfolio despite an asset correlation of 20%.
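
The unexpected-loss arithmetic can be reproduced in a couple of lines; the loss levels below are simply the approximate values read off Figure III.B.2.1 as quoted above:

```python
expected_loss = 35.0                          # approximate mean loss, in millions
var_levels = {"99%": 160.0, "99.9%": 220.0}   # approximate VaR levels from the figure

for label, var in var_levels.items():
    uel = var - expected_loss                 # unexpected loss as in (III.B.2.6)
    print(f"{label} VaR = {var:.0f}m, unexpected loss = {uel:.0f}m")
# prints 125m and 185m, in line with the text
```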

As opposed to expected loss, the unexpected loss is not additive in the exposures. If, for
example, we assume that with zero recovery and unit exposure each obligor defaults with a
probability of 3%, then each obligor’s individual 99% VaR will be 1, its total exposure. But the
99% VaR of a large portfolio of such obligors will not be the total exposure of the portfolio
(unless we have the extreme case of perfect dependency between all obligors).


The unexpected loss is frequently used to determine the capital reserves that have to be held
against the credit risk of the portfolio. It is not economically viable to hold full reserves against
total loss of the portfolio, but reserving against unexpected loss at a sufficiently high quantile is
viable and effective if it is done centrally under exploitation of all diversification effects. Thus,
coverage of the (linear) expected loss is in the domain of responsibility of the business lines, but
the management of the (highly non-linear) unexpected loss is usually a task for a centralised risk
management department. The risk management department then makes appropriate risk charges
to the business lines.

A stylised capital allocation procedure is as follows:


1. Fix a VaR quantile for credit losses (usually 99% or 99.9%). This quantile should reflect
the institution’s desired survival probability due to credit losses (this probability can be
derived from its targeted credit rating). This is a management decision that has to be
made at the top level.
2. Determine the portfolio’s expected loss.
3. Determine the unexpected loss of the portfolio according to (III.B.2.6).
4. Allocate risk capital to the portfolio to the amount of the unexpected loss.
5. Split up the portfolio’s risk capital over the individual components of the portfolio
according to their risk capital contributions.

Losses in the portfolio up to the amount of the expected loss will have to be borne by the
individual business lines (because these losses are economic losses), but any losses that exceed the
expected loss will hit the risk capital reserves. Should these reserves not suffice to cover all losses,
the bank itself will have to default. But by setting the original VaR level, the probability of this
event can be controlled.

III.B.2.6 Recovery Rates


As we can see from equations (III.B.2.1) and (III.B.2.4), the recovery rate (or the LGD) of an
obligor is as important in determining default losses or expected default losses as the default
probability. Nevertheless, research into recovery rates has been neglected for a long time, and
most focus has been put on default events and default probabilities. This is partly due to the fact
that data on recovery rates are much more fragmented and unreliable than data on default events.

While we may expect that obligors will do their best to avoid bankruptcy in almost any legal
environment (which will make default arrivals largely independent of the legal framework),
recovery rates are closely tied to the legal bankruptcy procedure which is entered upon the
default of the obligor.


In a typical bankruptcy procedure all creditors register their legal claim amounts with the
bankruptcy court. There is a well-defined procedure to determine these legal claim amounts
which does not necessarily reflect the market value of the claim: for example, for loans or bonds
only the notional amount (and any interest payments which are currently due) is considered, but
not the actual market value of the bond or the value of the future coupon payments. These
values may be significant if interest rates have moved since the issuance of the bond. According
to the ISDA standard definitions, the legal claim amount for over-the-counter derivatives is
usually the current replacement value of the contract, where it is assumed that the counterparty
has the same rating as the defaulted counterparty’s pre-default rating.

Claims are then grouped by priority class (collateralised, senior, junior, etc.). At this point, the
bankruptcy procedures start to diverge significantly: some procedures aim to find a way to
restructure the bankrupt obligor and to enable him to become profitable again (e.g. Chapter 11 in
the USA), while others aim is to liquidate the obligor’s business and use the proceeds to pay off
the debts (e.g. Chapter 7 bankruptcy in the USA). The final outcome for the creditors usually
depends first on the choice of bankruptcy procedure, and then on complicated negotiations
between many parties. We will have to ignore these effects here, and just warn the reader that,
because of the strong dependency on procedural details, recovery rates are not easily comparable
across countries.

Generally, the outcome of any bankruptcy procedure will be a settlement in which creditors of
the same legal claim amount and the same priority class will be treated identically, with secured
creditors having the first claim on their collateral and the rest of the firm's assets, unsecured
creditors coming next, and stockholders having the last claim and possibly receiving nothing.

If the creditor’s claims are settled in cash, the definition of the recovery rate is relatively
straightforward: if each dollar of legal claim amount receives a 40 cent cash settlement, then the
recovery rate is 40% and the LGD is 1 − 40% = 60%. The only problem is to decide whether the
payment should be discounted back to the actual date of default (in particular, for large obligors,
bankruptcy procedures can easily take several years, and the usual answer is ‘yes’), and whether
legal costs should be subtracted from it (again ‘yes’).

Unfortunately, cash settlements tend to be rare. Much more frequently (in particular, if the
obligor is restructured and not liquidated) the settlement is partly cash, and partly some other
type of security such as equity, preferred equity or restructured debt of the reorganised obligor.
In this case, determining the value of the settlement is often close to impossible, in particular if
the obligor was de-listed from the stock exchange in the course of the bankruptcy. For these
reasons, recovery rates are generally defined without reference to the final settlement.

Definition 1 (Market Value Recovery). The recovery rate is the market value per unit of legal claim
amount of defaulted debt, some short time (e.g. 1 or 3 months) after default.

This definition also coincides with the way recovery rates are determined in credit default swaps
with cash settlement (see Chapter I.B.6). It generally applies to larger obligors (e.g. obligors rated
by public rating agencies).

When smaller obligors are concerned (e.g. retail obligors and/or small and medium-sized
enterprises) there will be no market price for distressed debt and we will have to attach a direct
valuation to the final default settlement, and define the recovery rate as follows:

Definition 2 (Settlement Value Recovery). The recovery rate is the value of the default settlement per
unit of legal claim, discounted back to the date of default and after subtracting legal and administrative costs.

According to the discussion above, we expect the following factors to directly influence the
recovery rates of defaulted debt:

• collateral;
• the legal priority class of the claim;
• the jurisdiction in which the bankruptcy takes place (the UK tends to be a rather creditor-
  friendly jurisdiction, while France and the USA are more obligor-friendly and thus have
  lower recovery rates).

Some other, less easily observed factors turned out to be significant in empirical investigations
(see, for example, Altman et al., 2001; Gupton et al., 2001; Van de Castle et al., 2001; Renault and
Scaillet, 2004):

• The industry group of the obligor. Financial institutions tend to have significantly different
  (higher) recovery rates than industrial obligors. The less capital intensive the obligor's
  business, the less substance there is to liquidate in the event of a bankruptcy. For
  instance, 'dotcom' companies tended to have recoveries close to zero.
• The obligor's rating prior to default. An obligor who has spent much time close to default
  usually has fewer assets to liquidate to pay off the creditors than an obligor who
  defaulted quite suddenly from a high rating class.


• The average rating of the other obligors in the industry group, and the business cycle. This affects the
liquidation value of the obligor’s business and/or the value and viability of a restructured
firm. Recoveries tend to be lower in recessions and in industry groups that are in cyclical
downswings or which have large overcapacities.

Despite these empirical findings, it turns out that it is virtually impossible to predict a recovery
rate of an obligor with much certainty. The margins of error are very large indeed. Table III.B.2.1
shows estimated recovery rates and their standard deviations for US corporate debt of different
seniority classes. It can be seen that the standard deviation is often of the same order of
magnitude as the mean of the recovery rate.

Table III.B.2.1: Recovery rates by seniority of claim

Seniority Observations Mean (%) Standard deviation (%)


Senior secured 82 56.31 23.61
Senior unsecured 225 46.74 25.57
Subordinated 174 35.35 24.64
Junior subordinated 142 35.03 22.09
Total 623 42.15 25.42

Source: Renault and Scaillet (2004) Data set: S&P, 1981–1999, US Corporates

From a risk management point of view the large prediction errors in the recovery rates would not
be too serious if we could at least hope that our estimation errors will cancel out on average over
several defaults. Unfortunately, the systematic dependence of recoveries on the business cycle
destroys this hope. Recoveries depend on a common factor and thus they will not diversify away.
In particular, we will get hit twice in a recession: first, because there are more defaults than usual;
and second, because we will have lower than average recovery rates.

Thus, we should stress the recovery-rate assumptions of our credit risk models when we consider
recession scenarios. This is confirmed in Table III.B.2.2, which shows average recovery rates for
different phases of the business cycle. In the years 1982–2000 (a proxy for a ‘long-term average’)
recoveries tended to be much higher than in the recent (2001 and 2002) downswing, when the
default incidence also increased significantly. For example, recoveries on senior unsecured
debt dropped from a long-term average of 43.8% to 35.5% and 34.0% in the 2001/02 recession.


Table III.B.2.2: Average recovery rates (%) of defaulted debt at different periods in time

Asset class 1982–2002 1982–2000 2001 2002


Secured bank loans 61.6 67.3 64.0 51.0
Equipment trust 40.2 65.9 NA 38.2
Senior secured 53.1 52.1 57.5 48.7
Senior unsecured 37.4 43.8 35.5 34.0
Senior subordinated 32.0 34.6 20.5 26.6
Subordinated 30.4 31.9 15.8 24.4
Junior subordinated 23.6 22.5 NA NA
All bonds 37.2 39.1 34.7 34.3

Source: Moody’s KMV (2003)

A popular mathematical model for random recovery rates is the beta distribution. The beta
distribution is a distribution for random variables with values in [0, 1] which has the density

f(x) = c · x^a · (1 − x)^b,   (III.B.2.8)

where a and b are the two parameters of the distribution, and c is a normalisation constant. By
choosing different values for a and b, a large variety of shapes for the recovery distribution can be
reached. One can directly fit the parameters of the beta distribution to the mean and the variance
of the dataset using the following formulae:

a = µ²(1 − µ)/σ²   and   b = µ(1 − µ)²/σ².

Here, µ is the mean of the data, and σ is its standard deviation. In Figure III.B.2.2, the beta
distributions fitted to the data in Table III.B.2.1 are plotted. Recovery-rate distributions for
senior secured, senior unsecured, subordinated, junior subordinated, and all debt issues are shown. Quite
clearly, higher seniority classes tend to have higher average recoveries, but there is a large overlap
between the distributions of the various seniority classes. This is caused by the relatively large
variance of the values reported for each class. This can happen even if priority rules of the
seniority classes are observed for each obligor individually.
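
The fitting recipe can be written out directly. The sketch below applies the formulae above to the means and standard deviations of Table III.B.2.1 and normalises the density f(x) = c·x^a(1−x)^b with c = 1/B(a+1, b+1); it illustrates the mechanics only and makes no claim about the original data set:

```python
import numpy as np
from scipy.special import beta as beta_fn

def fit_beta_params(mean: float, std: float):
    """Moment-matching parameters a, b for f(x) = c * x^a * (1-x)^b, per the formulae above."""
    a = mean**2 * (1.0 - mean) / std**2
    b = mean * (1.0 - mean)**2 / std**2
    return a, b

def beta_density(x, a, b):
    """Normalised density: the constant is c = 1 / B(a+1, b+1) since the exponents are a and b."""
    return x**a * (1.0 - x)**b / beta_fn(a + 1.0, b + 1.0)

# Recovery-rate mean and standard deviation by seniority (Table III.B.2.1, in decimals)
seniorities = {
    "Senior secured":      (0.5631, 0.2361),
    "Senior unsecured":    (0.4674, 0.2557),
    "Subordinated":        (0.3535, 0.2464),
    "Junior subordinated": (0.3503, 0.2209),
}

x = np.linspace(0.01, 0.99, 99)
for name, (mu, sigma) in seniorities.items():
    a, b = fit_beta_params(mu, sigma)
    f = beta_density(x, a, b)
    print(f"{name:20s}: a = {a:.2f}, b = {b:.2f}, density peaks near x = {x[np.argmax(f)]:.2f}")
```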


Figure III.B.2.2: Beta distributions fitted to the recovery data of Table III.B.2.1

[Figure: fitted beta densities plotted over recovery rates from 0% to 100%, with curves for senior secured, senior unsecured, subordinated, junior subordinated, and all debt issues.]

III.B.2.7 Conclusion
Modelling credit risk has become an essential tool for modern risk management within a financial
institution. Such models are used to determine both expected loss (important for pricing loans
and other assets or contracts) and unexpected loss (necessary for assessing the appropriate size of
the capital buffer).

The fundamental idea presented in this chapter is that probability of default, exposure amount
and loss given default (or conversely, recovery rates) combine to give the credit loss distribution
for a portfolio of assets. This distribution is typically skewed, with a small probability of large
losses.

Recovery rates are one of the three crucial elements in determining the credit loss distribution.
They will vary according to seniority, level of security, the economic cycle and local bankruptcy
laws, amongst other things. Recovery rates are particularly difficult to estimate in advance,
having large standard deviation and dependence on the business cycle.

References
Altman, E I, Resti, A, and Sironi, A (2001) Analyzing and explaining default recovery rates.
Report submitted to the ISDA, Stern School of Business, New York University, December.

Gupton, G M, Gates, D, and Carty, L V (2001) Bank-loan loss given default, in Enterprise Credit
Risk Using Mark-to-Future. Algorithmics Publications, pp. 69–92. Available from
www.algorithmics.com


Renault, O, and Scaillet, O (2004) On the way to recovery: A nonparametric bias-free estimation
of recovery rate densities. Journal of Banking and Finance, 28 (to appear).

Van de Castle, K, Keisman, D, and Yang, R (2001) Suddenly structure mattered: insights into
recoveries from defaulted debt, in Enterprise Credit Risk Using Mark-to-Future. Algorithmics
Publications, pp. 61–68.


III.B.3 Credit Exposure


Philipp Schönbucher1

1 D-MATH, ETH Zürich, Rämistrasse 101, 8092 Zürich, Switzerland.

III.B.3.1 Introduction
Chapter III.B.2 established the three components of credit loss upon default of an obligor:
exposure amount, loss given default (or, conversely, recovery rates) and probability of default.
Successful credit risk management and modelling will require an understanding of all three
components. Accordingly, this chapter explores in more detail the exposure amount, which may
also be referred to as credit exposure or exposure at default. The exposure at time t is the amount
that we would lose if the obligor defaulted at time t with zero recovery.

Section III.B.3.2 will distinguish between pre-settlement and settlement risks, as the management
techniques vary depending on whether default occurs prior to or at the time of settlement.
Section III.B.3.3 explains exposure profiles, that is, how exposures vary over time for the various
asset and transaction types. Finally, Section III.B.3.4 discusses some techniques for reducing
credit exposures (risk mitigation techniques) and Section III.B.3.5 concludes.

The exposure is closely related to the recovery rate in the sense that it is usually identified with
the legal claim amount or the book value of the asset. But this identification is not quite correct:
We should also recognise that we might stand to lose value which is not recognised as a legal
claim amount (see the discussion in Section III.B.2.6), for example large future coupon payments.
Thus, the correct definition of exposure is the market price or the replacement value of the claim.

In practice many simplifications are used when exposure amounts are estimated. This is partly
justified by the fact that any accuracy that is gained by a more accurate measurement of exposure
will probably be swamped by the large uncertainty surrounding the recovery rates of the obligors.
Exposure and recovery enter the loss at default multiplicatively. Thus if we decrease the relative
error in the exposure measurement by 5%, it will not help much in improving the loss estimation
if on the other hand we face a relative error of 20% or more in the recovery rate.

Credit exposure may arise from several sources:


Direct, fixed exposures. These arise from lending to the obligor or from investment in bonds issued
by the obligor (which is another kind of lending). This is the most straightforward type of
exposure.

Commitments. Although they frequently have zero current exposure, committed lines of credit
constitute large potential exposure because they will usually be drawn should the obligor get into
financial difficulties. The bank may have covenants that allow a termination of the lending facility
in the event of an adverse change of the obligor’s credit quality, but the borrower usually has an
information advantage and can draw at least parts of the line of credit before his financial
problems become known to the bank. This raises the question of how the potential exposure
embedded in a line of credit should be measured. A common pragmatic solution is to assume
that the obligor will have drawn a certain fraction of the line of credit if he should default. Thus,
we consider a fixed fraction of a committed line of credit as exposure at default, even if it is not
drawn at the moment.
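
This pragmatic rule is often written in terms of a credit conversion factor applied to the undrawn part of the commitment. The sketch below uses an assumed factor of 75%, which is an illustrative choice rather than a figure taken from the text:

```python
def committed_line_ead(drawn: float, limit: float, ccf: float = 0.75) -> float:
    """Exposure at default of a committed credit line: the drawn amount plus an
    assumed fraction (the credit conversion factor) of the undrawn commitment."""
    undrawn = max(limit - drawn, 0.0)
    return drawn + ccf * undrawn

# Example: a 100m facility of which 20m is currently drawn, with an assumed 75% CCF
print(committed_line_ead(drawn=20.0, limit=100.0))   # 20 + 0.75 * 80 = 80.0
```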

Variable exposures. These arise mainly from over-the-counter (OTC) transactions in derivatives. As
they are exposed to (uncertain) moves in the interest rates, fixed-coupon loans and bonds could
also be considered variable exposures, but they are usually considered fixed exposures. Futures
contracts are generally ignored for the purposes of credit assessment as their institutional features
are designed to effectively eliminate credit exposure. The futures clearing mechanism interposes
the clearing house as the ultimate counterparty to all trades. The clearing house in turn manages
its credit exposure by trading on a fully collateralised basis through the system of initial and daily
margin calls (see Section I.C.6.3.1). While it is theoretically possible that a clearing house might
default on its obligations as counterparty, it has never actually happened and is generally regarded
as a remote possibility.

For OTC transactions, current exposure is defined as the current replacement value of the
relevant derivative contract (after taking netting into account). Difficulties arise when it comes to
future exposures because this involves the projection of the future value of the derivative
conditional on the occurrence of a default while recognising the effects of any netting agreements
which may be in place. The future value of the derivative depends, in turn, on market
movements which cannot be accurately predicted in advance.

For pragmatic reasons, most credit portfolio risk management models currently map future
exposures at default into loan equivalent exposures. In this method, the random future exposure of
an OTC derivatives transaction is mapped into a non-random exposure (possibly time-dependent).
For credit risk assessment, the derivatives transaction is then treated as a loan with
this exposure.

III.B.3.2 Pre-settlement versus Settlement Risk


In Section III.B.3.3 on exposure profiles we shall see that it is not unusual in OTC transactions
for very large cash flows to change hands at the settlement of the transaction, even if the net
value of the transaction is much smaller than these cash flows. Thus it makes sense to distinguish
between the risks that arise specifically at the settlement of these transactions, the settlement risk,
and the pre-settlement risk of the transaction before its maturity. The techniques used for
managing each of these risks vary.

III.B.3.2.1 Pre-settlement Risk


This is the risk that the counterparty to a transaction (e.g. an OTC derivative transaction) defaults
at a date before the maturity (settlement) of the transaction. Here, the existence of early
termination clauses is crucial in the reduction of the exposure due to pre-settlement risk. The
right of early termination is the credit risk equivalent of ‘cutting your losses’ by closing out a loss-
making market exposure. Usually, these clauses are incorporated in the master agreement
between the counterparties. They involve triggers due to
• failure to perform on this or a related contract,
• rating downgrade (usually to a class below investment grade),
• bankruptcy.

Should one of these events occur, the contract is terminated and settled immediately, with final
payment being the replacement value of the contract (i.e. the value of an otherwise identical
contract with a non-defaulted counterparty). If the replacement value of the contract is positive
to the defaulted counterparty, it will receive the final payment. Otherwise, the defaulted
counterparty will have to make the final payment to the non-defaulted counterparty. In the latter
case the defaulted counterparty may default on this final payment, too, which means that the
non-defaulted counterparty will have to enter the bankruptcy proceedings with a legal claim
amount of the final payment. Thus, the pre-settlement exposure is
E(t) = max{0; Replacement value at time t}.   (III.B.3.1)

Thus, the exposure for pre-settlement risk is only the replacement value of the derivative, and
only if this value is positive to us. The exposure of the settlement risk on the other hand is
roughly equivalent to the gross exposure of the transaction.


III.B.3.2.2 Settlement Risk


Many financial transactions between two counterparties involve two simultaneous payments or
deliveries – for example, counterparty A delivers a bond and counterparty B delivers the purchase
price for this bond (in a straightforward cash-trading transaction), or one counterparty delivers
USD and the other counterparty delivers EUR (in a spot FX transaction). Similar final payments
are made at the maturity of FX swaps (exchange of principal) or at the maturity of forward
contracts.

Settlement risk arises only at the final settlement of a transaction if there are timing differences
between the two payments of the transaction. For example, an FX transaction may involve a
payment in EUR by bank A, which is made at 9am in London, and a payment in USD by bank B
on the same day, which must be made in New York. Because New York is five hours behind
London, this payment will be made later than the EUR payment. Should bank B default after
receiving bank A’s payment, but before making its own payment, bank A will have to try to
recover its claim in the bankruptcy court.

A famous example of settlement risk is the case of the German bank Herstatt, which on
26 June 1974 had taken sizeable foreign currency receipts in Europe but went bankrupt (it was
shut down for insufficient capital by the German office for banking supervision) at the end of the
German business day before it settled its USD payments in New York.

The effect of settlement risk is that bank A has a very large exposure (the total notional amount of
the transaction) but only for a very short period of time (a few hours). Compared to pre-
settlement risk, the difference in exposure size can be very large. This risk can be mitigated by
improving the clearing and settlement mechanisms, netting agreements to minimise cash flows
(in particular, the cash settlement of price differences instead of physical delivery), or the
introduction of a central clearing house which takes both sides’ payments in escrow.

Other risk management tools for both settlement and pre-settlement risks are explained in
Section III.B.3.4.

III.B.3.3 Exposure Profiles


III.B.3.3.1 Exposure Profiles of Standard Debt Obligations
Standard debt contracts usually have fairly straightforward exposure profiles. The simplest case is
a bullet bond or a bullet loan with a fixed notional amount paid off at maturity of the loan (see
Chapter I.B.2). The top panel of Figure III.B.3.1 shows the exposure profile over time of a ten-
year, 5% coupon loan where we have assumed constant interest rates of 5%.


Figure III.B.3.1: Bond exposure profiles


Exposure profile for a ten-year fixed-coupon bond/loan with 100 notional amount,
bullet principal repayment at maturity and an annual, fixed coupon of 5%.

Exposure profile for a ten-year amortising loan with 100 notional amount, and annual payments of 12.95.
Interest rates are constant at 5% .

Whenever a payment is received, the exposure drops by the payment amount. This causes the
characteristic ‘sawtooth’ pattern in Figure III.B.3.1 and also in all other exposure profiles for
assets with intermediate payoffs. Between payment dates the exposure increases smoothly. This
reflects the increase in the time-value of the outstanding payments.

For the coupon bond in the top part of Figure III.B.3.1, the largest payment is the final
repayment of principal (with the final coupon), thus the exposure profile remains largely constant
with a large drop in the exposure at maturity. A common approximation is to set the exposure
constant until maturity, that is, to ignore the sawtooth pattern caused by the coupon payments
and to use some average of the exposure level. Essentially this is equivalent to assuming that
defaults only occur in the middle between two coupon payment dates.


Not all loans have the full principal repayment at maturity of the transaction. Amortising loans, for
example, spread the principal repayments over the life of the transaction in such a way that the
total payment amount is the same at all payment dates. This yields the downward sloping
exposure profile in the bottom part of Figure III.B.3.1. The exposure profile of amortising loans
is slightly concave because, initially, less of the annual payments goes towards principal
repayments than at later dates.
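
The profiles of Figure III.B.3.1 can be reproduced by valuing the remaining cash flows at each future date. The sketch below assumes the flat 5% rate of the example and annual compounding:

```python
import numpy as np

def exposure_profile(cashflow_times, cashflow_amounts, rate, eval_times):
    """Exposure at each evaluation date = present value of the remaining cash flows,
    discounted at a flat, annually compounded rate."""
    profile = []
    for t in eval_times:
        remaining = [(s, c) for s, c in zip(cashflow_times, cashflow_amounts) if s > t]
        pv = sum(c / (1.0 + rate) ** (s - t) for s, c in remaining)
        profile.append(pv)
    return np.array(profile)

rate = 0.05
times = np.arange(1, 11)                                # annual payment dates, years 1..10

# Ten-year 5% bullet loan, 100 notional: coupons of 5 and the principal at maturity
bullet_cfs = np.full(10, 5.0)
bullet_cfs[-1] += 100.0

# Ten-year amortising loan with equal annual payments of about 12.95
annuity = 100.0 * rate / (1.0 - (1.0 + rate) ** -10)    # ~12.95
amort_cfs = np.full(10, annuity)

eval_grid = np.linspace(0.0, 10.0, 201)                 # dates just before/after each payment
bullet_profile = exposure_profile(times, bullet_cfs, rate, eval_grid)
amort_profile = exposure_profile(times, amort_cfs, rate, eval_grid)

print("Bullet loan exposure at t = 0, 4.95, 5.00:", np.round(bullet_profile[[0, 99, 100]], 2))
print("Amortising loan exposure at the same dates:", np.round(amort_profile[[0, 99, 100]], 2))
```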

III.B.3.3.2 Exposure Profiles of Derivatives


OTC derivative contracts such as swaps, forward contracts or FX transactions have several special
features that complicate the calculation of the corresponding exposure amounts. Derivatives
often have an initial value of zero (or close to zero). This means that current exposure is a very
bad measure of future exposure, which can vary dramatically with market movements. We must
consider exposure profiles over time and cannot set exposure to a constant value as we did for
fixed coupon bonds.

These exposure profiles are not only time-dependent but also stochastic. Here, assumptions must
be made in order to decide how multiple possible realisations of the exposure are to be captured
by a single number. At future dates, an OTC derivative can have a positive or a negative value.
But by equation (III.B.3.1) the exposure is floored at zero. By definition, exposure cannot
become negative. The exposure is the amount that we would lose if the obligor defaulted. So, for
instance, with an interest-rate swap, at each payment date either we owe the other party money or
the other party owes us money. In the first case our exposure to default of the other party is zero;
in the latter case it is positive. Thus we have to cope with an inherent nonlinearity here. The
volatility of the underlying asset will also enter the calculation as it defines the range of likely
future outcomes for the asset and its derivative.

Derivatives usually involve payments by both contract parties. Thus, it makes a difference
whether the two payments are netted (then the exposure is only to the net value of the
payments), or whether both payment streams are considered in isolation (in which case we are
exposed to the full payment stream of our counterparty).

For the current exposure of a derivative with replacement value D(t) at time t, the starting point
for the calculation of derivatives exposures is
E(t) = max{0; D(t)}.   (III.B.3.2)

If we assume that D(t) can be determined at any time t, there is no inherent difficulty in equation
(III.B.3.2) yet.


The difficulties arise when we consider future exposures because exposure is a random variable
and, since we usually measure credit risk over a long time horizon (such as one or five years),
derivatives exposures may change significantly over the time horizon. Put another way, for a
future point in time T > t we cannot predict D(T) with certainty given information at time t, so
the exposure will be stochastic.

The exposure measurement problem is now to find a curve E(t, T) which for all relevant T > t
maps the distribution of the spot exposure E(T) to a single number: the forward-looking
exposure E(t, T) given information at time t. Viewed as a function of time horizon T, the
forward-looking exposure E(t, T) is called the exposure profile of the transaction.

Figure III.B.3.2 illustrates this using the example of an interest-rate swap with a 5% swap rate
from the point of view of the receiver of the floating rate. The surface plot shows the current
exposure (i.e. the positive part of the replacement value of the swap) at different
points in time and for different possible levels of the interest rates. Unfortunately, seeing this
from t = 0, we cannot predict future levels of interest rates; we can only make statements about
the probability distribution of the interest rates at different time horizons. Figure III.B.3.2 shows
the density of the interest rates at t = 2 and t = 10, and equal colours of the mesh correspond to
equal quantiles of the interest-rate distribution. Clearly, uncertainty increases over time, so the
density for t = 10 is wider than the density for t = 2. Because the underlying interest rate is
random, the exposure is random, too: it could take any positive value. The bold blue line in
Figure III.B.3.2 represents the 90% quantile exposure; that is, at any time in the future, the exposure
will be below the blue line with 90% probability. The simplification is now to assume that this
90% point will be the realisation of the exposure, in the hope that this assumption – while clearly
wrong – will at least be conservative.


Figure III.B.3.2: Interest-rate swap exposure profile


Exposure distribution and exposure profile for a ten-year interest-rate swap at 5%

Assigning a fixed number to the (stochastic) exposure at time T is a simplification, similar to
assigning a single risk measure such as value at risk (VaR) to the full distribution of a random
variable. But this simplification allows us to map the complex derivative contract to a loan-
equivalent exposure amount, so we can view the derivative as a loan with an admittedly rather
strange amortisation schedule. Nevertheless, this clever mapping allows us to integrate derivatives
contracts into a risk management system which otherwise could only accept loans.

An obvious candidate for an exposure measure is the expected exposure


E(t, T) = Et[(D(T))+],   (III.B.3.3)

where (x)+ = max{x, 0} is the common shorthand notation for the positive part of x and Et[·]
denotes the conditional expectation given information up to time t. Expected exposure has the
advantage of being comparatively easy to evaluate. Essentially the problem is reduced to the
calculation of the value of a European call option on D(T) – see Section I.B.5.1. In many cases,
this calculation can be done quite easily. Expected exposure has the additional advantage of being
coherent in the sense of Artzner et al. (1999). That is, expected exposure is:
• non-negative: any derivative with a non-negative replacement value will generate a non-
  negative exposure number;
• homogeneous: a positive scaling of the derivative position will result in the same positive
  scaling of the exposure measure; and
• subadditive: if you add two (or more) derivatives, the joint expected exposure is less than
  the sum of the individual exposures.

Despite its nice properties, it is frequently felt that the expected exposure is not conservative
enough because in many cases the actual exposure at default will be larger than the expected
exposure. An alternative, yielding higher exposure values, is to use a ‘quantile-based’ exposure
measure. These measures are defined very much like VaR levels: the p-quantile exposure at time T is
the level below which the exposure will remain with probability p. We denote this by qp*, so:
Pt(D(T) ≤ qp*) = p   and   E(t, T) = qp*.   (III.B.3.4)

Frequently, the price of the derivative is a monotonic function of the underlying asset and the
distribution of the underlying asset is assumed known (e.g. it is lognormal). In this case the p-
quantile exposure of the derivative can be calculated by:
• determining the corresponding quantile in the ‘bad’ direction of the underlying asset, and
• calculating the value of the derivative at this level of the underlying asset.

This is usually simpler than calculating the expected exposure at the same level. These
calculations have much in common with the calculation of risk measures in market risk
management: expected exposure is roughly equivalent to expected shortfall (see Section
III.A.3.5.2) and quantile-based exposure measurement is very similar to VaR-based measures of
market risk (see Chapter III.A.2). There are some practical difficulties, in that for credit exposure
measurement these numbers must be calculated at several time horizons, and for longer time
horizons, than the ones commonly used in market risk.

Figure III.B.3.3 shows the exposure profile of an interest-rate swap contract (see Chapter I.B.4).
The typical feature of most swap contracts is that they are initially entered at zero exposure: both
sides of the swap have the same value. The potential exposure increases as we look further
forward in time. This so-called ‘diffusion effect’ is caused by the randomness of the underlying
interest rate: the range of possible outcomes increases with time. But as time proceeds, the
number and total notional amount of the remaining payments decrease, so that eventually the
future exposure has to decrease back to zero. From a statistical point of view we can say that the
worst time for the counterparty to default is around seven years into this ten-year swap. The
potential for large credit losses at this point exceeds the potential for large credit losses at any
earlier time because the range of likely market movements is greater. A large and favourable
market movement could mean the loss of a very valuable asset should the counterparty default at


this time. Beyond the seven-year point, the amortisation effect dominates over the diffusion
effect, so potential losses are reduced.

Figure III.B.3.3: 95% exposure profile of an interest-rate swap


Figure III.B.3.4: 95% exposure profile of an FX swap



Figure III.B.3.4 shows the typical exposure profile of an FX swap. An FX swap differs from an
interest-rate swap in that there are no interim cashflows and therefore no amortisation of risk.
The exposure profile reflects only the diffusion effect; the greater the passage of time, the greater
the potential for a large favourable exchange rate movement. If the counterparty defaults when
the swap is in profit, then a significant asset may be lost. The exposure profile tells us that
statistically, the worst possible time for default is towards maturity when the potential mark-to-


market value is greatest. Note that an FX swap, unlike an interest-rate swap, also has a final
exchange of principal. Consequently the settlement risk at maturity is significantly greater.

In Table III.B.3.1 we present a numerical example to illustrate the calculation of the exposure
levels of a fixed-for-floating interest-rate swap from the point of view of the floating-rate receiver
with a fixed swap rate of 5% and a notional of 100. We assume that the interest rates follow a
lognormal random walk with drift zero and volatility 5%, and that the term structure of interest
rates remains flat at all times.

Table III.B.3.1(a) shows the calculation of current exposure for one simulated path of the interest
rates (shown in the column ‘floating’). In years 1 and 2 interest rates rose above the fixed rate of
5% so that the swap has a positive value to us. The value of the swap is found by calculating the value of an annuity that pays $1 per year for the remaining life of the swap, and then multiplying it by the difference between the current interest rate and 5%. (This calculation
yields the value of the net payments if we were to enter an offsetting swap.) We see that the value
of the swap turns negative from year 3 onwards. Thus, the current exposure is zero in those
years, and it is only positive when the swap also has a positive value.

Unfortunately, to carry out the calculations in Table III.B.3.1(a) we need to know the future
development of the interest rates, so it cannot be done at time t = 0 to generate an exposure
profile. The calculations to generate an exposure profile are shown in Table III.B.3.1(b). For the
floating-rate receiver, the exposure is worst (highest) whenever interest rates are high. So, in
order to calculate a 95% quantile exposure, we need to calculate the upper 95% quantiles of the
interest rates; that is, for each year we calculate those levels of interest rates which are not
exceeded with 95% probability. These levels we can calculate already at time t = 0; they are
shown in the second column. The rest of the exposure calculation now proceeds as for the
calculation of current exposure: we calculate the values of the annuities for these interest rates,
and then the value of the swap. As the value of the swap is always positive to us, this is also the
exposure in these scenarios. Again, we see the characteristic shape of a hump-shaped exposure
profile similar to the one in Figure III.B.3.3.


Table III.B.3.1: Exposure calculations for an interest-rate swap


(a) Current exposure (simulated interest-rate path)
Time    Floating    Fixed    Value of annuity    Value of swap    Current exposure
0 5.00% 5% 7.77 0.00 0.00
1 5.30% 5% 7.06 2.12 2.12
2 5.17% 5% 6.47 1.12 1.12
3 4.61% 5% 5.91 -2.31 0.00
4 4.55% 5% 5.19 -2.35 0.00
5 4.74% 5% 4.40 -1.13 0.00
6 4.61% 5% 3.61 -1.40 0.00
7 4.13% 5% 2.79 -2.42 0.00
8 3.99% 5% 1.90 -1.92 0.00
9 3.88% 5% 0.97 -1.09 0.00
10 4.06% 5% 0.00 0.00 0.00

(b) 95% quantile exposure profile


Time    Floating: upper 95% quantile    Fixed    Value of annuity    Exposure (value of swap at 95% quantile)
0 5.00% 5% 7.77 0.00
1 5.42% 5% 7.03 2.96
2 6.08% 5% 6.24 6.71
3 6.98% 5% 5.44 10.77
4 8.19% 5% 4.64 14.80
5 9.78% 5% 3.86 18.44
6 11.87% 5% 3.09 21.23
7 14.63% 5% 2.34 22.53
8 18.27% 5% 1.60 21.24
9 23.13% 5% 0.84 15.27
10 29.62% 5% 0.00 0.00
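As an illustration, the following Python sketch mirrors the calculation behind Table III.B.3.1(b) for the floating-rate receiver. The 95% rate quantiles are taken as given inputs (reproduced from the second column of the table, since they depend on the precise interest-rate model used), and the annuity is valued here by discounting $1 per remaining year at the flat quantile rate. This discounting convention is an assumption, so the annuity and exposure figures come out close to, but not exactly equal to, those in the table.

    # Sketch of the 95% quantile exposure calculation of Table III.B.3.1(b)
    # for the floating-rate receiver (notional 100, fixed rate 5%). The rate
    # quantiles are taken from the table; the flat-curve annuity below is an
    # assumed convention, so values may differ slightly from the table.
    NOTIONAL, FIXED, MATURITY = 100.0, 0.05, 10
    rate_quantiles = {0: 0.0500, 1: 0.0542, 2: 0.0608, 3: 0.0698, 4: 0.0819,
                      5: 0.0978, 6: 0.1187, 7: 0.1463, 8: 0.1827, 9: 0.2313,
                      10: 0.2962}

    def annuity(rate, years_left):
        """Present value of $1 per year for the remaining life, flat curve."""
        return sum((1 + rate) ** -k for k in range(1, years_left + 1))

    for t, r_q in rate_quantiles.items():
        a = annuity(r_q, MATURITY - t)
        # value of the swap to the floating receiver at the quantile rate;
        # the exposure is its positive part
        exposure = max((r_q - FIXED) * NOTIONAL * a, 0.0)
        print(f"t={t:2d}  95% rate quantile={r_q:6.2%}  annuity={a:5.2f}  exposure={exposure:6.2f}")
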

III.B.3.4 Mitigation of Exposures


Whenever possible, both counterparties to an OTC derivatives transaction should engage in
exposure-minimising agreements. This is a win–win situation for both sides because with well-
designed agreements, counterparty exposure (and thus credit risk) can be significantly reduced
without needing economic capital and without large additional costs to the counterparties. In this
section we consider the problem of aggregating all exposures to a particular counterparty, where
netting and other mitigation agreements may be in place.

Note that for the calculation of the total exposure profile with respect to a counterparty we have
to consider a whole portfolio of derivatives transactions. This usually means that we will no
longer be able to calculate expected exposures with closed-form solutions for European options
on the underlying transaction, nor will we be able to identify the ‘critical values’ for quantile-
based exposures, as these values will now be a surface in a multidimensional risk space. Thus, the


calculation of exposure profiles must typically be carried out by Monte Carlo simulation or other numerical


methods (see Chapter II.G).

III.B.3.4.1 Netting Agreements


To illustrate the magnitude of the benefits of netting agreements, one need only look at the latest
OTC derivatives market statistics released by the BIS in Table III.B.3.2. The presence of netting
agreements reduces the gross credit exposure of all outstanding OTC derivatives contracts to
28% of the total market value of these contracts. This risk-reduction effect can be even stronger
when the two counterparties are involved in a large number of transactions – for example, if they
are both market makers in certain OTC markets, or for a very active trader/hedge fund with its
prime broker. If the transactions are ‘hedged’ transactions (i.e. delta-hedged derivatives positions)
then netting can eliminate almost all exposure.

Table III.B.3.2: The effect of netting on global OTC derivatives exposure (USD bn)

Total outstanding notional amounts 197,177


Total market value of outstanding contracts 6,987
Gross credit exposure after netting 1,986
Source: BIS market statistics, End of December 2003.

A necessary requirement before netting can be applied is the existence of a legally watertight
netting agreement, which is usually embedded in a master agreement between the two
counterparties.2 A master agreement is a contract that sets the framework under which the
counterparties can undertake derivatives transactions. Each individual transaction is economically
independent, but it is governed by the same master agreement. Legally, all transactions are part of
this one master contract. Thus, the master agreement can specify rules that apply across several
transactions.

The form of netting advocated by ISDA is called bilateral close-out netting: Once a credit event
occurs on one of the counterparties, all transactions under the netting agreement are closed out.
Closing out means that the current market value (replacement value) of all transactions is
determined, then these numbers are netted, and then the net amount becomes immediately due.

Suppose counterparty A has entered swaps with counterparty B with current market values (to A)
of +34, –97, +45 and +5. Then the net value of these four transactions to A is –13 (i.e. +13 to B). If A defaulted, B

2 The International Swaps and Derivatives Association (ISDA) was and is engaged in proposing and promoting
legislation to enable netting agreements, and such legislation has been adopted in most OECD countries. More legal
background information can be found on the ISDA’s web site (www.isda.org); see also Werlen and Flanagan (2002).


would have a claim of 13 against A with which it would go to the bankruptcy court. At an
assumed recovery rate of 40%, B would suffer a loss of 7.8. If counterparty B defaulted, then A
would pay the net value 13 to B and would not have any further obligations. In this case, A does
not have any immediate credit exposure to B, thanks to the existence of the netting agreement.

If, on the other hand, there is no bilateral close-out netting agreement, then B would have claims
of 97 against A which B would have to try and recover in bankruptcy court, while B would still
have to perform on the other three contracts with a total market value of 84. Thus, B’s loss
would be significantly larger: at 40% recovery B would lose 58.2. The situation without netting is
also bad for A: while A had no net exposure with netting, without netting A does have significant losses. A would have to perform on the –97 swap while trying to recover some of the total value of 84 owed to it on the other three contracts. At 40% recovery, the credit losses to A would be 50.4.
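The arithmetic of this example is easily checked; the short Python sketch below reproduces the loss figures, assuming as in the text a 40% recovery on unsecured claims.

    # Swap values are from A's point of view; A is the defaulting party and the
    # losses are computed at the assumed 40% recovery on unsecured claims.
    values_to_A = [34.0, -97.0, 45.0, 5.0]
    recovery = 0.40

    # With bilateral close-out netting: B's claim is the positive part of the
    # net value to B, of which B loses (1 - recovery).
    net_value_to_B = -sum(values_to_A)                                       # +13
    loss_to_B_with_netting = (1 - recovery) * max(net_value_to_B, 0.0)       # 7.8

    # Without netting: B recovers only 40% of its 97 claim against A, while it
    # must still perform in full on the three contracts that favour A.
    loss_to_B_without_netting = (1 - recovery) * sum(-v for v in values_to_A if v < 0)   # 58.2

    # A's loss without netting: A performs on the -97 swap in full but recovers
    # only 40% of the 84 owed to it on the other three contracts.
    loss_to_A_without_netting = (1 - recovery) * sum(v for v in values_to_A if v > 0)    # 50.4

    print(loss_to_B_with_netting, loss_to_B_without_netting, loss_to_A_without_netting)
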

The reason for the efficacy of netting is that it prevents cherry-picking by the administrator of
the defaulted counterparty. In many jurisdictions, the administrator (receiver/bankruptcy court) can choose to enforce only those contracts which are beneficial to the defaulted entity, while sending all other contracts and obligations to the bankruptcy process. The introduction of a master agreement ensures that all swap transactions are viewed as one contract, which can only be accepted or rejected as a whole.

From a risk manager’s point of view, the existence of a netting agreement with a certain
counterparty allows him to aggregate all exposures with that counterparty and to consider only
the net exposure that arises from these transactions. He is able to calculate an exposure profile
with respect to the counterparty and not just with respect to an individual transaction.

III.B.3.4.2 Collateral
Apart from netting agreements, collateral is another popular instrument to mitigate counterparty
risk, in particular if one counterparty (e.g. a hedge fund) is significantly more likely to default
than the other (e.g. an investment bank). The International Swaps and Derivatives Association
carries out regular surveys on the use of collateral and estimates that at the beginning of 2004,
around USD 1017bn of collateral were used in OTC derivatives transactions. Comparing this to
the USD 1986bn of credit exposure after netting found by the BIS, we see that about half of the
non-netted credit exposure is managed by collateral. Another, less obvious form of
collateralisation arises when derivatives are embedded into bonds or notes, as for example in
credit-linked notes or equity-linked notes. By buying the note, the investor implicitly posts full
collateral for all possible exposures that may arise from the derivative which is embedded in the


note. This explains why this is a very popular structure for derivatives transactions with retail
customers.

Like netting, collateral also requires a corresponding legal document (again the ISDA publishes
template contracts on this topic), but here the legal difficulties are less severe as collateral is a
very old means of mitigating credit risk.

Generally, as in netting agreements, the idea is to reduce credit risk by reducing exposure. In the case of collateral, the credit quality of the counterparty is effectively enhanced by that of the collateral: the lender only suffers a loss if both the counterparty and the collateral default.

Let us assume that hedge fund A and investment bank B want to enter an OTC derivatives
transaction such as an interest-rate swap. To mitigate A’s credit risk, the parties enter a collateral
agreement which specifies which assets may be delivered by A as collateral, and under which
conditions B may ask for collateral, including the amount of the collateral. Typical collateral
assets are cash (used in about 70% of cases) and government securities (about 15% of cases), but
other assets are possible (e.g. equities). As the collateral may decrease in market value just when
the counterparty defaults, non-cash collateral is not counted at its full face value but at a discount; that
is, it is reduced by a ‘haircut’ factor depending on the volatility of the collateral and its correlation
with the underlying exposure.

Over the life of the swap, A will have to deliver the required amount of collateral assets to B. The
assets are legally still the property of A but they are under administration by B. If the collateral
becomes insufficient (e.g. because of market movements or because of changes in A’s credit
rating), B issues a margin call asking A to post additional collateral. If there is excess collateral, A
may remove the collateral from the collateral account. If at the end of the transaction A has not
defaulted, all unused collateral is returned to A.

If on the other hand counterparty A misses a payment of the swap at some point, then B is
entitled to sell some of the collateral assets to make the payment to himself. If an early
termination event occurs because of a default of A, then B is allowed to sell the full collateral in
the market and use the proceeds to cover the replacement value of the contract. If there are any
remaining proceeds from the sale of the collateral, these are returned to A. If the collateral was
not sufficient to cover the replacement value of the derivative, the remaining value will have to
be recovered in the usual way.


For exposure measurement, the value of the collateral must be deducted from the replacement
value at risk at every time horizon. If the value of the collateral itself is random this introduces an
additional level of complexity, but for the most common case of cash collateral we can simply
subtract the collateral amount from the exposure value.
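As a minimal sketch (the numbers and the haircut level are purely illustrative), the deduction can be written as follows:

    def collateralised_exposure(replacement_value, collateral_value, haircut=0.0):
        """Exposure after collateral: the collateral is counted at its value less
        a haircut, and the result is floored at zero because exposure cannot be
        negative. For cash collateral the haircut would typically be zero."""
        effective_collateral = collateral_value * (1.0 - haircut)
        return max(replacement_value - effective_collateral, 0.0)

    # e.g. a contract with replacement value 22.5 backed by 15 of government
    # securities, counted with an (illustrative) 5% haircut
    print(collateralised_exposure(22.5, 15.0, haircut=0.05))   # 8.25
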

Collateral management is the non-trivial task of keeping track of both the collateral a business has to
post, and the collateral it should receive. The institution must ensure that it can always provide
sufficient collateral to support its transactions, even if adverse market movements or a credit
downgrade suddenly increase the collateral requirements. Thus, a collateral manager also has to
monitor the credit quality of his own institution very closely. There are many famous examples of
the failure of collateral management policy, including the downfall of the Long-Term Capital
Management hedge fund.

III.B.3.4.3 Other Counterparty Risk Mitigation Instruments


Limits. Counterparty exposure limits are not a mitigation instrument. Exposure limits are
counterparty risk management devices that are used to avoid undue counterparty risk
concentrations with respect to any particular counterparty. They are also used to avoid exposure
to potential counterparties that are deemed to be insufficiently creditworthy; in such cases a limit
is not granted, preventing any trades or lending activity which might result in credit exposure. In
most financial institutions the credit committee is responsible for determining whether the
institution is willing to expose itself to performance risk for any particular
counterparty/borrower, and to what extent. The existence/size of a limit will either be
determined on the basis of the institution’s own analysis, or can effectively be outsourced by relying on ratings
from ratings agencies. Non-financial corporations also establish counterparty limits, but are
more likely to rely on ratings agencies for credit assessments. Separate limits should apply for
settlement and pre-settlement risk.

Termination rights/credit puts. One or both counterparties may reserve the right to terminate the
transaction should the credit risk of the other party worsen significantly. This allows the recovery
of the exposure before the actual default event at a higher (often full) recovery value.

Establishing third-party guarantees. A traditional and popular way to mitigate credit risk is to establish
a guarantee for the exposure from a third party. This is very similar to collateralisation but its
effect is to replace the default probability (as opposed to the exposure) of the counterparty with the combined (joint) default probability of the counterparty and the guarantor: a loss arises only if both default.


Credit derivatives. Counterparty exposures can also be managed with credit derivatives (see
Chapter I.B.6). Although it is technically possible to specify a credit derivative which pays off
exactly the exposure at default of a specific counterparty, this is usually not done in practice for
two reasons: first, this would entail a disclosure of the transactions that have been made with the
counterparty; and second, because of the unusual nature of the payoff, the credit protection
would be rather expensive. Nevertheless if one is willing to accept some residual exposure to the
counterparty, a significant part of the counterparty exposure can be laid off using single-name
credit default swap contracts.

All counterparty risk mitigation instruments necessarily introduce legal risk and documentation risk.
Collateral calls may not be made or fulfilled in time, and netting agreements and guarantees may
be legally challenged. These risks are hard to quantify and must be carefully monitored.
Furthermore, these risk mitigation instruments also have a cost in terms of administration costs,
which must be justified by the reduction in risk. More details on credit risk management
instruments can be found in Chapter III.B.6.

References
Artzner, P, Delbaen, F, Eber, J-M, and Heath, D (1999) ‘Coherent measures of risk’, Mathematical
Finance, 9(3), pp. 203–228.

Werlen, T J, and Flanagan, SM (2002) ‘The 2002 Model Netting Act: A solution for insolvency
uncertainty’, Butterworths Journal of International Banking and Finance Law, April, pp. 154–164.


III.B.4 Default and Credit Migration


Philipp J. Schönbucher1

Having covered recovery rates and exposures in the previous chapters, this chapter discusses the
modelling and measurement of the third determinant of default loss: the probabilities of default of
individual companies (and, by extension, sovereign nations). After setting up a framework to
describe and analyse default probabilities we will consider three different credit risk assessment
methods: agency ratings, internal ratings and market-implied default probabilities. Finally, we
compare the three approaches and discuss their differences.

III.B.4.1 Default Probabilities and Term Structures of Default Rates
In this section we introduce the terminology for describing default probabilities. The starting
point of any representation of default probabilities is the one-period default probability depicted
in Figure III.B.4.1.

Figure III.B.4.1: A one-step default tree

Starting from node S0 at time T0, there are two possible outcomes at time T1 in Figure III.B.4.1:
default (node D1) which is reached with the default probability p1; and survival (node S1) which is
reached with the survival probability, 1 – p1. Apart from the direct specification of the default
probability p1 (or the survival probability 1 – p1), we could also specify the odds of default. The
‘odds’ of an event are defined as the ratio of the probability of the event (i.e. the default
probability) to the probability that the event does not occur (i.e. the survival probability):

1 D-Math, ETH Zurich, Switzerland.


H1 := p1 / (1 – p1). (III.B.4.1)

Hence, given the odds of an event, one can recover the probability of the event as:

p1 = H1 / (1 + H1).

This definition differs only slightly from the ‘odds’ that are routinely quoted by bookmakers on
sports (and other) events. In fact, bookmakers usually quote 1/H1 and not H1. We can interpret
H1 to be the odds of a fair bet on the event, in that if you bet $1 on the event that the obligor
defaults then:
• you lose your $1 if the obligor survives, or
• you get 1/H1 if the obligor does indeed default.

The expected payoff of this bet is –1 × (1 – p1) + p1/H1 = 0, so the bet is fair.

From a modelling point of view the advantage of using odds is mostly of a technical nature:
probabilities are restricted to lie between 0 and 1, while odds can take any value between 0 and
infinity. For small values (such as for short-horizon default probabilities), odds and probabilities
are almost equal, with odds being slightly larger. For example, if the default probability is p1 =
2%, then the ‘odds’ of default are H1 = 2.0408%.
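As a minimal illustration, these conversions can be written in a couple of lines of Python, reproducing the 2% example above:

    def odds_from_probability(p):
        """Odds of default, H = p / (1 - p), cf. (III.B.4.1)."""
        return p / (1.0 - p)

    def probability_from_odds(h):
        """Inverse mapping, p = H / (1 + H)."""
        return h / (1.0 + h)

    print(odds_from_probability(0.02))       # 0.020408..., i.e. 2.0408%
    print(probability_from_odds(0.020408))   # approximately 0.02
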

Figure III.B.4.2: A schematic representation of default and survival over time

A complication in the representation of default probabilities arises when several points in time
are considered. Figure III.B.4.2 shows a simple representation of default and survival over time in
a model with three periods ([T0, T1], [T1, T2] and [T2, T3]). Over each period [Ti–1, Ti], the obligor
can default (with probability pi) or survive (with probability 1 – pi).


By considering the individual periods in isolation, we can define ‘local’ default probabilities (and
local odds of default) as before. So, we can still analyse the situation in period [T1, T2] with
default probability p2 and odds of default H2 = p2/(1 – p2). But these quantities are now conditional
default probabilities, that is, they are conditional upon survival until T1. They are only valid if we
have reached the node S1. If a default has already occurred in the first period, then p2 and H2
have no meaning. These conditional default probabilities are also often called marginal default
probabilities.

If we now want to know the cumulative default probability until T2, that is, the probability of default
at any point in time over [T0, T2], then we must take several possible paths across the tree into
account, which all can lead to a default during the period [T0, T2]:

• the obligor survives the first period (S0 → S1) and defaults in the second period (S1 → D2), and

• the obligor defaults in the first period (S0 → D1).

The first scenario has probability (1 – p1)p2 and the second scenario has probability p1, so that the total default probability over [T0, T2] equals (1 – p1)p2 + p1 = p1 + p2 – p1p2.

If we project further into the future it quickly becomes more convenient to consider cumulative
survival probabilities because for survival we only have to consider one path across the tree. The
cumulative survival probability over [T0, T3], for example, is given by the product of the marginal
survival probabilities over the individual periods which are spanned by the interval [T0, T3], that
is,
P[Survival over [T0, T3]] = (1 – p1)(1 – p2)(1 – p3) = P(T0, T3). (III.B.4.2)

Clearly, the cumulative default probability over [T0, T3] equals one minus the cumulative survival
probability over [T0, T3]. The general formula for the probability of Si o S j , that is, of going

from survival in Ti to survival in T j with i < j, is

P ( Ti , T j ) (1  pi 1 )(1  pi  2 )(1  p j ).
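As an illustration, the following short Python sketch computes the cumulative survival probability as the product of the marginal survival probabilities, and checks the two-period default formula derived above (the probabilities used are arbitrary illustrative values):

    def cumulative_survival(marginal_default_probs):
        """P(T0, Tn) as the product of the marginal survival probabilities
        (1 - p_i) over the periods spanned, cf. (III.B.4.2)."""
        survival = 1.0
        for p in marginal_default_probs:
            survival *= (1.0 - p)
        return survival

    # two-period check against the formula p1 + p2 - p1*p2 derived above
    p1, p2 = 0.03, 0.05
    print(1.0 - cumulative_survival([p1, p2]))   # 0.0785
    print(p1 + p2 - p1 * p2)                     # 0.0785
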

Starting from the tree in Figure III.B.4.2, we can specify default probabilities (or odds of default)
for future time periods, that is, we can specify a term structure of default probabilities. In many
cases it may be desirable to increase the resolution of the term structure by inserting additional
points in time, for example, going from yearly to quarterly, monthly or even daily intervals as this


will allow us to place the nodes of our tree exactly on the payment dates of the bonds or loans of
the obligor under consideration.

As the length of the time periods gets smaller, local default probabilities and odds of default
should also decrease. For example, suppose we have an obligor with a 10% default probability
over one year. To be consistent with this when moving to monthly time periods, we should
assume that the monthly default probability is 0.874%. The constraint becomes stronger as we
take smaller steps. For instance, when calculating the daily default probability that is equivalent to
a 10% one-year default probability, even a small overestimation of the daily probability can lead
to a one-year default probability that is much too large. In summary, to avoid strange limiting
behaviour as the time period decreases we must reduce the per-period default probabilities in a
coordinated way.

A common assumption that always achieves a stable balance when time-steps are reduced is that
the odds of default over a small time interval [Ti, Ti+1] are approximately proportional to the length
of the time interval. Let us denote the length of the time interval by Δ = Ti+1 – Ti. Then the hypothesis may be written:

Hi = Δ × λ(Ti), or λ(Ti) = Hi/Δ, (III.B.4.3)

where the proportionality factor is denoted by λ(Ti). Now, if we take the limit as the time interval gets smaller and smaller (i.e., as Δ → 0), we leave λ(Ti) unchanged. But the odds Hi of default get smaller, too. λ(T) is the default probability per unit of time, evaluated at the gridpoint which corresponds to time T. This quantity is known as the default rate, the default intensity or the default hazard rate at time T; it is the ‘instantaneous’ probability of default in a continuous-time setting.

Suppose that, somehow, we are able to specify a default hazard rate function λ(t) for all t ≥ 0. Armed with this function we can calculate survival and default probabilities from 0 until a given time horizon T. If we then let the interval length Δ go to zero, it can be shown that eventually the survival probability will be

P(0, T) = exp{ –∫₀ᵀ λ(t) dt }, (III.B.4.4)

and the probability of a default between time 0 and time T will thus be 1 – P(0, T).


Conversely, suppose we are given a term structure of survival probabilities, that is to say, a set of
probabilities P(0, T) for different time horizons T. Then we could compute the corresponding default intensity function by differentiation. In fact:

λ(T) = –(∂/∂T) ln P(0, T). (III.B.4.5)

So, there is a correspondence between the term structure of survival probabilities (or default
probabilities) and the default hazard rate function. We may, for instance, try to estimate a term
structure of default probabilities using one of the methods described later in the chapter.
Whatever the shape of this term structure – and market-implied term structures can have quite
strange shapes – it is usually possible to find a default hazard rate function that reproduces this
shape.2

An important special case arises when the default hazard rate is constant, an assumption that is at
the foundations of the CreditRisk+ model (see Chapter III.B.5). In this case, let us write the
hazard rate as λ0. Then by equation (III.B.4.4) the survival probabilities are just:

P(0, T) = exp{ –λ0T }. (III.B.4.6)

This gives a simple and effective way to interpolate default probabilities between different dates.

For example, suppose we choose λ0 = 10.53%. Since by (III.B.4.6), λ0 = –ln(P(0, 1)), the one-year survival probability is P(0, 1) = exp{ –λ0 } = 90%, so the one-year default probability is 10%. Also, equation (III.B.4.6) gives the six-month default probability as 5.13%. Similarly, over one month the default probability will be 0.87% and over one day it will be 0.029%.
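The example can be reproduced with a few lines of Python; this is a minimal sketch under the constant-hazard-rate assumption of (III.B.4.6):

    import math

    one_year_default_prob = 0.10
    lam = -math.log(1.0 - one_year_default_prob)   # constant hazard rate, about 10.5%

    def default_probability(horizon_in_years, hazard_rate=lam):
        """1 - P(0, T) with P(0, T) = exp(-hazard_rate * T), cf. (III.B.4.6)."""
        return 1.0 - math.exp(-hazard_rate * horizon_in_years)

    for label, horizon in [("1 year", 1.0), ("6 months", 0.5),
                           ("1 month", 1.0 / 12.0), ("1 day", 1.0 / 365.0)]:
        print(f"{label:>8}: {default_probability(horizon):.3%}")
    # prints roughly 10.000%, 5.132%, 0.874% and 0.029%, in line with the text
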

III.B.4.2 Credit Ratings


The goal of any credit rating system is the accurate assessment of the credit risk of the obligor.
Credit ratings are one of the most important tools to assess the likelihood that an obligor defaults
and their use is actively encouraged by the new capital adequacy rules for credit risk proposed by
the Basel Committee on Banking Supervision. The aim of a credit rating procedure is the
accurate classification of obligors according to their credit quality, usually by specifying an
estimate of the obligor’s default probability or by giving them a ‘letter rating’.

2 The only exceptions are term structures of survival probabilities which have jumps or which drop to zero.


Large public rating agencies such as Standard and Poor’s, Moody’s and Fitch classify obligors
into a number of rating classes.3 For example, Standard and Poor’s use the classes AAA, AA, A,
BBB, BB, B, CCC, D, with the interpretation that ‘lower’ classes (such as CCC) carry a higher risk
of default over a given time horizon than higher classes (AAA or AA).

A classification into a finite set of classes is not necessarily the only output of a credit rating
system. Many other systems directly produce estimates of the default probability of an obligor
which can vary on a continuous scale, essentially from 0% to 100%. Such systems are typically
based upon a quantitative and statistical model of the default risk of the obligor. Examples are
KMV’s ‘expected default frequencies’ (see Chapter III.B.5) or Kamakura’s default probabilities;
most proprietary ‘internal’ ratings models also fall into this class.

III.B.4.2.1 Measuring Rating Accuracy


There are many different approaches to the problem of determining a credit rating, and it is
important to have a methodology to judge the accuracy of the output of the rating procedure.
Unfortunately, a credit rating can be correct but still unlucky (it correctly classifies obligors, but
some ‘good ones’ default and the ‘bad ones’ do not). Conversely, it can be incorrect, but lucky.
Distinguishing between these two possibilities can be difficult.

When classifying rating systems one might be tempted to say that the best rating system is the
one that produces the ‘true’ default probability. Unfortunately, the concept of the ‘true’ default
probability is very elusive: it depends on the information that is available, and it has no direct
relation to observable quantities.

Imagine we had access to an ideal rating system, let us call it ‘Crystal Ball’. Crystal Ball only needs
two classes to accurately classify all obligors: D and S. An obligor is classified as D if it will
default, and as S if it will survive, and our Crystal Ball can do this without error. Having Crystal
Ball, we no longer need default probabilities. There are only two trivial values for default
probabilities: 0% (for the S class) and 100% (for the D class). Essentially, these are the only ‘true’
default probabilities. In reality we do not have a Crystal Ball, so we specify default probabilities

3 Moody’s bought KMV and formed Moody’s KMV in 2003. Today, Moody’s KMV provides both
classical Moody’s ‘letter’ ratings and the ‘expected default frequency’ (EDF) ratings of KMV. As these two
rating systems are fundamentally different, we will refer to Moody’s ratings when the letter ratings of
Moody’s KMV are meant, and we will refer to KMV ratings for the EDF equity-based default risk
measures calculated according to the KMV methodology.


and/or intermediate rating classes to measure the degree of our confidence about the fact that
the obligor truly belongs to class S or class D.

A common method to compare different rating models is to focus on their ability to accurately
rank obligors according to their credit quality. This method ignores any possible concrete values
that are given for the default probabilities, which can be an advantage if the system does not
directly give us such probabilities but only qualitative rankings like the letter ratings of the public
rating agencies.

Figure III.B.4.3 shows a cumulative accuracy plot (CAP) with which the accuracy of the risk ranking
of rating models can be compared. The plot is constructed as follows. First, the obligors are
ranked according to the default risk that the model assigns to them, starting with those with the
highest default risk. The rank numbers will form the x-axis of the plot. The CAP now shows, for
each number/percentile x, what percentage y% of the actually defaulted obligors can be found
among the x% worst obligors according to the model’s ranking.

Figure III.B.4.3: A cumulative accuracy plot


For example, the marked point of the blue CAP is found as follows. The basic data set contained
100 obligors, of which 10 defaulted during the next year. We now use our credit model to classify
these 100 obligors based upon the data available at the beginning of the year and rank the
obligors in order of decreasing credit risk (according to the model’s prediction). Next to that, we
also keep track of the actual default/survival behaviour of the obligors. The marked value in the
graph is at an x-value of 19, which means that we must consider the 19 ‘worst’ obligors according
to the model. Out of these 19 obligors, the 5th, 6th, 7th, 13th and 16th actually did default, so
that we have captured 50% (5 out of 10) of the actual defaulters in these 19 worst obligors. Thus,
the y-value at x = 19 is y = 50%. The same calculation is done for every x between 0 and 100.
Clearly, any CAP plot must start at 0% (an empty set of zero obligors cannot contain any
defaulters) and it must end at 100% (if you have all obligors, you also have all defaulters).

The CAP of the Crystal Ball model is the yellow line, and shows the best possible default
prediction accuracy. The ten worst obligors of the Crystal Ball model are also the ten obligors
that default in reality. Thus, the accuracy profile shows a very steep increase up to 100%
explained (correctly classified) obligors already at the 10th ranked obligor. Beyond that, the CAP
for the Crystal Ball model remains flat because there is nothing more to explain.

The other extreme is a ‘random’ model in which the obligors are ranked completely at random,
without any reference to their actual credit quality. With such an approach, we can expect to find
50% of the actual defaulters in the 50% (randomly chosen) ‘worst’ obligors, 10% of the actual
defaulters in the 10% ‘worst’, etc., because any randomly chosen subset of x obligors will on
average contain 10x/100 defaulters. Thus, the proportion of actual defaulters should be
proportional to the proportion of the number of obligors selected, and this purely random credit
risk ranking produces the diagonal line shown in pink in Figure III.B.4.3.

A good credit risk model will exhibit a CAP profile that is as close to the yellow line of the
perfect model and as far away from the pink line of the purely random model as possible. It will
concentrate the defaulters at the high risk scores at the beginning of the ranking, so that the slope
of its CAP will be steep initially. The closer the CAP is to the yellow line, the better it is. The dark
blue line in Figure III.B.4.3 shows the cumulative accuracy profile of an average default
prediction model. If we compare two different rating models and the CAP of one model is
consistently above the CAP of the other, then this is a sign of superior classification ability for
the first model and it will be considered the better model.

A CAP gives a very good visual impression of the predictive performance of a credit risk model.
Nevertheless, it is often useful to reduce the CAP to a single number, the accuracy ratio (AR). The


AR is the ratio of the area between the CAP of the credit risk model under consideration and the
random model’s CAP (the diagonal), to the area between the perfect prediction model’s CAP
(Crystal Ball) and the diagonal. (Here the x-axis is now also a percentage scale.)

The area between the ideal model and the random model is easily found to be (1 – D)/2, where
D is the fraction of defaulted obligors. Thus the accuracy ratio of any given model with CAP
function CAP(x) is:
AR = ( ∫₀¹ CAP(x) dx – ½ ) / ( ½ (1 – D) ). (III.B.4.7)

An accuracy ratio close to 1 means that the model is almost as good as the ‘ideal’ model, while an
accuracy ratio of 0 means that it is as bad as a purely random classification. A negative accuracy
ratio is possible: it means that the model does an even worse job than a purely random credit
ranking and that the model’s ranking should be inverted.
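As an illustration, the sketch below computes the accuracy ratio from a set of model risk scores and realised default indicators, using a discrete trapezoidal approximation of the integral in (III.B.4.7); the scores and default flags are purely illustrative toy data.

    import numpy as np

    def accuracy_ratio(scores, defaulted):
        """Accuracy ratio from model risk scores (higher score = riskier) and
        realised default indicators (1 = defaulted), using a discrete
        trapezoidal approximation of the integral in (III.B.4.7)."""
        scores = np.asarray(scores, dtype=float)
        defaulted = np.asarray(defaulted, dtype=float)
        order = np.argsort(-scores)                      # riskiest obligors first
        hits = np.cumsum(defaulted[order])               # defaulters captured so far
        cap = np.concatenate(([0.0], hits / defaulted.sum()))
        x = np.concatenate(([0.0], np.arange(1, len(scores) + 1) / len(scores)))
        area_under_cap = np.sum(0.5 * (cap[1:] + cap[:-1]) * np.diff(x))
        d = defaulted.mean()                             # fraction of defaulted obligors
        return (area_under_cap - 0.5) / (0.5 * (1.0 - d))

    # toy example: 10 obligors, 2 defaults, a ranking that is good but not perfect
    scores    = [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
    defaulted = [1, 0, 1, 0, 0, 0, 0, 0, 0, 0]
    print(accuracy_ratio(scores, defaulted))             # 0.875 for this toy ranking
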

The CAP and the AR only measure the model’s ability to rank obligors, they say nothing about
the model’s ability to give correct values for the probability of default. This is an advantage when
we want to compare models that do not necessarily give us numerical probabilities of default at
all (like the ‘letter’ ratings of public rating agencies) or if we want to use the model as a support
tool for ‘yes/no’ loan decisions.

Yet, theoretically, a model which gives completely absurd values for the default probabilities may
still have an almost perfect-looking CAP. As an extreme example, consider a model that assigns a
default probability of 50.1% to the first ten obligors (the ones who do default in the end) and a
default probability of 49.9% to the other 90 obligors. This model correctly ranks the obligors,
thus it has a perfect CAP, but the assignment of almost 50% default probability to all obligors is
strongly invalidated by the actual experience of only 10 defaults: 10 or fewer defaults out of 100
obligors would have a probability of essentially zero (≈ 10⁻¹⁷) if the individual default
probabilities were indeed at 50%.4 Despite this extreme example, in practical applications we
may expect a model that does a good job in ranking the obligors also to provide good estimates
for the default probabilities. We simply have to check that aspect of the model separately.

4This is the value if defaults are independent. But even with reasonable default correlation, we will be able to strongly
reject the hypothesis that the default probabilities equal 50%.


III.B.4.3 Agency Ratings


III.B.4.3.1 Methodology
The core business of public rating agencies such as Standard and Poor’s, Moody’s and Fitch is
the analysis of issuers of debt instruments, in terms of their credit quality and their ability to pay
their investors. This analysis is summarised in a rating classification into one of several rating
classes: Aaa to C for Moody’s, and AAA to C for Standard and Poor’s and Fitch. Agency rating
classifications are publicly available and are an important factor driving the decisions of many
potential investors in these bonds.

Issuing a bond (instead of a loan) has the advantage that the issuer is able to borrow many small
amounts from a large number of investors simultaneously. These investors will also be able to
trade the bond on a secondary market. Unfortunately, a thorough analysis of an issuer’s credit
quality is usually too expensive compared to the small investment amount of most individual
investors. This might prevent them from investing in the bond in the first place, and the issuance
of the bond may fail. The secondary market for the bond will also suffer from the same problem.
But if the credit research is carried out by an independent and trustworthy agency which acts for
all bond investors and which makes the findings of the research public, a duplication of research
effort is avoided. Investors can invest many small amounts in many different bond issues and
thus diversify their risk without spending unfeasible amounts on research. Hence a recognised
public credit rating is almost indispensable for issuance in modern bond markets. Indeed, the
cost of the rating is usually paid for by the issuer of the bond.

Because of their crucial position as one of the most important sources of information to
investors, rating agencies often gain access to information that may be unavailable to ordinary
investors. This information includes such things as direct interviews with the issuing firm’s
management, internal planning, research and budgeting numbers, and the accounting data of the
rated firm. These data are summarised by a credit analyst who then – possibly with the aid of
proprietary statistical models – assigns a rating to the issuer and the bond issue. Naturally, rating
agencies are rather secretive about the precise methodology used and the weightings of the
factors that influence a rating. There is a portion of human judgement involved, but a surprisingly
large fraction of the credit ratings can be reproduced using purely statistical models.

Public agency ratings are revised and updated at regular intervals to reflect changes in the credit
quality of the rated obligor. Here, rating agencies try to strike a balance between rating stability
and rating accuracy. As the rating is meant to represent a long-term view, the effects of general
business cycle variations should average out. This ‘rating through the cycle’ policy is not


uncontroversial because it can lead to delays in rating adjustments. It will also cause empirical
difficulties when we try to back out estimates for default probabilities from agency ratings.

The Standard and Poor’s ratings from AA to CCC shown in Table III.B.4.1 may be modified by
the addition of a plus or minus sign to show relative standing within the major rating categories.
Obligors rated BB, B, CCC, and CC are regarded as speculative-grade investments, obligors rated
BBB and better are regarded as investment-grade.

Besides the classical, ‘fundamental’ ratings, quantitative credit rating providers have more recently
entered the market using proprietary quantitative statistical models to produce an output which
can be more directly interpreted as a ‘probability of default’. Examples are KMV’s expected default
frequencies and Kamakura’s default probability estimates. These ratings typically are ‘point-in-time’
ratings that do not attempt to smooth out business cycle effects.


Table III.B.4.1 Standard and Poor’s long-term issuer credit ratings definitions

• An obligor rated AAA has extremely strong capacity to meet its financial commitments. AAA
is the highest issuer credit rating assigned by Standard and Poor’s.
• An obligor rated AA has very strong capacity to meet its financial commitments. It differs
from the highest-rated obligors only in small degree.
• An obligor rated A has strong capacity to meet its financial commitments but is somewhat
more susceptible to the adverse effects of changes in circumstances and economic
conditions than obligors in higher-rated categories.
• An obligor rated BBB has adequate capacity to meet its financial commitments. However,
adverse economic conditions or changing circumstances are more likely to lead to a
weakened capacity of the obligor to meet its financial commitments.
• An obligor rated BB is less vulnerable in the near term than other lower-rated obligors.
However, it faces major ongoing uncertainties and exposure to adverse business, financial,
or economic conditions which could lead to the obligor’s inadequate capacity to meet its
financial commitments.
• An obligor rated B is more vulnerable than the obligors rated BB, but the obligor currently
has the capacity to meet its financial commitments. Adverse business, financial, or
economic conditions will likely impair the obligor’s capacity or willingness to meet its
financial commitments.
• An obligor rated CCC is currently vulnerable, and is dependent upon favourable business,
financial, and economic conditions to meet its financial commitments.
• An obligor rated CC is currently highly vulnerable.

Source: Standard & Poor’s.

III.B.4.3.2 Transition Matrices, Default Probabilities and Credit Migration


We have seen that public rating agencies only use ‘letter’ ratings in order to classify obligors
according to their credit quality. This may be adequate to make relative comparisons and maybe
intuitive buy/hold/sell investment decisions, but in order to be able to use these ratings in a
quantitative risk management system we need to map the letter ratings to numbers, to default
probabilities. Essentially, we face the problem of backing out what the rating actually means in
terms of default probability; the verbal definitions as they are given in Table III.B.4.1 are not
sufficient for this.


Table III.B.4.2: Standard and Poor’s one-year average rating transition frequencies,5
1981–1991 (percentages)

AAA AA A BBB BB B CCC D


AAA 89.10 9.63 0.78 0.19 0.30 0 0 0
AA 0.86 90.10 7.47 0.99 0.29 0.29 0 0
A 0.09 2.91 88.94 6.49 1.01 0.45 0 0.09
BBB 0.06 0.43 6.56 84.27 6.44 1.60 0.18 0.45
BB 0.04 0.22 0.79 7.19 77.64 10.43 1.27 2.41
B 0 0.19 0.31 0.66 5.17 82.46 4.35 6.85
CCC 0 0 1.16 1.16 2.03 7.54 64.93 23.19
D 0 0 0 0 0 0 0 100

Rating agencies may be secretive about their methodology, but fortunately they publish a lot of
historical data about both rating transitions and defaults of rated obligors. A typical summary of
such data published by Standard & Poor’s is the transition matrix shown in Table III.B.4.2; similar
transition matrices are also regularly published by other agencies (see, for example, Hamilton et
al., 2002). In Table III.B.4.2 we have eliminated the ‘not rated’ class and added the D rating class
for defaulted obligors in the last row, using the assumption that a defaulted obligor remains in
the default class with 100% probability.

Each row of a transition matrix gives the historical rating transition frequencies for the obligors
of the corresponding rating class. For example, according to the BBB row of Table III.B.4.2, on
average 0.06% of all BBB-rated obligors were upgraded to AAA in the course of one year, 0.43%
were upgraded to AA, 6.56% were upgraded to A, 84.27% did not change their rating, 6.44%
were downgraded to BB, 1.60% were downgraded to B, 0.18% were downgraded to CCC, and
0.45% defaulted.

A first glance at Table III.B.4.2 already confirms some stylised facts about rating transitions.
First, default frequencies decrease with increasing rating classes, thus the ratings do indeed reflect
an ordering according to default probability. Second, the frequencies of unchanged ratings (i.e.
the values on the diagonal) are very high; this is an indicator of the ‘rating stability’ aimed for by
the agencies.

5 The ‘no rating’ category has been eliminated.


As they stand, the numbers given in Table III.B.4.2 are not probabilities, they are the realised
frequencies of the transitions over the observation period. Thus, a zero entry (as in the AA→D cell)
does not mean that AA-rated obligors have a zero probability of default, it only means that no
AA-rated obligor defaulted in the years 1981–1991. Generally, small transition frequencies are
usually based upon a very small number of observations, so that we have to expect
inconsistencies like zero probabilities. Also, realised frequencies can be non-monotonic. For
instance, in Table III.B.4.2 the CCC-rated obligors had a higher upgrade frequency to A than BB-
rated obligors: 1.16% for CCC-rated obligors as opposed to only 0.79% for BB-rated obligors! Clearly,
transition frequencies can only be considered as noisy estimates of the true transition
probabilities.

In order to use the information given in the transition matrix to calculate transition probabilities
we represent the transition table as a matrix P = (Pij), 1 ≤ i, j ≤ k, where the entry Pij in the ith

row and the jth column represents the transition probability from class i to class j over the time
interval, and k is the total number of rating classes (including default). Having added the default
class D row ensures that the transition probability matrix is actually a square matrix. Now, to
calculate transition probabilities, we need the following two assumptions:
• Time-invariance. The probability of a transition from rating class i at time t to rating class j at
time T > t does not depend on the calendar dates t and T but only depends on time via
the length T – t of the time interval.
• Markov property. Besides the length of the time interval, the probability of a transition from
rating class i to rating class j only depends on the rating class that we come from (class i )
and the rating class that we go to (class j ), and on no other external variables.

The assumption of time-invariance is not an uncommon assumption in statistical analysis.


Essentially it says that the future will be like the past. Although common, it is very restrictive as it
rules out phenomena such as business cycle effects. Empirically, downgrades are very much more
likely in recessions than in boom phases. The second assumption, the Markov property, is also
restrictive. It essentially rules out the use of any information beyond the current rating class – all
we need to know in order to determine future transition probabilities is the current rating class.
Besides disallowing other explanatory variables beyond the rating itself, the Markov property also
implies that the history of the obligor’s rating is irrelevant as long as the current rating is known.
This is in contradiction to empirically observed rating momentum. Recently downgraded obligors
are much more likely to be downgraded again than other obligors of the same rating which have
already been in that rating class for a long time. However, the Markov property implies an
absence of rating momentum.


Both assumptions thus contradict empirical observation. In practice, the inaccuracies of these
assumptions have to be weighed against the significant simplifications that they allow: under
time-invariance and the Markov property we can speak of ‘the’ one-year transition probability
matrix P from which we can calculate the two-year and longer-horizon transition probability
matrices. For example, the two-period transition probability matrix is reached as follows.

The probability of going from rating class A to rating class BB over two periods equals the sum
of the following probabilities:

• p(A → AAA) p(AAA → BB), the probability of going from A to AAA in the first year times the probability of going from AAA to BB in the second year,

• + p(A → AA) p(AA → BB), the probability of going from A to AA in the first year times the probability of going from AA to BB in the second year,

• + …

• + p(A → D) p(D → BB), the probability of going from A to D in the first year times the probability of going from D to BB in the second year.

Essentially, we need to sum over all possible paths that the rating can take from A at t = 0 to BB
at t = 2.

Mathematically, pij(2) = Σn=1,…,k pin pnj, and the two-year transition matrix is therefore given by

P(2) = P × P,
that is, the matrix product of the one-year transition matrix with itself. Note that we need time-
invariance so that the transition probabilities in the second year are the same as the ones in the
first year, and the Markov property is used when we calculate the transition probabilities for the
second year (i.e. we just multiply the transition probabilities, irrespective of whether we multiply
two downgrade probabilities, or a downgrade and an upgrade probability). Similarly, the
transition matrix over n years is obtained from the one-year transition matrix by multiplication n
times with itself:

P(n) = Pⁿ. (III.B.4.8)

In summary, assuming time-invariance and the Markov property means that the one-year
transition data given by the rating agencies can be used to calculate the transition probabilities
(and default probabilities) for all time horizons, by a simple matrix multiplication.
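As an illustration, the sketch below applies (III.B.4.8) to the one-year matrix of Table III.B.4.2. The published frequencies are taken at face value; since the rows do not always sum exactly to 100% because of rounding, one might renormalise them first.

    import numpy as np

    # One-year transition matrix from Table III.B.4.2 (rows and columns ordered
    # AAA, AA, A, BBB, BB, B, CCC, D; entries converted from percentages).
    labels = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC", "D"]
    P = np.array([
        [89.10,  9.63,  0.78,  0.19,  0.30,  0.00,  0.00,   0.00],
        [ 0.86, 90.10,  7.47,  0.99,  0.29,  0.29,  0.00,   0.00],
        [ 0.09,  2.91, 88.94,  6.49,  1.01,  0.45,  0.00,   0.09],
        [ 0.06,  0.43,  6.56, 84.27,  6.44,  1.60,  0.18,   0.45],
        [ 0.04,  0.22,  0.79,  7.19, 77.64, 10.43,  1.27,   2.41],
        [ 0.00,  0.19,  0.31,  0.66,  5.17, 82.46,  4.35,   6.85],
        [ 0.00,  0.00,  1.16,  1.16,  2.03,  7.54, 64.93,  23.19],
        [ 0.00,  0.00,  0.00,  0.00,  0.00,  0.00,  0.00, 100.00],
    ]) / 100.0

    # n-year matrices under time-invariance and the Markov property, cf. (III.B.4.8)
    P2 = np.linalg.matrix_power(P, 2)
    P5 = np.linalg.matrix_power(P, 5)

    # e.g. the cumulative default probabilities of a BBB obligor at the
    # two-year and five-year horizons
    bbb, d = labels.index("BBB"), labels.index("D")
    print(P2[bbb, d], P5[bbb, d])
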


If transition probabilities are sought for shorter time horizons than the one-year horizon given by
the original transition matrix P, one may try to find a short-term (say, monthly) transition matrix
A that is consistent with P, or in other words a 1/n-period transition matrix such that Aⁿ = P. The problem of finding such a short-term transition matrix usually does not
possess an exact solution, but for most transition matrices found in practice highly accurate
approximate solutions can be found numerically.
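There is no unique recipe for this; one common numerical route, sketched below under the assumption that the matrix logarithm of P is available, is to scale log P by 1/12, exponentiate, and then repair any small negative entries. The function name and the clipping step are illustrative choices, not a prescription from the text.

```python
import numpy as np
from scipy.linalg import expm, logm

def approximate_monthly_matrix(P):
    """Approximate a monthly matrix A with A**12 close to the one-year matrix P.

    One possible numerical scheme: scale the matrix logarithm of P by 1/12 and
    exponentiate.  The raw result may contain small negative entries, which are
    clipped and the rows renormalised so that A is a valid transition matrix.
    """
    A = expm(logm(P) / 12.0).real
    A = np.clip(A, 0.0, None)
    return A / A.sum(axis=1, keepdims=True)

# Toy two-state example (same illustrative matrix as before)
P = np.array([[0.97, 0.03], [0.00, 1.00]])
A = approximate_monthly_matrix(P)
print(np.round(np.linalg.matrix_power(A, 12), 6))   # should be close to P
```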

III.B.4.4 Credit Scoring and Internal Rating Models


Because the large public credit rating agencies have traditionally concentrated on larger, US-based
corporate bond issuers, the set of obligors covered by public credit rating agencies is only a
fraction (albeit an important one) of all obligors that make up a bank’s credit portfolio. Important
missing categories of obligors are small and medium-sized businesses, many larger European,
Japanese and Asian obligors, and of course the whole retail portfolio.

In order to assess the credit risk of such obligors, various statistical methods have been
developed. Generally, when it comes to statistical models of default prediction, the choice of the
inputs seems to be more important than the choice of the particular methodology, although both are
frequently intertwined. Important variables that have been found to drive the default behaviour
of obligors include:
• balance-sheet data capturing the indebtedness of the obligor;
• profits and free cash flows capturing the ability to pay;
• the riskiness (volatility) of the business;
• (if available) market data, such as the firm’s market capitalisation;
• macro-economic data capturing the effects of the business environment on the obligor.

III.B.4.4.1 Credit Scoring

One of the earliest published credit scoring models goes back to Altman (1968), and it was further
developed in Altman et al. (1977). Credit scoring models usually rely on accounting ratios like the
ones listed in Table III.B.4.3.


Table III.B.4.3: Key accounting ratios


Some key accounting ratios, and the averages of these ratios over the firms in the
data set used by Altman (1968). ‘Bankrupt’ firms defaulted within one year.
Accounting Ratio                                     Symbol   Bankrupt   Non-bankrupt
Working Capital / Total Assets                       X1       –6.10%     41.40%
Retained Earnings / Total Assets                     X2       –62.10%    35.50%
Earnings before Interest and Taxes / Total Assets    X3       –31.80%    15.30%
Market Capitalisation / Debt                         X4       40.10%     247.70%
Sales / Total Assets                                 X5       150%       190%

Altman proposes to calculate for each obligor the score function

$Z^{(i)} = 1.2 \times X_1^{(i)} + 1.4 \times X_2^{(i)} + 3.3 \times X_3^{(i)} + 0.6 \times X_4^{(i)} + 1.0 \times X_5^{(i)},$   (III.B.4.9)

the value of which is the so-called Z-score of the ith obligor. The variables $X_1^{(i)}, \ldots, X_5^{(i)}$ are the
values of the accounting ratios for the ith obligor. Table III.B.4.3 shows the definitions of these
ratios and their averages in Altman's data set. Altman proposes to use the Z-score to make loan
decisions: if the score function has a value less than the cutoff score of 1.8, then the obligor is
likely to default and the loan is to be denied. The score can also be used to rank obligors: a higher
score is a sign of better credit quality.
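As an illustration, the Z-score of (III.B.4.9) applied to the group averages of Table III.B.4.3 (a short Python sketch; the ratios are entered as decimals):

```python
def altman_z_score(x1, x2, x3, x4, x5):
    """Altman (1968) Z-score, equation (III.B.4.9).

    x1..x5 are the accounting ratios of Table III.B.4.3, expressed as decimals
    (e.g. working capital / total assets = -6.10%  ->  -0.061).
    """
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

CUTOFF = 1.8  # obligors scoring below the cutoff are classified as likely defaulters

# Average ratios of the 'bankrupt' and 'non-bankrupt' groups from Table III.B.4.3
z_bankrupt = altman_z_score(-0.061, -0.621, -0.318, 0.401, 1.50)
z_healthy  = altman_z_score( 0.414,  0.355,  0.153, 2.477, 1.90)

print(f"average bankrupt firm:     Z = {z_bankrupt:.2f} -> {'deny' if z_bankrupt < CUTOFF else 'grant'} loan")
print(f"average non-bankrupt firm: Z = {z_healthy:.2f} -> {'deny' if z_healthy < CUTOFF else 'grant'} loan")
```

With these averages the 'bankrupt' group scores well below the 1.8 cutoff and the 'non-bankrupt' group well above it, which is exactly the separation the score weights were estimated to produce.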

The variables that are used to explain the default behaviour of the obligor are similar for most
internal credit scoring models. Indeed, accounting ratios are typically included to measure such
things as:

• indebtedness (for instance, using the ratio of market capitalisation to debt),

• cash flow available for debt service (for instance, via earnings before interest and taxes),

• profitability (for instance, using the retained earnings to total asset ratio),

as well as numbers to measure earnings stability and the debt service load.

The scoring weights (1.2, 1.4, 3.3, 0.6, 1.0) and the cutoff level of 1.8 in (III.B.4.9) were estimated
using a statistical method. The ‘training set’ lists the values of each of the variables Xi for a set of
obligors and, of course, the information whether the obligors eventually defaulted or whether
they survived. When choosing the training set it is important to take care that it is diverse, that it
is representative of the real world, and that it contains a sufficient number of defaulted obligors. In
particular, one has to be very careful if only internal data of a bank is used to estimate a scoring


model because this data set will only contain obligors who have already been pre-selected
according to the existing lending criteria. If, for example, the existing criteria are very strict
regarding earnings before interest and taxes (EBIT), then the internal data will contain very few
obligors with bad EBIT numbers and the scoring model may mistakenly conclude that EBIT was
not relevant to default prediction.

Figure III.B.4.4: An example of credit scoring


Defaulted obligors are shown as red points, obligors which survived as blue points.
The green line shows the points where the score is exactly at the cutoff level.

The scoring weights and the cutoff level in (III.B.4.9) are chosen to maximise the number of
correctly classified obligors. Figure III.B.4.4 shows the principle for the case with only two
accounting ratios. The red cloud of points represents the defaulted obligors in the training set,
the blue cloud the surviving obligors. It is not surprising that we find more and more survivors
the further we move in the 'northeast' direction of higher earnings and lower indebtedness. For a
given cutoff level, the score weights define a line in this graph which consists of the points whose
score is exactly equal to the cutoff score. In Figure III.B.4.4, the green line shows this line for the
optimal cutoff. Any points above that line have a higher score than the cutoff (and thus will be
accepted); any points below it have a lower score (and will be rejected).

III.B.4.4.2 Estimation of the Probability of Default


The two most common models for the estimation of default probabilities are logit and probit
models. In the probit model, it is assumed that the default probability of obligor i can be written
as:


$p_i = \Phi\left(\beta_0 + \beta_1 \times X_1^{(i)} + \dots + \beta_N \times X_N^{(i)}\right)$   (III.B.4.10)

where $\Phi(\cdot)$ is the cumulative normal distribution function, the $\beta_n$ are parameters that are estimated
statistically, and the $X_n^{(i)}$ are given accounting ratios and other explanatory variables for obligor i,
with n = 1, …, N.

The structure of equation (III.B.4.10) is very similar to a scoring model like (III.B.4.9).
Essentially, we are mapping from the Z-scores to default probabilities by using the cumulative
normal distribution function. However, note that the scores have a different interpretation in the
two models:

• In the probit model (III.B.4.10), high scores are mapped to high default probabilities.

• In a scoring model such as (III.B.4.9), high scores are mapped to low default risk.

Thus high scores are 'good' in the scoring model, but 'bad' in the probit model. The estimated
parameters of the two models are therefore quite different, even when the same explanatory
variables are used in the two models.

One advantage of probit models is that we can tell a story about how defaults occur in this set-
up. Let us define the credit index for the ith obligor as

$-\beta_0 - \beta_1 \times X_1^{(i)} - \dots - \beta_N \times X_N^{(i)} + \varepsilon^{(i)},$   (III.B.4.11)

where $\varepsilon^{(i)}$ is a standard normally distributed noise component. If we assume that obligor i
defaults if his credit index drops below zero, then the default probability of obligor i is exactly
equal to the $p_i$ given by the probit model. Thus, there is a natural link from probit models to
credit portfolio models like CreditMetrics (see Chapter III.B.5).
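The following sketch makes this link concrete: the β coefficients and accounting ratios are made up purely for illustration, the probit probability of (III.B.4.10) is computed with the normal distribution function, and a Monte Carlo simulation of the credit index confirms that 'default whenever the index falls below zero' reproduces that probability.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical probit coefficients and explanatory variables for one obligor
beta = np.array([-2.0, -1.5, 3.0])    # beta_0, beta_1, beta_2 (illustrative only)
x    = np.array([ 1.0,  0.25, 0.10])  # 1 (intercept), X_1, X_2

score = beta @ x                       # beta_0 + beta_1*X_1 + beta_2*X_2
pd_probit = norm.cdf(score)            # equation (III.B.4.10)

# Monte Carlo check via the credit index of (III.B.4.11):
# index = -score + eps, with eps ~ N(0,1); default occurs if index < 0
rng = np.random.default_rng(seed=42)
eps = rng.standard_normal(1_000_000)
default = (-score + eps) < 0.0

print(f"probit PD:              {pd_probit:.4%}")
print(f"simulated default rate: {default.mean():.4%}")
```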

The logit model differs from the probit model only in that it does not use the cumulative normal
distribution function to map from scores to default probabilities but uses a different function, the
logistic transformation. Here, the default probability of obligor i is set equal to

$p_i = \frac{1}{1 + \exp\left\{\beta_0 + \beta_1 \times X_1^{(i)} + \dots + \beta_N \times X_N^{(i)}\right\}}.$   (III.B.4.12)

The logit model is slightly easier to estimate than the probit, but generally results from logit and
probit models do not differ much.
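A corresponding sketch of the logistic mapping as written in (III.B.4.12), again with purely hypothetical coefficients; note that sign conventions for the linear index differ between presentations, so fitted β values need not resemble those of a probit fit.

```python
import numpy as np

def logit_pd(beta, x):
    """Default probability under the logit model of equation (III.B.4.12)."""
    index = float(np.dot(beta, x))     # beta_0 + beta_1*X_1 + ... + beta_N*X_N
    return 1.0 / (1.0 + np.exp(index))

# Hypothetical coefficients and explanatory variables (illustrative only)
beta = np.array([2.0, 1.5, -3.0])      # beta_0, beta_1, beta_2
x    = np.array([1.0, 0.25, 0.10])     # 1 (intercept), X_1, X_2

print(f"logit PD: {logit_pd(beta, x):.4%}")
```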


III.B.4.4.3 Other Methods to Determine the Probability of Default


Besides the rating models presented above, a number of other models are used to assess the
credit quality of individual obligors. The most important of these is the KMV model. This is
explained in detail in Section III.B.5.6.

At the risk of gross simplification, the KMV model can also be viewed as a scoring model in
which the 'accounting ratio' is the distance-to-default. This approach is special, however, because
the distance-to-default combines market information (the market capitalisation of the firm) with
volatility information, neither of which is usually found in classical accounting ratios. From a
purely practical point of view, the large historical database underlying the model is valuable, in
particular the empirical transformation that KMV uses to reach a default probability estimate for
a given distance-to-default.

Another class of potentially powerful models for credit classification is neural networks and
other expert systems from artificial intelligence research. Such models have generally been found
to be about as powerful as good statistical models (such as logit or probit models), but their
acceptance in practice has been hindered by their complex nature, which is essentially a 'black
box' to the credit officer.

III.B.4.5 Market-Implied Default Probabilities


The credit default tree presented in Figure III.B.4.2 is a perfectly adequate model to price simple
credit-sensitive instruments such as corporate bonds or credit default swaps (CDSs), provided
that the necessary conditional default probabilities are already specified. In this section we will
turn the model around. Instead of determining prices for a given set of parameters, we now determine
the parameters from the prices of a set of benchmark securities, the calibration securities. The default
probabilities that are obtained in this calibration exercise are called market-implied default probabilities.

This approach rests on several assumptions. First, we assume that there is information to be
found in the prices. The prices of the defaultable bonds/CDSs must be meaningful, that is to say,
liquid and not unduly affected by external factors beyond default risk (such as taxes). Also, the
peculiarities of different markets enable the participants to incorporate their information into the
prices to different degrees. For example, while it is easy to short credit risk in the CDS market, it
can be difficult to short defaultable bonds. Therefore, calibration securities should be taken from
markets with similar conditions.


Second, it is necessary that all calibration instruments are subject to the same type of credit risk,
that is, they reference the same obligor (and have the same default definition in the case of
CDSs), and they have the same recovery rate and seniority class in default. We also need to know
the value of the recovery rate (or at least its average).

Third, we use the risk-neutral pricing paradigm, that is, we price a security by taking the
discounted expected value of its payoffs. Thus, the probabilities that we will reach are pricing or
martingale measure probabilities. It is important to realise that these are different from historical
probabilities because they are loaded with risk premia. This issue will be discussed in the next
subsection.

Finally, we assume that movements of risk-free interest rates and defaults are independent. This
is mostly a technical assumption, which significantly simplifies the analysis. In normal situations
this correlation has only a second-order effect on the resulting default probabilities.

III.B.4.5.1 Pricing the Calibration Securities


As a running example in this section we consider the problem of calibrating a term structure of
default probabilities to the bond prices shown in Table III.B.4.4.

Table III.B.4.4: Calibration securities


Prices of five corporate bonds issued by Daimler Chrysler NA Holding. Trade date: 17 November 2003, effective
date 19 November 2003. Coupons are paid annually, currency is USD, notional is 100.
No. Dirty Price Coupon Maturity
1 105.46 4.5 03-01-2005
2 106 5.75 23-06-2005
3 105.27 4.62 10-03-2006
4 100.84 3.75 02-10-2006
5 109.46 5.62 16-01-2007

We can represent the prices of the calibration securities with two elementary types of
(hypothetical) building-block securities: defaultable zero-coupon bonds which only pay in
survival, and a recovery security which pays at the time of default. Specifically, we denote by
$\bar{B}(0,T)$ the value at time t = 0 of receiving $1 at time T in survival (and nothing if default occurs
before T), and by $E(0,T)$ the value at time t = 0 of receiving $1 at default, if the default occurs
before time T.


The value of a defaultable coupon bond with coupon payment dates Tk, k = 1, …, K, coupon
amount c and recovery rate R can now be written as6

$V^{\mathrm{Bond}} = c\,\bar{B}(0,T_1) + c\,\bar{B}(0,T_2) + \dots + (1+c)\,\bar{B}(0,T_K) + R \cdot E(0,T_K).$   (III.B.4.13)

Here, the first summation represents the value of the promised coupon payments and the final
repayment of principal, and the final term represents the value of the recovery received at default,
with R denoting the recovery rate.

For example, the first bond in Table III.B.4.4 has coupon payments of c = $2.25 at T1 = 3
January 2004 and T2 = 3 January 2005, and the principal repayment of $100 at T2. Furthermore,
we assume that the recovery rate is 40%; thus we will have an additional payoff of R = 40 at
default, if a default occurs before T2 .

Similarly, we can represent the prices of the other four bonds as discounted values of the
principal, coupons and recovery cash flows. For all bonds together, we have promised cash flows
on the following payment dates: 3 January 2005, 16 January 2005, 10 March 2005, 23 June 2005, 2
October 2005, 19 November 2005, and 3 January 2006. These are the maturities of the
defaultable zero-coupon bonds that we need to represent the cash flows of our calibration
securities.
Figure III.B.4.5: The term structure of risk-free interest rates
USD forward Libor rates on 17 November 2003.

[Chart: forward rates (vertical axis, 0–6%) plotted against maturity in years (horizontal axis, 0–10).]

6 We ignore day count conventions and other technical adjustments. Notional amounts are normalised to 1.


The value of a protection-buyer position in a CDS on the reference credit with CDS rate s and
the same payment dates Tk, k = 1, ..., K, is:

$V^{\mathrm{CDS}} = -s\left[\bar{B}(0,T_1) + \dots + \bar{B}(0,T_K)\right] + (1-R) \times E(0,T_K).$   (III.B.4.14)

Here, the first sum represents the value of the fee stream (a liability to the protection buyer), and
the last term represents the value of receiving 1 – R at default of the obligor. The market CDS
rate is chosen such that the value of the CDS position is zero:

$s = (1-R)\,\frac{E(0,T_K)}{\bar{B}(0,T_1) + \dots + \bar{B}(0,T_K)}.$   (III.B.4.15)

Having reduced the pricing problems to the problem of finding prices for $\bar{B}(0,T)$ and $E(0,T)$, we
now have to represent these prices in terms of the survival probabilities defined in Section
III.B.4.1. The price of a defaultable zero-coupon bond is easily seen to be

$\bar{B}(0,T) = B(0,T) \cdot P(0,T),$   (III.B.4.16)

where B(0,T) is the price of a default-free zero-coupon bond with maturity T, and P(0,T) the
survival probability until T. It can also be shown that:

$E(0,T_k) = p_1 B(0,T_1) + p_2 P(0,T_1) B(0,T_2) + \dots + p_k P(0,T_{k-1}) B(0,T_k).$   (III.B.4.17)

Here, $p_k P(0,T_{k-1})$ represents the probability of surviving until time $T_{k-1}$ (which is $P(0,T_{k-1})$),
and then defaulting in the period $[T_{k-1}, T_k]$ (which occurs with probability $p_k$). This takes us to the
default branch at time $T_k$. Then, after discounting with $B(0,T_k)$, we reach the formula above.
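As a sketch of how (III.B.4.16) and (III.B.4.17) are used, the following Python fragment builds both building blocks on an annual grid from a default-free discount curve and a constant default intensity; the flat 4% curve and the 1% intensity are illustrative inputs, not the data of the example below.

```python
import numpy as np

def building_blocks(times, df, lam):
    """Defaultable zeros B_bar(0,T_k) and the recovery security E(0,T_k).

    times : payment dates T_1 < ... < T_K (in years)
    df    : default-free discount factors B(0,T_k) at those dates
    lam   : constant default intensity, so that P(0,T) = exp(-lam*T)
    """
    times = np.asarray(times, dtype=float)
    df = np.asarray(df, dtype=float)
    surv = np.exp(-lam * times)                       # P(0,T_k)
    surv_prev = np.concatenate(([1.0], surv[:-1]))    # P(0,T_{k-1}), with P(0,T_0) = 1
    p = 1.0 - surv / surv_prev                        # default prob. within (T_{k-1},T_k]
    b_bar = df * surv                                 # equation (III.B.4.16)
    e = np.cumsum(p * surv_prev * df)                 # equation (III.B.4.17), running sum
    return b_bar, e

# Illustrative flat 4% discount curve, annual dates, 1% intensity
times = [1.0, 2.0, 3.0, 4.0, 5.0]
df = [np.exp(-0.04 * t) for t in times]
b_bar, e = building_blocks(times, df, lam=0.01)
print("B_bar(0,T_k):", np.round(b_bar, 4))
print("E(0,T_k):    ", np.round(e, 4))
```

A coupon bond can then be priced as in (III.B.4.13) by combining these two arrays with the coupon and the recovery rate.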

Example III.B.4.2
In the example given above, we use the USD forward Libor curve as the risk-free interest rates.
This yields the prices of the default-free zero-coupon bonds $B(0,T)$ at the payment dates of the
calibration bonds. In equations (III.B.4.7) and (III.B.4.8) we also need default and survival
probabilities for several different time horizons. Here we made the assumption that these
probabilities are given by a constant default-intensity model as in Section III.B.4.1, that is, that
$P(T) = \exp\{-\lambda_0 T\}$, where $\lambda_0$ is a parameter that we need to find. Finally, we assume a common
recovery rate of 40%.

Given all these assumptions, the only remaining degree of freedom that we can use to match
model prices to market prices is the default intensity $\lambda_0$. By using Excel's Solver routine we find
that a value of $\lambda_0$ = 1.0029% minimises the squared pricing errors of the bonds' model prices


relative to the market prices. The results are presented in Table III.B.4.5. The default payoffs row
shows the values of the potential recovery payoffs. The survival payoffs row shows the values of
the promised coupon and principal payments of the bonds. The sum of these two yields the
model price of the bond.

Table III.B.4.5: Model prices of the calibration securities


The model prices of the bonds of Table III.B.4.4 under the assumption of a constant default intensity
$\lambda_0$ = 1.0029%, a recovery rate of 40% and using the default-free interest rates shown in Figure III.B.4.5.

Coupon 4.5 5.75 4.62 3.75 5.62


Maturity 03-01-2005 23-06-2005 10-03-2006 02-10-2006 16-01-2007
Market Price 105.46 106 105.27 100.84 109.46
Calibration Model Prices
Default Payoff 0.4482 0.6289 0.8966 1.1020 1.2055
Survival Payoffs 105.0228 105.3767 104.5247 99.2955 108.5792
Model Price 105.4710 106.0056 105.4212 100.3975 109.7847
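The same calibration can be scripted instead of run through Excel's Solver. The sketch below shows the structure only: the flat 3% discount curve, the cash-flow dates and the market prices are placeholders rather than the actual inputs of Tables III.B.4.4 and III.B.4.5, so the fitted intensity will not reproduce the 1.0029% quoted above.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def bond_model_price(coupon_dates, coupons, principal_date, df, lam, R=0.40):
    """Model price of a defaultable coupon bond, as in (III.B.4.13),
    under a constant default intensity lam and recovery rate R.

    df(t) must return the default-free discount factor B(0,t); notional is 100.
    """
    surv = lambda t: np.exp(-lam * t)
    # survival payoffs: coupons plus principal, each weighted by B(0,t) * P(0,t)
    value = sum(c * df(t) * surv(t) for t, c in zip(coupon_dates, coupons))
    value += 100.0 * df(principal_date) * surv(principal_date)
    # recovery payoff: R * 100 * E(0,T), with E built on the coupon grid as in (III.B.4.17)
    prev_t, e = 0.0, 0.0
    for t in coupon_dates:
        e += (surv(prev_t) - surv(t)) * df(t)
        prev_t = t
    return value + R * 100.0 * e

# --- placeholder inputs (NOT the actual data of Tables III.B.4.4 / III.B.4.5) ---
df = lambda t: np.exp(-0.03 * t)          # hypothetical flat 3% discount curve
bonds = [                                  # (coupon dates, coupon, maturity, market price)
    ([1.0, 2.0],      4.5,  2.0, 101.5),
    ([1.0, 2.0, 3.0], 5.75, 3.0, 106.0),
]

def sum_sq_error(lam):
    err = 0.0
    for dates, cpn, mat, mkt in bonds:
        model = bond_model_price(dates, [cpn] * len(dates), mat, df, lam)
        err += (model - mkt) ** 2
    return err

res = minimize_scalar(sum_sq_error, bounds=(1e-6, 0.20), method="bounded")
print(f"fitted default intensity: {res.x:.4%}")
```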

We have introduced a variety of ways to represent default probabilities in Section III.B.4.1.


Similarly, there are several ways to represent the prices of defaultable securities. A particularly
simple representation can be reached for the fair CDS rate (III.B.4.15) if the ‘odds’ of default Hk
are used:
$s = (1-R) \times \left(w_1 H_1 + \dots + w_K H_K\right).$   (III.B.4.18)

That is, the CDS rate s equals the loss on default (1 – R) times a weighted average of the odds
of default $H_k$, where the weights of the average are

$w_k = \frac{\bar{B}(0,T_k)}{\bar{B}(0,T_1) + \dots + \bar{B}(0,T_K)}.$

Clearly, these weights are non-negative and sum to 1.

In the particularly simple case of constant odds of default (i.e. if all Hk take the same value H) we
have
$s = (1-R)H.$   (III.B.4.19)

Hence, if we equate the odds of default with the default hazard rate (a very accurate
approximation) we can say that the CDS rate equals the loss given default times the default hazard rate.


III.B.4.5.2 Calculating implied default probabilities


Backing out an implied default hazard rate from a single CDS quote is very straightforward, given

equation (III.B.4.19). If we observe a CDS spread ŝ in the market, then the corresponding
implied default hazard rate is reached by solving equation (III.B.4.19) for the hazard rate:

$\hat{H} = \hat{s}/(1-R).$

Example III.B.4.3
In the case of Daimler-Chrysler, the quote for a CDS with five years’ maturity on 17 November
2003 was 108.13bp. According to the formula given above, the implied default hazard rate is
1.8% (at an assumed recovery rate of 40%). It is not unusual that implied default intensities from
CDSs are higher than the corresponding implied default intensities from bond prices. This effect
is partly due to the fact that the embedded delivery option of a CDS makes the recovery rate with
a CDS smaller than the recovery rate of a bond, and partly caused by market imperfections in the
bond markets.
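This back-of-the-envelope calculation is easily reproduced; a short Python sketch using the numbers of the example:

```python
def implied_hazard_rate(cds_spread, recovery):
    """Implied default hazard rate from a single CDS quote, H = s / (1 - R)."""
    return cds_spread / (1.0 - recovery)

# Daimler-Chrysler example: 108.13bp five-year CDS quote, 40% assumed recovery
print(f"{implied_hazard_rate(0.010813, 0.40):.2%}")   # approximately 1.80%
```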

In a general situation, we may have CDS quotes for several maturities or we may have market
prices for different bonds with different maturities and different coupons. In order to find a set
of implied default probabilities that is consistent with these prices and that simultaneously looks
‘sensible’, a numerical optimisation routine must be used. This procedure is quite similar to the
bootstrapping procedures used to back out term structures of interest rates from default-free
bond prices. More details can be found in Schönbucher (2003). Generally, apart from introducing
a time-dependence in the default probabilities, the results are qualitatively similar to the simple
result for the single CDS. That is, credit spreads are approximately equal to the default hazard
rate times the loss given default.

III.B.4.6 Credit Rating and Credit Spreads


In order to compare implied default probabilities and historical default probabilities, it is
instructive to study Figure III.B.4.6. In particular, let us take the year 1997. For that year, the dark
blue line shows an average credit spread for US Baa-rated corporate bonds of 0.65%, which will
yield an implied default hazard rate of 1.08% (assuming a recovery rate of 40%). This spread is
measured as spread over Aaa-rated corporate bonds and not as spread over treasuries (in order to
avoid tax and liquidity premia, which would be present in the treasury market).7

7 Choosing Aaa instead of risk-free only makes our estimate more conservative.


The spread is a compensation for the default risk that we bear if we invest in Baa-rated bonds, so
it is interesting to see if this compensation was adequate. If the realised default hazard rate equals
the implied default hazard rate, then an investment in Baa-rated bonds will just break even. This
is what we would expect to happen with risk-neutral investors and in the absence of market
imperfections. We now investigate how the implied default intensities compare, historically, to
actual default rates, and whether the compensation through credit spread was adequate for the
credit risk incurred.

In fact, it turns out that the compensation is much more than adequate. Imagine we had bought
the total Baa-rated corporate bonds market (or an equally weighted fraction of it) in the year 1997
and held it to maturity – let us call this portfolio ‘Baa97’. The value of the yellow line in 1997 tells
us the rate of defaults that we would have suffered over one year, that is, 1997–1998. Clearly, this
rate of defaults is almost zero!

So, what if we had to hold the Baa97 portfolio for three years starting in 1997?8 The blue line
tells us that over 1997–2000 the annual default rate of the Baa97 bond portfolio was only 0.3%.
This is still much less than the implied default rate, and even less than the spread.

Even over a five-year horizon starting in 1997 the historical default rate is only about 0.55% p.a. (see
the value of the green line in 1997), much below the implied default rate. This situation repeats
consistently without exception over the whole period from 1976 to 1997 (1997 is the last year in
which we could calculate a five-year forward-looking default rate).

Hence, investing in Baa portfolios would always and without exception have outperformed an
investment in the risk-free reference portfolio (which here is Aaa-rated bonds).

8 We want to hold until maturity in order to eliminate effects due to market price movements.


Figure III.B.4.6: Implied default probabilities vs. historical default frequencies


This figure shows, for the pool of Baa-rated US corporate bonds: implied default rates (black, for 40% recovery),
credit spreads (dark blue, measured as spreads over Aaa, not as spreads over treasury), the default rate of the pool
over the next year (yellow), over the next three years (blue), and over the next five years (green).

[Chart: implied default probability and actual default rates (% p.a., vertical axis 0–4) plotted against year, 1975–2000; series: ImplDefIntens, 1Y forw, 3Y forw, 5Y forw, Baa Spread.]

Other studies also indicate that there seems to be a significant discrepancy between implied
default rates and historical default rates. Typically, implied default rates tend to be larger by a
factor of 2–3. This situation has been termed the spread premium puzzle: why are spreads so much
higher than seems to be justified by their actual credit risk component? Or is this an arbitrage
opportunity?

Several explanations have been put forward.

• First, there is risk aversion. An investment in Baa97 will lose money if a recession
comes, but a recession is exactly the situation in which the average investor needs
money. Therefore, the investor will demand a higher return in order to be
compensated for this wrong-sided risk.

• Second, maybe the actual default risk was much higher than the historical incidence
suggests: we just did not experience the truly bad scenario, but the possibility of this
bad scenario was nevertheless priced into the spreads.

• Third, maybe there were tax effects at work that made the Baa-rated bonds
unattractive, or liquidity premia were demanded for investments in Baa97.


Unfortunately, none of these explanations can fully explain the effect. Risk aversion is certainly
present, but risk premia can never lead to a shift in spreads that amounts to a virtual arbitrage
opportunity: that would be inconsistent with rational investor behaviour because even the most
risk-averse investor could have picked up a seemingly risk-free excess return here.

The second explanation concerning the unrealised ‘Armageddon’ scenario also cannot explain the
size of the effect. If this mysterious extremely bad scenario were to explain a sizeable proportion
of the spreads, then it must also have a probability that is not too small compared to the
individual default probability of an obligor. But if this is the case, why has this scenario never
occurred so far?

The liquidity argument is certainly valid if spreads over treasuries are considered. But here we are
considering spreads over Aaa corporates which should have similar liquidity problems to Baa-
rated bonds.

Finally, the tax argument again does not hold for spreads over Aaa, it only has the possibility of
being relevant if spreads over treasuries are considered, because treasuries are exempt from state
taxes in the USA. Besides, the spread premium puzzle is also observed in markets which are not
affected by US tax rules, for example in the markets for Eurobonds or for bonds of non-US
issuers.

So the spread-premium puzzle remains a puzzle. Maybe it is indeed a market imperfection. But
maybe it has already disappeared: since mid-2003 spreads have decreased significantly compared
to the spreads used in Figure III.B.4.6. It is quite possible that the market has finally reached a
level where implied and actual default risk are approximately equal, or at least in a more realistic
relationship to each other. Of course, we will only know this for sure after it is too late to make
an investment decision.

III.B.4.7 Summary
Credit rating and the estimation and measurement of default probabilities are a classical problem
of credit analysis, and one that never seems to be perfectly solvable. The classical solution to this
problem is to rely on the rating assessment of an external rating agency – essentially, this is
reliance on expert advice. We have seen how these rating classifications can be translated into
concrete numbers for default and survival probabilities over different time horizons.

Recent advances in computing power and (more importantly) the increasing availability of the
necessary data in electronic form have made it possible to estimate default probabilities on a


purely statistical basis. In many cases, such quantitative approaches are able to compete
successfully with agency ratings, and frequently they are the only option when no agency rating is
available.

Finally, we discussed methods to imply default probabilities from observed market prices of
traded credit-sensitive instruments such as bonds and credit default swaps. While these methods
are indispensable to assess the risk compensation that one should get for the type of credit risk
under consideration, spread-implied probabilities usually differ significantly and systematically
from statistical and historical default rates. This credit-spread puzzle remains an open question to
date. Nevertheless, because they are systematically above historical default rates, spread-implied
probabilities may be useful as very conservative estimates of default probabilities.

In practice, implied probabilities are the correct probabilities to use for pricing applications,
because these probabilities already contain the risk premia that are paid for the credit risk
contained in the calibration securities. Risk management, capital allocation and value-at-risk
calculations, on the other hand, require historical probabilities because here the preferences and
risk aversion can be added later on.

References
Altman, E (1968) Financial ratios, discriminant analysis and the prediction of corporate
bankruptcy. Journal of Finance, 23(4), pp. 589–609.

Altman, E, Haldeman, R, and Narayanan, P (1977) Zeta analysis: a new model to identify
bankruptcy risk of corporations. Journal of Banking and Finance, 1, pp. 31–54.

Hamilton, D T, Cantor, R, and Ou, S (2002) Default and recovery rates of corporate bond
issuers. Special comment, Moody’s Investor Service Global Credit Research, February.

Schönbucher, P J (2003) Credit Derivatives Pricing Models. Chichester: Wiley.


III.B.5 Portfolio Models of Credit Loss


Michel Crouhy, Dan Galai and Robert Mark1

This chapter describes the main approaches to the modelling of credit risk in a portfolio context
(credit value-at-risk), i.e. the credit migration approach, the contingent claim or structural
approach, and the actuarial approach. It reviews the assumptions of the credit portfolio models
and the pros and cons of each approach. Finally, it discusses the relationship between credit
value-at-risk, economic capital and regulatory capital.

III.B.5.1 Introduction
In this chapter we review the main approaches to modelling credit risk. For each approach we
explain the basic logic behind it, describe the data required and evaluate its strengths and
weaknesses. The interested reader can find a more detailed description of the approaches in
Crouhy et al. (2001).

A bank should be concerned with the estimation of the risk of default of a specific obligor, since
this is the basis for pricing a loan and charging the borrower the appropriate interest rate. But, at
But, at the same time, the bank should be looking at the quality of its loan portfolio as a whole,
since the stability of the bank depends to a large extent on the performance of its portfolio, and
on the size of credit-related losses in the portfolio in a given period. Portfolio analysis may in
turn affect the pricing of individual loans and the lending decision as each asset’s contribution to
portfolio risk must be considered.

Modelling credit risk and pricing risky loans or bonds is a complicated task. The factors that
affect credit risk are many and varied. Some factors are exogenous or economy-wide, such as the
level of interest rates and the growth rate of the economy. Other factors are endogenous, such as
the business risk of the firm, its capital structure, and the flexibility of its production technology.
A major consideration is whether to evaluate credit risk as a discrete event, and concentrate only
on the potential default event, or whether to analyse the dynamics of the debt value and the
associated credit spread, and to estimate its risk over the whole time interval to its maturity.
Another important issue is the data sources that are available in order to assess credit risk. Can
the analyst rely on accounting data, or are these too stale and subject to manipulation? To what
extent are market data available, and then, to what extent are the markets efficient enough to
convey reliable information?

1 Michel Crouhy is a Partner at Black Diamond, Dan Galai is a Professor at the Hebrew University and Principal at
Sigma P.C.M., and Robert Mark is CEO of Black Diamond.


Before proceeding further, let us define some fundamental concepts. Default, in theory, occurs
when the asset value falls below the value of the firm’s liabilities (Merton, 1974). Default,
however, is distinct from bankruptcy. Bankruptcy describes the situation in which the firm is
liquidated, and the proceeds from the asset sale are distributed to the various claim holders
according to pre-specified priority rules. Default, on the other hand, is usually defined as the
event that a firm misses a payment on a coupon and/or the reimbursement of principal at debt
maturity. Cross-default clauses on debt contracts are such that when the firm misses a single
payment on a debt, it is declared in default on all its obligations.

Since the early 1980s, Chapter 11 regulation in the United States has protected firms in default
and helped to maintain them as going concerns during a period in which they attempt to
restructure their activities and their financial structure. Figure III.B.5.1 compares the number of
bankruptcies to the number of defaults during the period from 1973 to 2004. The data are for
North American public companies, but note that legal procedures in enforcing the bankruptcy
procedures in the case of a default event vary quite substantially across jurisdictions (see J.P.
Morgan, 1997, Appendix G).

Figure III.B.5.1: Bankruptcies and defaults in North American public companies

1973Q1 to 2004Q1

[Chart: quarterly numbers of bankruptcies and defaults (vertical axis, 0–140) from 1973 to 2004; series: Bankruptcies, Defaults.]


Over the last few years, a number of new approaches to credit risk modelling have been made
public. The CreditMetrics approach (which was initiated by J.P. Morgan and was spun off to
RiskMetrics Inc.) is based on the analysis of credit migration, i.e. the probability of moving from
one credit grade to another, including default, within a given time horizon which is usually one
year. CreditMetrics estimates the full, one-year forward distribution of the values of any bond or
loan portfolio, where the changes in values are related to credit migration only. The past
migration history of thousands of rated bonds is assumed to accurately describe the probability of
migration in the next period. The credit migration framework is reviewed in Section III.B.5.3.

Tom Wilson (1997a, 1997b) proposes an improvement to the credit migration approach,
CreditPortfolioView, by allowing default probabilities to vary with the credit cycle. In this
approach, default probabilities are a function of macro-variables such as unemployment, the level
of interest rates, the growth rate in the economy, government expenses and foreign exchange
rates. These macro-variables are the factors which, to a large extent, drive credit cycles. This
methodology is reviewed in Section III.B.5.4.

The structural approach to modelling portfolio credit risk offers an alternative to the credit
migration approach. Here, the economic value of default is presented as a put option on the
value of the firm's assets. The contingent claim approach is introduced in Section III.B.5.5.

KMV Corporation, a firm that specialises in credit risk analysis, has developed a credit risk
methodology and extensive database to assess default probabilities and the loss distribution
related to both default and migration risks. KMV’s methodology differs from CreditMetrics in
that it relies upon the ‘expected default frequency’ for each issuer, rather than upon the average
historical transition frequencies produced by the rating agencies for each credit class. The KMV
approach is based on the asset value model originally proposed by Merton (1974). KMV’s
methodology, together with the contingent claim approach to measuring credit risk, is reviewed
in Section III.B.5.6.

At the end of 1997, Credit Suisse Financial Products released CreditRisk+, an approach that is
based on actuarial science. CreditRisk+, which focuses on default alone rather than credit
migration, is examined briefly in Section III.B.5.7. CreditRisk+ makes assumptions concerning
the dynamics of default for individual bonds or loans, but ignores the causes of default, contrary
to KMV and CreditMetrics.


III.B.5.2 What Actually Drives Credit Risk at the Portfolio Level?


Banks, as regulated institutions, are very focused on the quality of their credit portfolio. Banks
must assign regulatory capital against credit risk. The current regulation requires banks to assign
regulatory capital against each loan obligation, usually 8% of the principal amount. Future
regulation, as described in Chapter III.B.6, will allow for better differentiation among obligors
based on their ratings. However, the regulators will also look at the quality of the loan portfolio,
and the level of concentration by industry and region (‘Pillar II’ in the New Basel Accord).

But beyond the formal regulatory requirements, banks are judged and evaluated by their
shareholders, as well as by their customers, especially the depositors. Therefore, banks have
strong incentives to monitor the risk of their assets, and in particular the risk of their loan
portfolio. The profitability of most banks largely depends on the performance of the loans they
granted in the past. So what are the major factors that affect the performance of the loan
portfolio? It should be emphasised that performance has (at least) two dimensions: return and
risk. The risk of a loan portfolio can be tricky to assess since a bank can show a nice profitability
over a few years due to high interest charges and low default rates, and then, once a default event
(or events) occurs on a major exposure, the bank can incur a substantial loss, instantaneously
wiping out those profits.

The first factor affecting the portfolio is the credit standing of individual obligors. One bank may
concentrate on prime, investment-grade obligors, granting loans only to the best credits, with
very low probability of default for any obligor. Another may choose to concentrate on riskier,
speculative-grade obligors who pay a much higher coupon rate on their debt. The critical issue
for both types of institution is to charge the appropriate interest rate to each borrower that
compensates the lender for the risk it undertakes.

The second factor is ‘concentration risk’, or the extent to which the obligors are diversified across
geography and industries. A bank with corporate clients mostly in commercial real estate is
considered to be riskier than a bank with corporate loans distributed over many industries. Also,
a bank serving only a narrow geographical area can be devastated by a slowdown in the economic
activity of that particular region.

This leads to a third important factor that affects the risk of the portfolio: the state of the
economy. During good times of economic growth the frequency of defaults falls sharply
compared to periods of recession. There is a propensity for things to go wrong at the same time,
usually at the trough of the economic cycle. In addition, periods of high default rates such as
2001–2002 are characterised by low recoveries that lead to high loss rates.


The quality of the portfolio can also be affected by the maturity of the loans. Usually, longer
loans are considered riskier than short-term loans. Time diversification can reduce the risk of the
portfolio by spreading maturities over the economic cycle, as well as reducing ‘liquidity risk’.
Liquidity risk is defined as the risk that the bank will run into difficulties when refinancing its
assets, for instance by renewing deposits or by raising money through issuing debt instruments,
because the market ‘dried up’ or prices increased sharply.

Risk assessment of the portfolio is needed to determine how much economic capital should be
allocated against unexpected credit losses. Therefore, the future distribution of the values of the
loan portfolio must be estimated. This task is not at all straightforward and is much more
complicated than estimating the value of a portfolio of market traded instruments such as stocks
and bonds. The major obstacle lies in the estimation of the correlations among potential default
events. While we have a lot of data on market traded instruments, we do not have data on non-
traded debt instruments. The data problem is also aggravated by statistical issues, for instance
that default correlations are not directly observable.

To overcome some of the estimation problems, most approaches imply default correlations from
equity correlations as in Section III.B.5.3.2. Still, the estimation problem is huge since so many
pairs of cross-correlations must be estimated for a portfolio of obligors. For example, a small
portfolio of 1000 obligors requires the estimation of 1000×999/2 = 499,500 correlations. This
last problem is circumvented by using a multi-factor or a multi-index statistical model. The rate
of return for each firm or stock is assumed to be generated by a linear combination of a few
indices. For example, the indices can be related to a country or an industry. This approach
reduces the calculation requirements to merely estimating the correlations among pairs of indices.
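To see why this helps, consider a sketch with a single common index and hypothetical loadings: every pairwise correlation then follows from the loadings alone.

```python
import numpy as np

# Hypothetical index loadings (betas) for five obligors: the standardised return
# of obligor i is  r_i = beta_i * index + sqrt(1 - beta_i**2) * idiosyncratic_i
betas = np.array([0.30, 0.45, 0.50, 0.35, 0.60])

# Pairwise correlations implied by the one-factor model: rho_ij = beta_i * beta_j
rho = np.outer(betas, betas)
np.fill_diagonal(rho, 1.0)

print(np.round(rho, 3))
# Only the 5 loadings had to be estimated, instead of 5*4/2 = 10 pairwise correlations.
```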

All these simplifying assumptions are made in order to estimate the portfolio's credit 'value-at-risk'
(credit VaR). The distribution of the rate of return of the portfolio of obligors is estimated, and
the credit VaR is derived from a percentile of that distribution. The credit VaR of a loan
portfolio is thus derived in a similar fashion to market risk, except that the risk horizon is usually
much longer. It is simply the distance from the mean to the percentile of the forward
distribution, at the desired confidence level. However, the future point in time is typically one
year for both regulatory and economic credit risk capital, whereas for market VaR the risk
horizon is 10 days for regulatory capital (but again, usually one year for economic capital).2

2 The choice of a risk horizon is somewhat arbitrary. It is usually one year as it corresponds to the planning cycle and
the average time it would require to recapitalise the bank if it were to suffer a major unexpected loss.


Economic capital is the financial cushion that a bank uses to absorb unexpected losses, including
those related to credit events such as credit migration and/or default (see Chapter III.B.6). Figure
III.B.5.2 illustrates how the capital charge related to credit risk can be derived from the portfolio
value distribution, using the following notation:

P(p) = value of the portfolio in the worst case scenario at the p% confidence level
FV = forward value of the portfolio = V0 (1 + PR)
V0 = current mark-to-market value of the portfolio
PR = promised return on the portfolio
EV = expected value of the portfolio = V0 (1 + ER)
ER = expected return on the portfolio
EL = expected loss = FV – EV.

Figure III.B.5.2: Credit VaR and economic capital attribution

[Chart: forward distribution of the portfolio value, showing the worst-case value P(p) at the p% confidence level, the expected value EV and the forward value FV; the distance FV – EV is the expected loss EL, and EV – P(p) is the economic capital.]

Because the expected loss is priced in the interest charged on loans, it is not part of required
economic capital. The capital charge is instead a function of the unexpected losses:
Economic Capital = EV – P(p)

When the risk horizon is one year, credit VaR and economic capital are equivalent.
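As an illustration (using a made-up simulated distribution rather than a real credit portfolio), economic capital at a given confidence level is simply the distance between the expected portfolio value and the corresponding quantile of the simulated forward values:

```python
import numpy as np

def economic_capital(simulated_values, confidence=0.9997):
    """Economic capital = EV - P(p), with P(p) the worst-case portfolio value
    at the given confidence level (e.g. 99.97% for a AA target rating)."""
    ev = simulated_values.mean()                            # expected value EV
    p_p = np.quantile(simulated_values, 1.0 - confidence)   # worst-case value P(p)
    return ev - p_p

# Made-up forward portfolio values from a hypothetical credit portfolio simulation
rng = np.random.default_rng(seed=1)
values = 100.0 - rng.gamma(shape=0.5, scale=2.0, size=200_000)  # skewed loss tail

print(f"economic capital: {economic_capital(values):.2f}")
```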

The bank should hold reserves against these unexpected losses at a given confidence level (say, a
tail probability of 0.01%), so that there is only a 1 in 10,000 chance that the bank will incur losses
above the capital
confidence level is generally associated with some target credit rating from a rating agency such as
Moody’s or Standard and Poor’s. Most banks today are targeting a AA debt rating, which implies
a probability of default of 3–5 basis points, which then corresponds to a confidence level in the


range of 99.95–99.97%. This confidence level is also the expression of the ‘risk appetite’ of the
bank.

III.B.5.3 Credit Migration Framework


Credit migration is a methodology based on the estimation of the forward distribution of changes
in the value of a portfolio of loan and bond-type products at a given time horizon, usually one
year.3 The changes in value are related to the migration, upwards and downwards, of the credit
quality of the obligor, as well as to default. This approach is based on historical data of ratings of
many bonds by a rating agency, or on the internal database of a bank. Forward values and
exposures are derived from deterministic forward curves of interest rates, so market risk is
ignored in this framework; the only uncertainty in CreditMetrics relates to credit migration,
i.e. the process of moving up or down the credit spectrum.

A typical portfolio distribution is shown in Figure III.B.5.3 – it is far from being normal, contrary to
market VaR. While it may be reasonable to assume that changes in portfolio values are normally
distributed when due to market risk, credit returns are by their nature highly skewed and fat-
tailed. An improvement in credit quality brings limited ‘upside’ to an investor, while downgrades
or defaults bring with them substantial ‘downsides’. Unlike market VaR, the percentile levels of
the distribution cannot be estimated from the mean and variance only. The calculation of VaR
for credit risk thus demands a simulation of the full distribution of the changes in the value of the
portfolio.

The CreditMetrics risk measurement framework consists of two main building blocks:
• VaR due to credit for a single financial instrument; and
• VaR at the portfolio level, which accounts for portfolio diversification effects.

The first step is to specify a rating system, with rating categories, together with the probabilities
of migrating from one credit quality to another over the credit risk horizon. This transition matrix
is the key component of the credit migration approach. The matrix may take the form of the
historical migration frequencies published by an external rating agency such as Moody’s or
Standard & Poor’s, or it may be based on the proprietary rating system internal to the bank. A
strong assumption made by CreditMetrics is that all issuers within the same rating class are

3 CreditMetrics’ approach applies primarily to bonds and loans, which are both treated in the same manner. It can be easily extended
to any type of financial claims such as receivables, financial letters of credit for which we can easily derive the forward value at
the risk horizon for all credit ratings. For derivatives such as swaps or forwards the model needs to be somewhat adjusted or
‘twisted’, since there is no satisfactory way to derive the exposure, and the loss distribution, within the proposed framework
(since it assumes deterministic interest rates).


homogeneous credit risks: they have the same transition probabilities and the same default
probability.

Figure III.B.5.3: Comparison of the probability distributions of credit returns and market
returns

[Chart: frequency plotted against portfolio value for typical credit returns and typical market returns.]

Second, the risk horizon should be specified. This is usually taken to be one year. The third step
consists of specifying the forward discount curve at the risk horizon for each credit category. In
the case of default, the value of the instrument should be estimated in terms of the ‘recovery
rate’, which is given as a percentage of face value or ‘par’. In the final step, this information is
translated into the forward distribution of the changes in the portfolio value following credit
migration.

III.B.5.3.1 Credit VaR for a Single Bond/Loan


For a given bond in the portfolio, we estimate the distribution of changes in the bond value over
a one-year period. Therefore, we have to estimate the assumed values of the bond a year from
now for all possible migration events. The most probable event is that the bond will maintain its
rating by the end of the year (e.g. a BBB bond has a probability of 86.93% of retaining its BBB
rating after a year – see Table III.B.5.1). Another migration event is that the bond will be
downgraded by one notch, e.g. a BBB bond has a probability of 5.3% of being downgraded to
BB within the year. For each credit migration event we use the relevant forward zero-coupon
curve, estimated for the ‘new’ possible rating of the bond. These rates serve as discount factors,
by which the value of the future cash-flows from the bond are discounted in order to find the
value of the bond at the end of the year for the ‘new’ possible rating.

In our example the rating categories, as well as the transition matrix, are chosen from an external
rating system (such as the S&P transition matrix in Table III.B.5.1) or an internal rating system.


Table III.B.5.1: Transition matrix: probabilities of credit rating migrating from one rating
quality to another, within one year

Initial rating     Rating at year-end (%):
                   AAA      AA      A      BBB      BB      B      CCC      Default
AAA 90.81 8.33 0.68 0.06 0.12 0 0 0
AA 0.70 90.65 7.79 0.64 0.06 0.14 0.02 0
A 0.09 2.27 91.05 5.52 0.74 0.26 0.01 0.06
BBB 0.02 0.33 5.95 86.93 5.30 1.17 1.12 0.18
BB 0.03 0.14 0.67 7.73 80.53 8.84 1.00 1.06
B 0 0.11 0.24 0.43 6.48 83.46 4.07 5.20
CCC 0.22 0 0.22 1.30 2.38 11.24 64.86 19.79
Source: Standard & Poor’s CreditWeek (15 April 1996)

In the case of Standard & Poor’s, there are seven rating categories. (It should be noted that the
rating agencies supply more granular statistics where each rating category is split into three sub-
categories, e.g. A+, A and A– for Standard & Poor's rating category A.) The highest category is
AAA, the lowest CCC. Default is defined as a situation in which the obligor cannot make a
payment related to a bond or a loan obligation, whether the payment is a coupon payment or the
redemption of the principal.

The bond issuer in our example currently has a BBB rating. The shaded row in Table III.B.5.1
shows the probability, as estimated by Standard & Poor’s, that this BBB issuer will migrate over a
period of one year to any one of the eight possible states, including default. Obviously, the most
probable situation is that the obligor will remain in the same rating category, BBB; this has a
probability of 86.93%. The probability of the issuer defaulting within one year is only 0.18%,
while the probability of it being upgraded to AAA is also very small, 0.02%. Such a transition
matrix is produced by the rating agencies for all initial ratings, based on the history of credit
events that have occurred to the firms rated by those agencies. Moody’s publishes similar
information.

Although ten or twenty years ago these two rating agencies concentrated on US companies, they
now cover tens of thousands of companies around the world. Transition matrices for Japan,
Europe and other regions are now becoming available. However there are still some regions
where historical default data are insufficient to estimate transition matrices. In such regions the
KMV methodology, which does not rely on transition matrices, is often the method of choice. In
the US, the transition probabilities published by the agencies are based on more than 20 years of
data across all industries. But even these data should be interpreted with care since they only


represent average statistics across a heterogeneous sample of firms, and over several business
cycles. For this reason many banks prefer to rely on their own statistics, which relate more closely
to the composition of their loan and bond portfolios.

The realised transition and default probabilities also vary quite substantially over the years,
depending upon whether the economy is in recession or is expanding (see Section III.B.5.4).
When implementing a model that relies on transition probabilities, one may have to adjust the
average historical values shown in Table III.B.5.1, to be consistent with one’s assessment of the
current economic environment.

A study by Moody's (Carty and Lieberman, 1996) provides some idea of the variability
of default rates over time. Historical default statistics (mean and standard deviation) by rating
category for the population of obligors that they rated during the period 1970–1995 are shown in
Table III.B.5.2. Clearly the default rates become more volatile as credit quality deteriorates. Thus
one should expect the elements of the transition matrix corresponding to low grade issuers to
change considerably over time, whilst transition probabilities for high grade issuers are unlikely to
change much.

Table III.B.5.2: One-year default rates by rating, 1970–1995

One year default rate


Credit rating Average (%) Standard deviation (%)
Aaa 0.00 0.0
Aa 0.03 0.1
A 0.01 0.0
Baa 0.13 0.3
Ba 1.42 1.3
B 7.62 5.1
Source: Carty and Lieberman (1996)

Now consider the valuation of a bond. This is derived from the zero curve corresponding to the
rating of the issuer. Since there are seven possible credit qualities, seven ‘spread’ curves are
required to price the bond in all possible states (Table III.B.5.3). All obligors within the same
rating class are then marked to market using the same curve. The spot zero curve is used to
determine the current spot value of the bond. The forward price of the bond one year from the
present is derived from the forward zero curve, one year ahead, which is then applied to the
residual cash-flows from year 1 to the maturity of the bond.


Table III.B.5.3: One-year forward zero curves for each credit rating (%)

Category Year 1 Year 2 Year 3 Year 4


AAA 3.60 4.17 4.73 5.12
AA 3.65 4.22 4.78 5.17
A 3.72 4.32 4.93 5.32
BBB 4.10 4.67 5.25 5.63
BB 5.55 6.02 6.78 7.27
B 6.05 7.02 8.03 8.52
CCC 15.05 15.02 14.03 13.52
Source: CreditMetrics, J.P. Morgan

From Chapter I.B.2 we know that the one-year forward price, VBBB, of the five-year 6% coupon bond, if the obligor remains rated BBB, is

VBBB = 6 + 6/1.0410 + 6/1.0467² + 6/1.0525³ + 106/1.0563⁴ = 107.53

where the discount rates are taken from Table III.B.5.3. The cash-flows are shown in Figure III.B.5.4. If we replicate the same calculations for each rating category we obtain the values shown in Table III.B.5.4.4
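
A minimal Python sketch of this revaluation is given below. The rates are read off Table III.B.5.3 (only three of the seven forward curves are reproduced here) and the function name is our own:

# Forward zero rates one year ahead (years 1-4 after the horizon), from Table III.B.5.3.
FORWARD_ZEROS = {
    "AAA": [0.0360, 0.0417, 0.0473, 0.0512],
    "BBB": [0.0410, 0.0467, 0.0525, 0.0563],
    "CCC": [0.1505, 0.1502, 0.1403, 0.1352],
}

def forward_bond_value(rating, coupon=6.0, face=100.0):
    # Coupon received at the one-year horizon, plus the remaining cash flows
    # discounted on the forward zero curve of the year-end rating.
    rates = FORWARD_ZEROS[rating]
    cash_flows = [coupon, coupon, coupon, coupon + face]
    value = coupon
    for t, (cf, r) in enumerate(zip(cash_flows, rates), start=1):
        value += cf / (1.0 + r) ** t
    return value

print(round(forward_bond_value("BBB"), 2))   # 107.53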

Figure III.B.5.4: Cash flows for the five-year 6% coupon bond
[Figure: timeline from 0 to 5 years with cash flows of 6 at the end of years 1 to 4 and 106 at the end of year 5; the one-year forward price VBBB = 107.53 is marked at the horizon]

4 CreditMetrics calculates the forward value of the bonds, or loans, including compounded coupons paid out during
the year.


Table III.B.5.4: One-year forward values for a BBB bond

Year-end rating      Value ($)
AAA                  109.35
AA 109.17
A 108.64
BBB 107.53
BB 102.00
B 98.08
CCC 83.62
Default 51.11
Source: CreditMetrics, J.P. Morgan

We do not assume that everything is lost if the issuer defaults at the end of the year. Depending
on the seniority of the instrument, a fraction of par value, the recovery rate, is recovered by the investor.
These recovery rates are estimated from historical data by the rating agencies. Table III.B.5.5
shows the expected recovery rates for bonds by different seniority classes as estimated by
Moody’s.5 In simulations performed to assess the portfolio distribution, the recovery rates are
not taken as fixed, but rather are drawn from a distribution of possible recovery rates. The
distribution of the changes in the bond value, at the one-year horizon, due to an eventual change
in credit quality is shown in Table III.B.5.6 and Figure III.B.5.5.

Table III.B.5.5: Recovery rates by seniority class (% of face value, i.e. ‘par’)

Seniority Class          Mean (%)      Standard Deviation (%)
Senior Secured           53.80         26.86
Senior Unsecured 51.13 25.45
Senior Subordinated 38.52 23.81
Subordinated 32.74 20.18
Junior Subordinated 17.09 10.90
Source: Carty and Lieberman (1996).

5 Cf. Carty and Lieberman (1996). See also Altman and Kishore (1996, 1998) for similar statistics.


Table III.B.5.6: Distribution of the bond values, and changes in value of a BBB bond, in
1 year

Year-end rating    Probability of state: p (%)    Forward price: V ($)    Change in value: ΔV ($)
AAA 0.02 109.35 1.82
AA 0.33 109.17 1.64
A 5.95 108.64 1.11
BBB 86.93 107.53 0
BB 5.30 102.00 –5.53
B 1.17 98.08 –9.45
CCC 0.12 83.62 –23.91
Default 0.18 51.11 –56.42
Source: CreditMetrics, J.P. Morgan.

Figure III.B.5.5: Histogram of the one-year forward prices and changes in value of a BBB bond
[Figure: bar chart of the state probabilities from Table III.B.5.6, plotted against the forward price V (from 51.11 in default to 109.35 for AAA) and the corresponding change in value ΔV (from –56.42 to 1.82); the distribution is dominated by the 86.93% probability of remaining BBB]

This distribution exhibits a long 'downside tail'. The first percentile of the distribution of ΔV, which corresponds to credit VaR at the 99% confidence level, is –23.91. It is a much lower value than if we computed the first percentile assuming a normal distribution for ΔV. In that case credit VaR at the 99% confidence level would be only –7.43.6
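
The mean, standard deviation and the normal-approximation percentile quoted here (and worked out in footnote 6) can be checked with a few lines of Python based on Table III.B.5.6; the variable names are ours:

# State probabilities (%) and changes in value for the BBB bond, from Table III.B.5.6.
p  = [0.02, 0.33, 5.95, 86.93, 5.30, 1.17, 0.12, 0.18]
dV = [1.82, 1.64, 1.11, 0.00, -5.53, -9.45, -23.91, -56.42]

probs = [x / 100.0 for x in p]
mean = sum(pi * v for pi, v in zip(probs, dV))
var = sum(pi * (v - mean) ** 2 for pi, v in zip(probs, dV))
std = var ** 0.5
print(round(mean, 2), round(std, 2))     # about -0.46 and 2.99, as in footnote 6

# Normal approximation to the first percentile (99% credit VaR):
print(round(mean - 2.33 * std, 2))       # about -7.43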

The above analysis forms the basis for the valuation of a portfolio of loans. CreditMetrics proposes to use Monte Carlo simulation to assess the credit risk of a bond/loan portfolio, because of the large number of factors that must be taken into consideration.

III.B.5.3.2 Estimation of Default and Rating Changes Correlations


So far we have shown how the future distribution of values for a given bond (or loan) is derived. In what follows we focus on how to estimate the potential changes in the value of a portfolio of credits, when the changes are due to credit risk only, and credit risk is expressed as the potential rating changes during the year. One important factor in the portfolio assessment is the
correlation between changes in the credit ratings and the default correlation for any two obligors.
In reality, the correlations between the changes in credit quality are not zero, and the overall
credit VaR is quite sensitive to these correlations. Their accurate estimation is therefore one of
the key determinants of portfolio optimisation.

Default correlations might be expected to be higher for firms within the same industry, or in the
same region, than for firms in unrelated sectors. In addition, correlations vary with the relative
state of the economy in the business cycle. If there is a slowdown in the economy, or a recession,
most of the assets of the obligors will decline in value and quality, and the likelihood of multiple
defaults increases substantially. The opposite happens when the economy is performing well:
default correlations go down. Thus, we cannot expect default and migration probabilities to
remain stationary over time. There is clearly a need for a structural model that relates changes in
default probabilities to fundamental variables. CreditMetrics derives the default and migration
probabilities from a correlation model of the firm’s assets.

CreditMetrics makes use of the stock price of a firm as a proxy for its asset value, as the true asset value is not directly observable. (This is another simplifying assumption in CreditMetrics that may affect the accuracy of the approach.) CreditMetrics estimates the correlations between the equity returns of various obligors, and then infers the correlations between changes in credit quality directly from the joint distribution of these equity returns.

6 The mean, µ, and the variance, σ², of the distribution of ΔV can be calculated from the data in Table III.B.5.6 as follows:

µ = mean(ΔV) = Σi pi ΔVi = 0.02% × 1.82 + 0.33% × 1.64 + … + 0.18% × (–56.42) = –0.46

σ² = variance(ΔV) = Σi pi (ΔVi – µ)² = 0.02% × (1.82 + 0.46)² + 0.33% × (1.64 + 0.46)² + … + 0.18% × (–56.42 + 0.46)² = 8.95

i.e. σ = 2.99. The first percentile of a normal distribution N(µ, σ²) is µ – 2.33σ, i.e. –7.43.

The theoretical framework underlying all this is the option pricing approach to the valuation of corporate securities first developed by Merton (1974). The model is described in detail in Section III.B.5.5 as it forms the basis for the KMV approach. In Merton's model, the firm is assumed to have a very simple capital structure; it is financed by equity, St, and a single zero-coupon debt instrument maturing at time T, with face value F, and current market value Bt. The firm's balance sheet is represented in Table III.B.5.7, where Vt is the value of all the assets and Vt = Bt(F) + St.

Table III.B.5.7: Balance sheet of Merton’s firm

Assets                      Liabilities / Equity
Risky Assets: Vt            Debt: Bt(F)
                            Equity: St
Total: Vt                   Total: Vt

Figure III.B.5.6: Distribution of the firm's assets value at maturity of the debt obligation
[Figure: probability density of VT with the default point F marked; the shaded area below F is the probability of default]

In this framework, default occurs at the maturity of the debt obligation only when the value of
assets is less than the payment, F, promised to the bondholders. Figure III.B.5.6 shows the
distribution of the assets’ value at time T, the maturity of the zero-coupon debt, and the
probability of default (i.e. the shaded area on the left-hand side of the default point, F).


Merton’s model is extended by CreditMetrics to include changes in credit quality as illustrated in


Figure III.B.5.7. This generalisation consists of slicing the distribution of asset returns into bands
in such a way that, if we draw randomly from this distribution, we reproduce exactly the
migration frequencies as shown in the transition matrices that we discussed earlier.

Figure III.B.5.7 shows the distribution of the normalised assets’ rates of return, one year ahead.
The distribution is normal with mean zero and unit variance. The credit rating ‘thresholds’ are
calculated using the transition probabilities in Table III.B.5.1 for a BB-rated obligor. The area in
the right-hand tail of the distribution, down to ZAAA, corresponds to the probability that the
obligor will be upgraded from BB to AAA, i.e. 0.03%. Then, the area between ZAA and ZAAA
corresponds to the probability of being upgraded from BB to AA, etc. The area in the left-hand
tail of the distribution, to the left of ZCCC, corresponds to the probability of default, i.e. 1.06%.

Figure III.B.5.7: Generalisation of the Merton model to include rating changes
[Figure: standard normal distribution of the one-year asset return of a BB-rated firm, sliced into bands corresponding to the year-end states Default, CCC, B, BB, BBB, A, AA and AAA with probabilities 1.06%, 1.00%, 8.84%, 80.53%, 7.73%, 0.67%, 0.14% and 0.03%; the Z-thresholds separating the bands are –2.30, –2.04, –1.23, 1.37, 2.39, 2.93 and 3.43]


Table III.B.5.8: Transition probabilities and credit quality thresholds for BB- and A-rated
obligors

Rating in one year     A-rated obligor                          BB-rated obligor
                       Probabilities (%)   Thresholds: Z (σ)    Probabilities (%)   Thresholds: Z (σ)
AAA                    0.09                3.12                 0.03                3.43
AA                     2.27                1.98                 0.14                2.93
A                      91.05               –1.51                0.67                2.39
BBB                    5.52                –2.30                7.73                1.37
BB                     0.74                –2.72                80.53               –1.23
B                      0.26                –3.19                8.84                –2.04
CCC                    0.01                –3.24                1.00                –2.30
Default                0.06                                     1.06

Table III.B.5.8 shows the transition probabilities for two obligors rated BB and A respectively,
and the corresponding credit quality thresholds. The thresholds are given in terms of normalised
standard deviations. For example, for a BB-rated obligor the default threshold is –2.30 standard
deviations from the mean rate of return.
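
These thresholds can be reproduced by applying the inverse of the standard normal distribution function to the cumulative transition probabilities, as in the short Python sketch below (scipy is assumed to be available; the probabilities are those quoted for the BB-rated obligor):

from scipy.stats import norm

# One-year transition probabilities (%) for a BB-rated obligor, as quoted in the text.
probs = [("Default", 1.06), ("CCC", 1.00), ("B", 8.84), ("BB", 80.53),
         ("BBB", 7.73), ("A", 0.67), ("AA", 0.14), ("AAA", 0.03)]

# Z_CCC has 1.06% of the standard normal below it, Z_B has (1.06 + 1.00)% below it,
# and so on up to Z_AAA.
threshold_names = ["Z_CCC", "Z_B", "Z_BB", "Z_BBB", "Z_A", "Z_AA", "Z_AAA"]
cumulative = 0.0
for name, (state, prob) in zip(threshold_names, probs):
    cumulative += prob / 100.0
    print(name, round(norm.ppf(cumulative), 2))
# Output: -2.30, -2.04, -1.23, 1.37, 2.39, 2.93, 3.43, as in Table III.B.5.8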

This generalisation of Merton's model is quite easy to implement. It assumes that the normalised log-returns over any period of time are normally distributed with a mean of 0 and a variance of 1, and that the distribution is the same for all obligors within the same rating category. If pDef denotes the probability of the BB-rated obligor defaulting, then the critical asset value VDef is such that

pDef = Pr(Vt ≤ VDef),

which can be translated into a normalised threshold ZCCC such that the area in the left-hand tail below ZCCC is pDef.7 ZCCC is simply the threshold point in the standard normal distribution, N(0,1), corresponding to a cumulative probability of pDef. Then, based on the option pricing model, the critical asset value VDef which triggers default is such that ZCCC = –d2. This critical asset value VDef is also called the default point.8

7 See the Appendix for the derivation. In the next section we define the 'distance to default' as the distance between the expected asset value and the default point.
8 Note that d2 is different from its equivalent in the Black–Scholes formula since, here, we work with the 'actual' instead of the 'risk-neutral' return distributions, so that the drift term in d2 is the expected return on the firm's assets, instead of the risk-free interest rate as in Black–Scholes. See Chapter I.A.8 for the definition of d2 in Black–Scholes and Appendix 1 for d2 in the above derivation.


Note that only the threshold levels are necessary to derive the joint migration probabilities, and these can be calculated without observing the asset value or estimating its mean and variance. To derive the critical asset value VDef itself we only need to estimate the expected asset return µ and the asset volatility σ. Accordingly, ZB is the threshold point corresponding to a cumulative probability of being either in default or in rating CCC, i.e. pDef + pCCC, etc.

We mentioned above that, as asset returns are not directly observable, CreditMetrics makes use
of equity returns as their proxy. Yet using equity returns in this way is equivalent to assuming that
all the firm’s activities are financed by means of equity. This is a major drawback of the approach,
especially when it is being applied to highly leveraged companies. For those companies, equity
returns are substantially more volatile, and possibly less stationary, than the volatility of the firm’s
assets.

Now, assume that the correlation between the assets' rates of return is known, and is denoted by ρ, which is assumed to be equal to 0.2 in our example. The normalised log-returns on both assets follow a joint normal distribution:

f(rBB, rA; ρ) = 1 / (2π√(1 – ρ²)) exp{ –[rBB² – 2ρ rBB rA + rA²] / [2(1 – ρ²)] }

We can therefore compute the probability of both obligors being in any particular combination of ratings. For example, we can compute the probability that they will remain in the same rating classes, i.e. BB and A, respectively:

Pr(–1.23 < rBB < 1.37, –1.51 < rA < 1.98) = 0.7365

where rBB and rA are the rates of return on the assets of obligors BB and A, respectively, assumed
normally distributed as in Figure III.B.5.7.9

For any two obligors the joint probability of both obligors defaulting is

p1,2 = Pr[V1 ≤ VDef1, V2 ≤ VDef2]

where V1 and V2 denote the asset values of the two obligors at time t, and VDef1 and VDef2 are the corresponding default points. This joint probability may be calculated in exactly the same way as the migration probabilities were calculated above, i.e. using the bivariate normal distribution.

9 See Chapter II.E for details on how to compute joint probabilities when the two random variables have a bivariate
normal distribution.


Given this joint probability of default, and the individual probabilities of default for each obligor, p1 and p2, the default correlation can be calculated as:10

corr(Def1, Def2) = (p1,2 – p1 p2) / √[p1(1 – p1) p2(1 – p2)]     (III.B.5.1)

We can illustrate the results with a numerical example. If the probabilities of default for obligors rated A and BB are PDef(A) = 0.0006 and PDef(BB) = 0.0106, respectively, and the correlation coefficient between the rates of return on the two assets is ρ = 0.2, then the joint probability of default is only 0.000054.11 Now, using equation (III.B.5.1), we find that the correlation coefficient between the two default events is only 0.019.
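
The figures in this example can be verified with a few lines of Python using the bivariate normal distribution; scipy is assumed to be available and the inputs are those of the text:

from scipy.stats import norm, multivariate_normal

p_A, p_BB, rho = 0.0006, 0.0106, 0.2

# Default thresholds implied by the individual default probabilities.
z_A = norm.ppf(p_A)            # about -3.24
z_BB = norm.ppf(p_BB)          # about -2.30

# Joint probability that both normalised asset returns fall below their thresholds.
biv = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
p_joint = biv.cdf([z_A, z_BB])
print(round(p_joint, 6))       # about 0.000054

# Default correlation, equation (III.B.5.1).
corr = (p_joint - p_A * p_BB) / ((p_A * (1 - p_A)) ** 0.5 * (p_BB * (1 - p_BB)) ** 0.5)
print(round(corr, 3))          # about 0.019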

This example, with an asset return correlation of 0.2 but a default correlation of only 0.019, is not unusual. Asset return correlations are approximately 10 times larger than default correlations for asset correlations in the range of 0.2–0.6. This shows that the joint probability of default is in
fact quite sensitive to pairwise asset return correlations, and it illustrates how important it is to
estimate these data correctly if one is to assess the diversification effect within a portfolio. It can
be shown that the impact of correlations on credit VaR is quite large. It is larger for portfolios
with relatively low-grade credit quality than it is for high-grade portfolios. Indeed, as the credit
quality of the portfolio deteriorates and the expected number of defaults increases, this number is
magnified by an increase in default correlations.

III.B.5.3.3 Credit VaR of a Bond/Loan Portfolio


The analytic approach that we sketched out above for a portfolio with bonds issued by two
obligors is not practicable for large portfolios. Instead, CreditMetrics implements a Monte Carlo
simulation to generate the full distribution of the portfolio values at the credit horizon of one
year. The following steps are necessary:

1. Derive the asset return thresholds for each rating category.
2. Estimate the correlation between each pair of obligors' asset returns.
3. Generate return scenarios according to their joint normal distribution. A standard technique that is often used to generate correlated normal variables is the Cholesky decomposition.12 Each scenario is characterised by n standardised asset returns, one for each of the n obligors in the portfolio.
4. For each scenario, and for each obligor, map the standardised asset return into the corresponding rating, according to the threshold levels derived in step 1.
5. Given the spread curves, which apply for each rating, revalue the portfolio.
6. Repeat the procedure a large number of times, say 100,000, and plot the distribution of the portfolio values to obtain a graph such as Figure III.B.5.1.
7. Finally, derive the percentiles of the distribution of the future values of the portfolio to obtain the credit VaR and/or credit economic capital as in Figure III.B.5.2.

10 See Lucas (1995).
11 If the default events were independent, the joint probability of default would simply be the product of the two default probabilities, i.e. 0.0006 × 0.0106 = 0.0000064.
12 A good reference on Monte Carlo simulations and the Cholesky decomposition is Fishman (1997, p. 223).

Estimating VaR for credit requires a very large number of simulations as the loss distribution is
very skewed with very few observations in the tail. In order to reduce substantially the number of
simulations, say by a factor of 5–10, while maintaining the same level of accuracy, it is
recommended for practical applications to implement credit portfolio models with the use of
variance reduction techniques. ‘Importance sampling’ is a technique which produces remarkable
results for credit risk (Glasserman et al., 2000).
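
A stylised Python sketch of steps 3–6 above, for a two-obligor (BB and A) portfolio, is shown below. The thresholds are those of Table III.B.5.8 and the asset return correlation is 0.2 as in the earlier example, but the forward values per rating are purely illustrative placeholders (the text only gives the revaluation table for a BBB bond); this conveys the simulation logic only, not the CreditMetrics implementation itself:

import numpy as np

rng = np.random.default_rng(0)

# Z-thresholds (upper bound of each band, worst state first), from Table III.B.5.8.
thresholds = {
    "BB": [(-2.30, "Default"), (-2.04, "CCC"), (-1.23, "B"), (1.37, "BB"),
           (2.39, "BBB"), (2.93, "A"), (3.43, "AA"), (np.inf, "AAA")],
    "A":  [(-3.24, "Default"), (-3.19, "CCC"), (-2.72, "B"), (-2.30, "BB"),
           (-1.51, "BBB"), (1.98, "A"), (3.12, "AA"), (np.inf, "AAA")],
}
# One-year forward values by year-end rating for each position: illustrative figures only.
values = {
    "BB": {"AAA": 103.4, "AA": 103.2, "A": 102.9, "BBB": 102.4, "BB": 101.2,
           "B": 98.1, "CCC": 83.9, "Default": 51.1},
    "A":  {"AAA": 109.4, "AA": 109.2, "A": 108.6, "BBB": 107.5, "BB": 102.0,
           "B": 98.1, "CCC": 83.6, "Default": 51.1},
}
obligors = ["BB", "A"]
chol = np.linalg.cholesky(np.array([[1.0, 0.2], [0.2, 1.0]]))    # step 3: Cholesky factor

def rating_from_return(r, bands):
    # Step 4: map a standardised asset return into the year-end rating.
    for z, rating in bands:
        if r <= z:
            return rating

n_sims = 100_000
portfolio_values = np.empty(n_sims)
for i in range(n_sims):
    z = chol @ rng.standard_normal(2)                            # correlated asset returns
    portfolio_values[i] = sum(values[name][rating_from_return(z[k], thresholds[name])]
                              for k, name in enumerate(obligors))  # step 5: revalue

losses = portfolio_values.mean() - portfolio_values
print(round(np.percentile(losses, 99), 2))   # credit VaR at 99%, relative to the mean forward value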

III.B.5.4 Conditional Transition Probabilities – CreditPortfolioView


CreditPortfolioView is a multi-factor model that is used to simulate the joint conditional
distribution of default and migration probabilities for various rating groups in different industries,
and for each country, conditional on the value of macro-economic factors. CreditPortfolioView
is based on the observation that default probabilities and credit migration probabilities are linked
to the economy. When the economy worsens both downgrades and defaults increase; when the
economy becomes stronger, the contrary holds true. In other words, credit cycles follow
business cycles closely.

Since the shape of the economy is, to a large extent, driven by macro-economic factors,
CreditPortfolioView proposes a methodology to link those macro-economic factors to default
and migration probabilities. It employs the values of macro-economic factors such as the
unemployment rate, the rate of growth in GDP, the level of long-term interest rates, foreign
exchange rates, government expenditures and the aggregate savings rate.

Provided that data are available, this methodology can be applied in each country to various
sectors and various classes of obligors that react differently during the business cycle – sectors
such as construction, financial institutions, agriculture, and services. It applies better to
speculative-grade obligors whose default probabilities vary substantially with the credit cycle, than
to investment-grade obligors whose default probabilities are more stable.


Conditional default probabilities are modelled as a logit function, whereby the independent variable is a country-specific index that depends upon current and lagged macro-economic variables. That is:13

Pj,t = 1 / (1 + e^(–Yj,t))

where Pj,t is the conditional probability of default in period t, for speculative-grade obligors in country/industry j, and Yj,t is the country index value derived from a multi-factor model.14

In order to derive the conditional transition matrix the (unconditional) transition matrix based on
Moody’s or Standard & Poor’s historical data will be used. These transition probabilities are
unconditional in the sense that they are historical averages based on more than 20 years of data
covering several business cycles, across many different countries and industries. As we discussed
earlier, default probabilities for non-investment grade obligors are higher than average during a
period of recession. Also credit downgrades increase, while upward migrations decrease. The
opposite holds during a period of economic expansion. We can express this in the following way:
Pj,t / P̄j,t > 1 in an economic recession,
Pj,t / P̄j,t < 1 in an economic expansion,     (III.B.5.2)

where P̄j,t is the unconditional (historical average) probability of default in period t, for speculative-grade obligors in country/industry j. CreditPortfolioView proposes to use (III.B.5.2) to adjust the unconditional transition probabilities in order to produce a transition matrix Mt that is conditional on the state of the economy:

Mt = M(Pj,t / P̄j,t)

where the adjustment consists in shifting the probability mass toward downgraded and defaulted states when the ratio Pj,t/P̄j,t is greater than one, and in the opposite direction if the ratio is less than one. Since one can simulate Pj,t over any time horizon t = 1, …, T, this approach can generate multi-period transition matrices:

MT = ∏(t=1,…,T) M(Pj,t / P̄j,t).     (III.B.5.3)

13 Note that the logit function ensures that the probability takes a value between 0 and 1.
14 J.P. Morgan (1997) provides an example of multi-factor model in the context of credit risk modelling. For a review
of multifactor models, see Elton and Gruber (1995) and Rudd and Clasing (1988).


One can simulate the transition matrix (III.B.5.3) many times to generate a distribution of the
cumulative conditional default probability such as that shown in Figure III.B.5.8 for any rating
over any time period. The same Monte Carlo methodology can be used to produce the
conditional cumulative distributions of migration probabilities over any time horizon.
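
The sketch below illustrates, in Python, the logit link and one crude way of shifting an unconditional transition matrix by the ratio Pj,t/P̄j,t. The matrix, the index value and the shifting rule (scaling the default column and renormalising each row) are simplified assumptions of ours; CreditPortfolioView's own adjustment procedure is more elaborate:

import numpy as np

def conditional_pd(y_index):
    # Logit link between the macro-economic index Y_{j,t} and the conditional PD.
    return 1.0 / (1.0 + np.exp(-y_index))

# Illustrative unconditional transition matrix for speculative-grade obligors
# (rows: current rating BB, B; columns: BB, B, Default). Placeholder figures.
M = np.array([
    [0.90, 0.08, 0.02],
    [0.10, 0.83, 0.07],
])

def shift_matrix(M, ratio):
    # Crude conditional adjustment: scale the default column by P/P-bar and
    # renormalise the remaining probabilities in each row. A stand-in only for
    # the model's own shifting procedure.
    M_cond = M.copy()
    M_cond[:, -1] *= ratio                    # more default mass when ratio > 1 (recession)
    scale = (1.0 - M_cond[:, -1]) / M_cond[:, :-1].sum(axis=1)
    M_cond[:, :-1] *= scale[:, None]
    return M_cond

p_bar = 0.05                                  # assumed unconditional average PD
p_t = conditional_pd(-2.5)                    # conditional PD from the index, about 0.076
M_conditional = shift_matrix(M, p_t / p_bar)  # rows still sum to one
print(M_conditional)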

Figure III.B.5.8: Distribution of the cumulative conditional default probability, for a given rating, over a given time horizon T
[Figure: frequency distribution of the cumulative conditional default probability between 0 and 1, with the expected (average) default probability and the 99th percentile marked]

CreditPortfolioView and KMV (described in Section III.B.5.6) base their approach on the
empirical observation that default and migration probabilities vary over time. KMV adopts a
micro-economic approach that relates the probability of default of any obligor to the market
value of its assets. CreditPortfolioView proposes a methodology that links macro-economic
factors to default and migration probabilities. The calibration of CreditPortfolioView thus
requires reliable default data for each country, and possibly for each industry sector within each
country.

Another limitation of the model is the ad hoc adjustment of the transition matrix. It is not clear
that the proposed methodology performs better than a simple Bayesian model, where the
revision of the transition probabilities would be based on the internal expertise accumulated by
the credit department of the bank, and an internal appreciation of the current stage of the credit
cycle (given the quality of the bank’s credit portfolio). These two approaches are somewhat
related since the market value of the firms’ assets depends on the shape of the economy; it would
be interesting to compare the transition matrices produced by both models.

III.B.5.5 The Contingent Claim Approach to Measuring Credit Risk
The CreditMetrics approach to measuring credit risk, as described previously, is rather appealing
as a methodology. Unfortunately it has a major weakness: reliance on ratings transition

Copyright© 2004 M.Crouhy, D. Galai, R. Mark and the Professional Risk Managers’ International Association 22
The PRM Handbook – III.B.5 Portfolio Models of Credit Loss

probabilities that are based on average historical frequencies of defaults and credit migration. As
a result, the accuracy of CreditMetrics calculations depends upon two critical assumptions: first,
that all firms within the same rating class have the same default rate and the same spread curve,
even when recovery rates differ among obligors; and second, that the actual default rate is equal
to the historical average default rate. Credit rating changes and credit quality changes are taken to
be identical. Credit rating and default rates are also synonymous, i.e. the rating changes when the
default rate is adjusted, and vice versa. This view has been strongly challenged by researchers
working for the consulting and software corporation KMV.15 Indeed, the assumption cannot be
true since we know that default rates evolve continuously, while ratings are adjusted in a discrete
fashion. (This lag is because rating agencies necessarily take time to upgrade or downgrade
companies whose default risk has changed.)

What we call the ‘structural’ approach offers an alternative to the credit migration approach.
Here, the economic value of default is presented as a put option on the value of the firm’s assets.
The merit of this approach is that each firm can be analysed individually based on its unique
features. But this is also the principal drawback, since the information required for such an
analysis is rarely available to the bank or the investor. One has to estimate the total value, and the
risk (e.g. the volatility) of the firm’s assets. The option pricing approach, introduced by Merton
(1974) in a seminal paper, builds on the limited liability rule which allows shareholders to default
on their obligations while they surrender the firm’s assets to the various stakeholders, according
to pre-specified priority rules. The firm’s liabilities are thus viewed as contingent claims issued
against the firm’s assets, with the payoffs to the various debt-holders completely specified by
seniority and safety covenants. Default occurs at debt maturity whenever the firm’s asset value
falls short of debt value at that time. In this model, the loss rate is endogenously determined and
depends on the firm’s asset value, volatility, and the default-free interest rate for the debt
maturity.

III.B.5.5.1 Structural Model of Default Risk: Merton’s (1974) Model


To determine the value of the credit risk arising from a bank loan, we first make two
assumptions: that the loan is the only debt instrument of the firm, and that the only other source
of financing is equity. In this case, as we shall see below, the credit value is equal to the value of a
put option on the value of assets of the firm, at a strike price equal to the face value of debt
(including accrued interest), maturing at the maturity of the debt. By purchasing the put on the
assets of the firm for the term of the debt, with a strike price equal to the face value of the loan,
the bank can completely eliminate all the credit risk and convert the risky corporate loan into a

15 KMV is a trademark of KMV Corporation. The initials KMV stand for the surnames of Stephen Kealhofer, John
McQuown and Oldrich Vasicek who founded KMV Corporation in 1989. Kealhofer and Vasicek are former
academics from the University of California at Berkeley.


riskless loan. Thus, the cost of eliminating the credit risk associated with providing a loan to the
firm is the value of this put option. Now, if we make the assumptions that are needed to apply
the Black–Scholes (BS) model (Black and Scholes, 1973) to equity and debt instruments, we can
express the value of the credit risk in an option-like formula.

Consider a firm with risky assets V, which is financed by equity, S, and by one debt obligation,
maturing at time T with face value (including accrued interest) of F and market value B. If we
assume that markets are frictionless, with no taxes, and there is no bankruptcy cost, then the
value of the firm’s assets is simply the sum of the firm’s equity and debt. At time t = 0 then,
V0 S 0  B0 . (III.B.5.4)

The loan to the firm is subject to credit risk, namely the risk that at time T the value of the firm’s
assets VT , will be below the obligation to the debt holders, F. Credit risk exists as long as the
probability of default, Pr(VT < F), is greater than zero.

From the viewpoint of a bank that makes a loan to the firm, this gives rise to a series of
questions. Can the bank eliminate/reduce credit risk, and at what price? What is the economic
cost of reducing credit risk? And, what are the factors affecting this cost? In this simple
framework, credit risk is a function of the financial structure of the firm, i.e.

• its leverage ratio L ≡ Fe^(–rT)/V0, where V0 is the present value of the firm's assets and Fe^(–rT) is the present value of the debt obligation at maturity,
• the volatility σ of the firm's assets, and
• the time T to maturity of the debt.

The model was initially suggested by Merton (1974) and further analysed by Galai and Masulis
(1976). To understand why the credit value is equal to the value of a put option on the value of
assets of the firm, at a strike price equal to the face value of debt (including accrued interest), and
with maturity equal to the maturing of the debt, consider Table III.B.5.9. This shows that if the
bank buys the put option with value P, the value at time T will be F whether VT d F or VT > F,
so credit risk is eliminated. In this way they can convert the risky corporate loan into a riskless
loan with a face value of F.


Table III.B.5.9: Bank’s payoff at times 0 and T for making a loan and buying a put option

Time                     0             T
Value of assets          V0            VT ≤ F       VT > F
Bank's position:
(a) make a loan          –B0           VT           F
(b) buy a put            –P0           F – VT       0
Total                    –B0 – P0      F            F

Thus the value of the put option is the cost of eliminating the credit risk associated with providing a loan to the firm. If we make the assumptions that are needed to apply the BS model to equity and debt instruments (see Galai and Masulis, 1976, for a detailed discussion of the assumptions), we can write the value of the put as:

P0 = –N(–d1)V0 + Fe^(–rT) N(–d2),     (III.B.5.5)

where P0 is the current value of the put, N(.) is the cumulative standard normal distribution,

d1 = [ln(V0/F) + (r + σ²/2)T] / (σ√T) = [ln(V0/Fe^(–rT)) + σ²T/2] / (σ√T),     d2 = d1 – σ√T,     (III.B.5.6)

and σ is the standard deviation of the rate of return of the firm's assets. The model illustrates that the credit risk, and its cost, is an increasing function of the volatility of the firm's assets σ and of the time interval T until the debt is paid back, and a decreasing function of the risk-free interest rate r (the higher is r, the less costly it is to reduce credit risk). The cost of credit risk is also a homogeneous function of the leverage ratio, which means that it stays constant for a scale expansion of Fe^(–rT)/V0.

Note that, when the probability of default is greater than zero, the yield to maturity on the debt, yT, must be greater than the risk-free rate r, so that the default spread πT = yT – r that compensates the bond holders for the default risk that they bear is positive. The default spread can be regarded as a risk premium associated with holding risky bonds. It can be shown that, in the Merton (1974) framework, the default spread can be computed exactly as a function of the leverage ratio, the volatility of the underlying assets and the debt maturity. In fact:

πT = yT – r = –(1/T) ln[ N(d2) + (V0/Fe^(–rT)) N(–d1) ].

Note that the default spread decreases when the risk-free rate increases. The greater the risk-free
rate, the less risky is the bond and the lower is the value of the put protection – therefore, the
lower is the risk premium. The numerical examples in Table III.B.5.10 show the default spread
for various levels of volatility and different leverage ratios.


Table III.B.5.10: Default spread for corporate debt

(for V0 = 100, T = 1, and r = 10%)16

                         Volatility of underlying asset: σ
Leverage ratio: L     0.05      0.10      0.20      0.40
0.5 0 0 0 1.0%
0.6 0 0 0.1% 2.5%
0.7 0 0 0.4% 5.6%
0.8 0 0.1% 1.5% 8.4%
0.9 0.1% 0.8% 4.1% 12.5%
1.0 2.1% 3.1% 8.3% 17.3%

Example III.B.5.1

We show how the 5.6% default spread (the entry for L = 0.7 and σ = 0.4) in Table III.B.5.10 was obtained. Using equations (III.B.5.5) and (III.B.5.6) with V0 = 100, T = 1, r = 0.1 (i.e. 10%) and σ = 0.4 (i.e. 40%), with the leverage ratio L = 70%, we obtain S0 = 33.37 for the value of equity and B0 = 66.63 for the value of the corporate risky debt. Since L = Fe^(–rT)/V0, we have F = 77. Therefore the yield on the loan is (77/66.63) – 1 = 0.156, and there is a 5.6% risk premium to reflect the credit risk.

The model also shows that the put value is P0 = 3.37. Hence the cost of eliminating the credit risk is $3.37 for $100 worth of the firm's assets, where the face value (i.e. the principal amount plus the promised interest rate) of the one-year debt is 77. This cost drops to 25 cents when volatility decreases to 20%, and to 0 for 10% volatility. The assets' volatility is clearly a critical factor in determining credit risk. To demonstrate that the bank eliminates all its credit risk by buying the put, we can compute the yield on the bank's position as

F/(B0 + P0) = 77/(66.63 + 3.37) = 1.10,

which translates to a riskless yield of 10% per annum.
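
The figures of Example III.B.5.1, the corresponding entry of Table III.B.5.10 and the hedge ratio used in Example III.B.5.2 below can be reproduced with a short Python script implementing equations (III.B.5.5) and (III.B.5.6); scipy is assumed, and the continuously compounded rate ln(1.1) ≈ 9.5% is used, as indicated in footnote 16:

from math import exp, log, sqrt
from scipy.stats import norm

V0, T, sigma, L = 100.0, 1.0, 0.40, 0.70
r = log(1.10)                              # 10% discretely compounded, i.e. 9.5% continuous
F = L * V0 * exp(r * T)                    # face value of the debt, 77

d1 = (log(V0 / F) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)

S0 = V0 * norm.cdf(d1) - F * exp(-r * T) * norm.cdf(d2)      # equity, as in (III.B.5.7) below
P0 = -V0 * norm.cdf(-d1) + F * exp(-r * T) * norm.cdf(-d2)   # put value, (III.B.5.5)
B0 = V0 - S0                                                 # risky debt

print(round(S0, 2), round(B0, 2), round(P0, 2))    # about 33.37, 66.63, 3.37
print(round(F / B0 - 1 - 0.10, 3))                 # default spread, about 0.056
print(round(norm.cdf(-d1) / norm.cdf(d1), 3))      # hedge ratio of Example III.B.5.2, about 0.159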

III.B.5.5.2 Estimating Credit Risk as a Function of Equity Value


We have already shown that the cost of eliminating credit risk can be derived from the value of
the firm’s assets. A practical problem arises over how easy it is to observe V. In some cases, if
both equity and debt are traded, V can be reconstructed by adding the market values of both
equity and debt. However, corporate loans are not often traded and so, to all intents and

16 10% is the annualised interest rate discretely compounded, which is equivalent to 9.5% continuously compounded.


purposes, we can only observe equity. The question, then, is whether the risk of default can be
hedged by trading shares and derivatives on the firm’s stock.

In the Merton framework, equity itself is a contingent claim on the firm's assets. Its value can be expressed as a function of the same parameters as the put option:

S = V N(d1) – Fe^(–rT) N(d2).     (III.B.5.7)

A put can be created synthetically by selling short N(–d1) units of the firm's assets and buying N(–d2) units of government bonds maturing at T, with face value F. If one sells short N(–d1)/N(d1) units of the stock S, one effectively creates a short position in the firm's assets of N(–d1) units, since:

–[N(–d1)/N(d1)] S = –V N(–d1) + Fe^(–rT) N(d2) N(–d1)/N(d1).

Therefore, if V is not directly traded or observed, one can create a put option dynamically by
selling short the appropriate number of shares. The equivalence between the put and the
synthetic put is valid over short time intervals, and must be readjusted frequently with changes in
S and in time left to debt maturity.

Example III.B.5.2

Using the data from Example III.B.5.1, N(–d1)/N(d1) = 0.137/0.863 = 0.159. This means that in order to insure against the default of a one-year loan with a maturity value of 77, for a firm with a current market value of assets of 100, the bank should sell short 0.159 of the outstanding equity. Note that the outstanding equity is equivalent to a holding of N(d1) = 0.863 of the firm's assets, so that shorting 0.159 of the equity is equivalent to shorting 0.137 of the firm's assets.

The question now is whether we can use a put option on equity in order to hedge the default risk. It should be remembered that equity itself reflects the default risk, and as a contingent claim its instantaneous volatility σS can be expressed as:

σS = ηS,V σ     (III.B.5.8)

where ηS,V = N(d1)V/S is the instantaneous elasticity of equity with respect to the firm's value, and ηS,V ≥ 1. Since σS is stochastic and changes with V, the conventional BS model cannot be applied to the valuation of puts and calls on S. The BS model requires σ to be constant, or to follow a deterministic path over the life of the option. However, in practice, for long-term options, the estimated σS from (III.B.5.8) is not expected to change widely from day to day.


Therefore, equation (III.B.5.8) can be used in the context of BS estimation of long-term options,
even when the underlying instrument does not follow a stationary lognormal distribution.

III.B.5.6 The KMV Approach


KMV derives the expected default frequency (EDF), i.e. the default probability, for each obligor based
on the Merton (1974) type of model. The probability of default is thus a function of the firm’s
capital structure, the volatility of the asset returns and the current asset value. The EDF is firm-
specific, and can be mapped onto any rating system to derive the equivalent rating of the obligor.
EDFs can be viewed as a ‘cardinal ranking’ of obligors relative to default risk, instead of the more
conventional ‘ordinal ranking’ proposed by rating agencies (which relies on letters such as AAA,
AA, …).

Contrary to CreditMetrics, KMV’s model does not make any explicit reference to the transition
probabilities which, in KMV’s methodology, are already embedded in the EDFs. Indeed, each
value of the EDF is associated with a spread curve and an implied credit rating.

Credit risk in the KMV approach is essentially driven by the dynamics of the asset value of the
issuer. Given the capital structure of the firm,17 and once the stochastic process for the asset
value has been specified, the actual probability of default for any time horizon, one year, two years,
etc., can be derived. Figure III.B.5.9 depicts how the probability of default relates to the
distribution of asset returns and the capital structure of the firm.

We assume that the firm has a very simple capital structure. It is financed by means of equity, St,
and a single zero-coupon debt instrument maturing at time T, with face value F, and current
market value Bt. The firm's balance sheet can be represented as follows: Vt = Bt(F) + St, where Vt is the value of all the assets. The value of the firm's assets, Vt, is assumed to follow a standard
geometric Brownian motion. In this framework, default only occurs at maturity of the debt
obligation, when the value of assets is less than the promised payment F to the bondholders.
Figure III.B.5.9 shows the distribution of the assets’ value at time T, the maturity of the zero-
coupon debt, and the probability of default which is the shaded area below F.

17 That is, the composition of its liabilities: equity, short- and long-term debt, convertible bonds, etc.


Figure III.B.5.9: Distribution of the firm's assets value at maturity of the debt obligation
[Figure: distribution of the asset value VT at the maturity T of the zero-coupon debt, starting from V0, with expected value E(VT) = V0 exp(µT); the shaded area below the face value F is the probability of default]

The KMV approach is best applied to publicly traded companies, where the value of the equity is
determined by the stock market. The information contained in the firm’s stock price and balance
sheet can then be translated into an implied risk of default, as shown in the next section. The
derivation of the actual probabilities of default proceeds in three stages:
• estimation of the market value and volatility of the firm's assets;
• calculation of the distance to default, which is an index measure of default risk; and
• scaling of the distance to default to actual probabilities of default using a default database.

III.B.5.6.1 Estimation of the Asset Value VA and the Volatility of Asset Return σA
In the contingent claim approach to the pricing of corporate securities, the market value of the
firm’s assets is assumed to be lognormally distributed, i.e. the log-asset return follows a normal
distribution.18 This assumption is quite robust and, according to KMV’s own empirical studies,
actual data conform quite well to this hypothesis.19 In addition, the distribution of asset returns is
stable over time, i.e. the volatility of asset returns remains relatively constant.

As we discussed earlier, if all the liabilities of the firm were traded, and marked to market every
day, then the task of assessing the market value of the firm’s assets and its volatility would be

18 Financial models consider essentially market values of assets, and not accounting values, or book values, which only
represent the historical cost of the physical assets, net of their depreciation. Only the market value is a good measure of
the value of the firm’s ongoing business and it changes as market participants revise the firm’s future prospects. KMV
models the market value of assets. In fact, there might be huge differences between both the market and the book
values of total assets. For example, as of February 1998 KMV has estimated the market value of Microsoft assets at
US$228.6 billion versus $16.8 billion for their book value, while for Trump Hotel and Casino the book value, which
amounts to $2.5 billion, is higher than the market value of $1.8 billion.
19 The exception is when the firm’s portfolio of businesses has changed substantially through mergers and acquisitions,
or restructuring.


straightforward. The firm’s asset value would be simply the sum of the market values of the
firm’s liabilities, and the volatility of the asset return could be simply derived from the historical
time series of the reconstituted asset value. In practice, however, only the price of equity for most
public firms is directly observable, and in some cases part of the debt is actively traded.

The alternative approach to assets valuation consists of applying the option-pricing model to the
valuation of corporate liabilities as suggested in Merton (1974). In order to make their model
tractable KMV assume that the capital structure of a corporation is composed solely of equity,
short-term debt (considered equivalent to cash), long-term debt (in perpetuity), and convertible
preferred shares.20 Given these simplifying assumptions, it is possible to derive analytical
solutions for the value of equity, S, and its volatility, σS:

S = f(V, σ, L, c, r),     (III.B.5.9)
σS = g(V, σ, L, c, r),     (III.B.5.10)

where L denotes the leverage ratio in the capital structure, c is the average coupon paid on the
long-term debt and r is the risk-free interest rate.

If σS were directly observable, like the stock price, we could simultaneously solve (III.B.5.9) and (III.B.5.10) for V and σ. But the instantaneous equity volatility, σS, is relatively unstable, and is in fact quite sensitive to the change in asset value; there is no simple way to measure σS precisely from market data. Since only the value of equity S is directly observable, we can back out V from (III.B.5.9) so that it becomes a function of the observed equity value, or stock price, and the volatility of asset returns:

V = h(S, σ, L, c, r).     (III.B.5.11)

Here volatility is an implicit function of V, S, L, c and r. So to calibrate the model for σ, KMV uses an iterative technique.
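
KMV's functions f, g and h reflect a richer capital structure, and its calibration procedure is proprietary. Purely as an illustration of the idea of backing out V and σ from equity data, the sketch below solves the simple Merton relations (III.B.5.7) and (III.B.5.8) numerically for hypothetical inputs, using scipy's root finder; this standard two-equation calibration is a stand-in, not KMV's method, which avoids relying directly on an estimate of σS:

from math import exp, log, sqrt
from scipy.stats import norm
from scipy.optimize import fsolve

def merton_equity(V, sigma, F, r, T):
    # Equity value and equity volatility implied by equations (III.B.5.7) and (III.B.5.8).
    d1 = (log(V / F) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    S = V * norm.cdf(d1) - F * exp(-r * T) * norm.cdf(d2)
    sigma_S = norm.cdf(d1) * V / S * sigma
    return S, sigma_S

def calibrate_assets(S_obs, sigmaS_obs, F, r, T):
    # Back out (V, sigma) from an observed equity value and equity volatility.
    def residuals(x):
        V, sigma = x
        S, sigma_S = merton_equity(V, sigma, F, r, T)
        return [S - S_obs, sigma_S - sigmaS_obs]
    V_guess = S_obs + F * exp(-r * T)          # start from assets = equity + PV of debt
    sigma_guess = sigmaS_obs * S_obs / V_guess
    return fsolve(residuals, [V_guess, sigma_guess])

# Hypothetical inputs: equity value 35, equity volatility 90%, debt face value 77,
# risk-free rate 9.5% (continuous), one-year horizon.
V_hat, sigma_hat = calibrate_assets(35.0, 0.90, 77.0, 0.095, 1.0)
print(round(V_hat, 1), round(sigma_hat, 3))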

III.B.5.6.2 Calculation of the ‘Distance to Default’


Using a sample of several hundred companies, KMV observed that firms default when the asset
value reaches a level that is somewhere between the value of total liabilities and the value of
short-term debt. Therefore, the tail of the distribution of asset values below total debt value may
not be an accurate measure of the actual probability of default. Loss of accuracy may also result
from factors such as the non-normality of the asset return distribution, and the simplifying
assumptions made about the capital structure of the firm. This may be further aggravated if a

20 In the general case the resolution of this model may require the implementation of complex numerical techniques,
with no analytical solution, due to the complexity of the boundary conditions attached to the various liabilities. See, for
example, Vasicek (1997).


company is able to draw on (otherwise unobservable) lines of credit. If the company is in distress,
using these lines may (unexpectedly) increase its liabilities while providing the necessary cash to
honour promised payments.

For all these reasons, KMV implements an intermediate phase before computing the probabilities
of default. As shown in Figure III.B.5.10, which is similar to Figure III.B.5.9, KMV computes an
index called distance to default (DD). This is the number of standard deviations between the mean
of the distribution of the asset value, and a critical threshold called the ‘default point’ (DPT)
which is set at the par value of current liabilities including short-term debt to be serviced over the
time horizon (STD), plus half the long-term debt (LTD), i.e. STD + LTD/2. If the expected asset
value in one year is E(V1) and σ is the standard deviation of future asset returns, then

DD = [E(V1) – DPT] / σ.

Figure III.B.5.10: Distance to default
[Figure: distribution of the asset value at the horizon T, starting from V0 with expected value E(VT) = V0 exp(µT); the distance to default is the number of standard deviations between the expected asset value and the default point, and the shaded area below the default point is the probability of default]

Given the lognormality assumption of asset values, the distance to default expressed in units of asset return standard deviation at time horizon T is

DD = [ln(V0/DPTT) + (µ – σ²/2)T] / (σ√T)     (III.B.5.12)

where
V0 = current market value of assets
DPTT = default point at time horizon T
µ = expected return on assets, net of cash outflows
σ = annualised asset volatility.

It follows that the shaded area shown below the default point in Figure III.B.5.10 is equal to N(–DD).

III.B.5.6.3 Derivation of the Probabilities of Default from the Distance to Default


This last phase consists of mapping the distance to default to the actual probabilities of default,
for a given time horizon. KMV calls these probabilities expected default frequencies. Using historical
information about a large sample of firms, including firms that have defaulted, one can estimate,
for each time horizon, the proportion of firms of a given ranking, say DD = 4, that actually
defaulted after one year. This proportion, say 40 bp, or 0.4%, is the EDF as shown in Figure
III.B.5.11.

Figure III.B.5.11: Mapping of the 'distance to default' into the EDFs, for a given time horizon
[Figure: EDF plotted against DD (from 1 to 6); a DD of 4 maps to an EDF of 40 bp]

Example III.B.5.3:

Current market value of assets: V0 = 1000
Net expected growth of assets per annum: 20%
Expected asset value in one year: V0 × 1.20 = 1200
Annualised asset volatility, σ: 100
Default point: 800

Then DD = (1200 – 800)/100 = 4. Assume that among the population of 5000 firms with a DD of 4 at one point in time, 20 defaulted one year later. Then EDF1year = 20/5000 = 0.004 = 0.4%, or 40 bp. The implied rating for this probability of default is BB+.
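
Example III.B.5.3 can be written out in a few lines of Python. The mapping from DD to EDF is shown as a small hypothetical lookup table standing in for KMV's proprietary default database; only the entry DD = 4 → 0.4% comes from the text:

def distance_to_default(expected_asset_value, default_point, asset_volatility):
    # One-year DD as in Example III.B.5.3 (volatility expressed in value terms here).
    return (expected_asset_value - default_point) / asset_volatility

# Hypothetical mapping from DD to one-year EDF (in %), standing in for a default
# database; only the entry DD = 4 -> 0.4% is taken from the text.
EDF_TABLE = {1: 10.0, 2: 4.0, 3: 1.2, 4: 0.4, 5: 0.15, 6: 0.05}

V0 = 1000.0
expected_V1 = V0 * 1.20        # 20% net expected growth of assets
sigma = 100.0                  # annualised asset volatility, in value terms
default_point = 800.0

dd = distance_to_default(expected_V1, default_point, sigma)
print(dd)                      # 4.0
print(EDF_TABLE[round(dd)])    # 0.4 (i.e. 40 bp)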


Example III.B.5.4: Federal Express ($ figures are in billions of US$)

This example is provided by KMV and relates to Federal Express on two different dates: November 1997 and February 1998.

                                    November 1997                        February 1998
Market capitalisation               $7.7                                 $7.3
(price × shares outstanding)
Book liabilities                    $4.7                                 $4.9
Market value of assets              $12.6                                $12.2
Asset volatility                    15%                                  17%
Default point                       $3.4                                 $3.5
Distance to default (DD)            (12.6 – 3.4)/(0.15 × 12.6) = 4.9     (12.2 – 3.5)/(0.17 × 12.2) = 4.2
EDF                                 0.06% (6 bp) ≡ AA–                   0.11% (11 bp) ≡ A–

This example illustrates the main causes of changes for an EDF, i.e. variations in the stock price,
the debt level (leverage ratio), and asset volatility (i.e. the perceived degree of uncertainty
concerning the value of the business).

III.B.5.6.4 EDF as a Predictor of Default


KMV has provided a ‘Credit Monitor’ service for estimated EDFs since 1993. EDFs have proved
to be a useful leading indicator of default, or at least of the degradation of the creditworthiness of
issuers. When the financial situation of a company starts to deteriorate, EDFs tend to shoot up
quickly until default occurs, as shown in Figure III.B.5.12. Figure III.B.5.13 shows the evolution
of equity value and asset value, as well as the default point during the same period. On the
vertical axis of both graphs the EDF is shown as a percentage, together with the corresponding
Standard & Poor’s rating.


Figure III.B.5.12: EDF of a firm that defaulted versus EDFs of firms in various quartiles
and the lower decile

Source: KMV Corporation

Note: The quartiles and decile represent a range of EDFs for a specific credit class.

Figure III.B.5.13: Asset value, equity value, short- and long-term debt of a firm that
defaulted

Source: KMV Corporation


Figure III.B.5.14: EDF of a firm that defaulted, versus Standard & Poor's rating
[Figure: the firm's EDF rising sharply ahead of default, compared with its S&P credit rating]
Source: KMV Corporation

KMV has analysed more than 2000 US companies that have defaulted or entered into bankruptcy
over the last 20 years. These firms belonged to a large sample of more than 100,000 company-
years with data provided by Compustat. In all cases KMV has shown a sharp increase in the slope
of the EDF a year or two before default. Changes in EDFs tend to anticipate – by at least one year – the downgrading of the issuer by rating agencies such as Moody's and Standard & Poor's
(Figure III.B.5.14). Contrary to Moody’s and Standard & Poor’s historical default statistics, EDFs
are not biased by periods of high or low numbers of defaults. The distance to default can be
observed to shorten during periods of recession, when default rates are high, and to increase
during periods of prosperity characterised by low default rates.

The loss distribution is generated in a way similar to CreditMetrics, by simulating correlated defaults at the risk horizon, say one year.

III.B.5.7 The Actuarial Approach


In the structural models of default, default occurs when the asset value falls below a certain
boundary such as a promised payment (e.g., the Merton, 1974, framework). By contrast, the
actuarial model discussed in this section treat the firm’s bankruptcy process, including recovery,
as exogenous. CreditRisk+, released in late 1997 by investment bank Credit Suisse Financial
Products, is a purely actuarial model, based on mortality models of the insurance companies. This
means that the probabilities of default that the model employs are based on historical statistical
data of default experience by credit class.


Contrary to the structural approach to modelling default, the timing of default is assumed to take the bond-holders 'by surprise'. Default is treated as an 'end of game' (a stopping time) which comes as a surprise, and the number of such surprises is assumed to follow a Poisson-type distribution.

CreditRisk+ applies an actuarial science framework to the derivation of the loss distribution of a
bond/loan portfolio. Only default risk is modelled; downgrade risk is ignored. Unlike the KMV
approach to modelling default, there is no attempt to relate default risk to the capital structure of
the firm. Also, no assumptions are made about the causes of default. It is assumed that:
1. for a loan, the probability of default in a given period, say one month, is the same as in
any other month;
2. for a large number of obligors, the probability of default by any particular obligor is
small, and the number of defaults that occur in any given period is independent of the
number of defaults that occur in any other period.

Under these assumptions, the probability distribution for the number of defaults during a given period of time (say, one year) is represented well by a Poisson distribution:21

Pr(n defaults) = µⁿ e^(–µ) / n!     for n = 0, 1, 2, …,     (III.B.5.13)

where µ is the average number of defaults per year.

The annual number of defaults n is a stochastic variable with mean µ and standard deviation √µ. The Poisson distribution has a useful property: it can be fully specified by means of a single parameter, µ. For example, if we assume µ = 3, then the probability of 'no default' in the next year is:

Pr(0 defaults) = 3⁰ e^(–3) / 0! = 0.05 = 5%,

while the probability of exactly three defaults is:

Pr(3 defaults) = 3³ e^(–3) / 3! = 0.224 = 22.4%.
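
These Poisson probabilities are easy to check, either by evaluating (III.B.5.13) directly or with scipy, as in the following lines:

from math import exp, factorial
from scipy.stats import poisson

mu = 3.0                                            # average number of defaults per year

# Direct evaluation of (III.B.5.13).
print(round(mu ** 0 * exp(-mu) / factorial(0), 3))  # Pr(0 defaults), about 0.050
print(round(mu ** 3 * exp(-mu) / factorial(3), 3))  # Pr(3 defaults), about 0.224

# The same figures from scipy's Poisson distribution.
print(round(poisson.pmf(0, mu), 3), round(poisson.pmf(3, mu), 3))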

21 In any portfolio there is, naturally, a finite number of obligors, say n; therefore, the Poisson distribution, which specifies the probability of n defaults for n = 1, …, ∞, is only an approximation. However, if n is large enough, then the sum of the probabilities of n + 1, n + 2, … defaults becomes negligible.


However, we expect the mean default rate µ to change over time, depending on the business
cycle. This suggests that the Poisson distribution can only be used to represent the default
process if, as suggested by CreditRisk+, we make the additional assumption that the mean default
rate is itself stochastic.

In the event of default by an obligor, the counterparty incurs a loss that is equal to the amount
owed by the obligor (its exposure, i.e. the marked-to-market value, if positive – and zero, if
negative – at the time of default) less a recovery amount. In CreditRisk+, the exposure for each
obligor is adjusted by the anticipated recovery rate in order to calculate the ‘loss given default’.
These adjusted exposures are calculated outside the model (exogenous), rather than determined
by it, and are independent of market risk and downgrade risk.

In order to derive the loss distribution for a well-diversified portfolio, the losses (exposures, net
of the recovery adjustments) are divided into bands. The level of exposure in each band is
approximated by means of a single number.

Example III.B.5.5:

Suppose the bank holds a portfolio of loans and bonds from 500 different obligors, with
exposures between $50,000 and $1 million.
Notation
Obligor                        A
Exposure                       LA
Probability of default         PA
Expected loss                  λA = LA × PA

In Table III.B.5.11 we show the exposures for the first six obligors.

Table III.B.5.11: Exposure per obligor

Obligor      Exposure ($)                  Exposure           Round-off exposure       Band
A            (loss given default): LA      (in $100,000)      (in $100,000): νj        j

1 150,000 1.5 2 2
2 460,000 4.6 5 5
3 435,000 4.35 4 4
4 370,000 3.7 4 4
5 190,000 1.9 2 2
6 480,000 4.8 5 5


The unit of exposure is assumed to be $100,000. Each band j, j = 1, …, m, with m = 10, has an
average common exposure ν_j = $100,000 × j. In CreditRisk+, each band is viewed as an
independent portfolio of loans/bonds, for which we introduce the following notation:

Notation
Common exposure in band j, in units of exposure: ν_j
Expected loss in band j, in units of exposure: ε_j
Expected number of defaults in band j: µ_j

Denote by ε_A the expected loss for obligor A in units of exposure, i.e. ε_A = λ_A/L, where in this
case L = $100,000. Then ε_j, the expected loss over a one-year period in band j, expressed in units
of exposure, is simply the sum of the expected losses ε_A of all the obligors that belong to band j.
But since by definition ε_j = ν_j µ_j, the expected number of defaults per annum in band j may now
be calculated as µ_j = ε_j/ν_j. Table III.B.5.12 provides an illustration of the results of these
calculations.

Table III.B.5.12: Expected number of defaults per annum in each band

Band j    Number of obligors    ε_j     µ_j
1         30                    1.5     1.5
2         40                    8       4
3         50                    6       2
4         70                    25.2    6.3
5         100                   35      7
6         60                    14.4    2.4
7         50                    38.5    5.5
8         40                    19.2    2.4
9         40                    25.2    2.8
10        20                    4       0.4
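The banding calculation itself is mechanical and can be sketched in a few lines of Python. The sketch below uses invented obligor data (not the portfolio behind Table III.B.5.12): exposures are expressed in units of L = $100,000, rounded to the nearest whole unit to define the band, and µ_j is recovered as ε_j/ν_j.

from collections import defaultdict

L = 100_000  # unit of exposure, $

# Hypothetical obligors: (exposure in $, one-year probability of default)
obligors = [(150_000, 0.02), (460_000, 0.01), (435_000, 0.03),
            (370_000, 0.01), (190_000, 0.05), (480_000, 0.02)]

eps = defaultdict(float)               # expected loss per band, in units of L
for exposure, pd in obligors:
    nu_A = exposure / L                # obligor exposure in units of L
    band = int(nu_A + 0.5)             # round-off exposure, i.e. the band nu_j
    eps[band] += nu_A * pd             # accumulate eps_A = lambda_A / L into eps_j

for band in sorted(eps):
    mu_j = eps[band] / band            # expected number of defaults in band j
    print(f"band {band}: eps_j = {eps[band]:.3f}, mu_j = {mu_j:.3f}")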

To derive the distribution of losses for the entire portfolio, we follow the three steps outlined
below.

Step 1: Probability generating function for each band


Each band is viewed as a separate portfolio of exposures. The probability generating function for
any band, say band j, is by definition:


G_j(z) = Σ_{n=0}^{∞} Pr(loss = nL) z^n = Σ_{n=0}^{∞} Pr(n defaults) z^{n·ν_j},

where the losses are expressed in the unit of exposure. Since we have assumed that the number
of defaults follows a Poisson distribution, we have:
G_j(z) = Σ_{n=0}^{∞} (e^{−µ_j} µ_j^n / n!) z^{n·ν_j} = exp(−µ_j + µ_j z^{ν_j}).

Step 2: Probability generating function for the entire portfolio


Since we have assumed that each band is a portfolio of exposures that is independent of the other
bands, the probability generating function for the entire portfolio is simply the product of the
probability generating function for each band:
G(z) = Π_{j=1}^{m} exp(−µ_j + µ_j z^{ν_j}) = exp( −Σ_{j=1}^{m} µ_j + Σ_{j=1}^{m} µ_j z^{ν_j} ),    (III.B.5.14)

where µ = Σ_{j=1}^{m} µ_j denotes the expected number of defaults for the entire portfolio.

Step 3: Loss distribution for the entire portfolio


Given the probability generating function (III.B.5.14), it is straightforward to derive the loss
distribution, since

Pr(loss of nL) = (1/n!) · [d^n G(z)/dz^n]_{z=0}    for n = 1, 2, … .
These probabilities can be expressed in closed form and depend only on ε_j and ν_j.22

Credit VaR is then easily derived from the above loss distribution by first computing the
percentile corresponding to the confidence level, and then subtracting the expected loss from this
number.
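To make Step 3 concrete, note that differentiating (III.B.5.14) gives a simple recurrence for A_n = Pr(loss of nL): A_0 = e^{−µ} and n·A_n = Σ_j ε_j A_{n−ν_j}, the sum running over bands with ν_j ≤ n. The Python sketch below implements this recurrence and then reads off credit VaR as the chosen percentile less the expected loss; the two-band portfolio is a placeholder, not the portfolio of Table III.B.5.12.

import math

def creditriskplus_loss_distribution(bands, n_max):
    # bands: list of (nu_j, eps_j) pairs -- common exposure and expected loss
    # of band j, both in units of L.  Returns [Pr(loss = 0), ..., Pr(loss = n_max*L)]
    # using A_0 = exp(-mu) and n*A_n = sum_j eps_j * A_{n - nu_j}.
    mu = sum(eps / nu for nu, eps in bands)          # total expected number of defaults
    A = [0.0] * (n_max + 1)
    A[0] = math.exp(-mu)
    for n in range(1, n_max + 1):
        A[n] = sum(eps * A[n - nu] for nu, eps in bands if nu <= n) / n
    return A

def credit_var(loss_dist, confidence=0.999):
    # Percentile of the loss distribution (in units of L) minus the expected loss.
    expected_loss = sum(n * p for n, p in enumerate(loss_dist))
    cumulative = 0.0
    for n, p in enumerate(loss_dist):
        cumulative += p
        if cumulative >= confidence:
            return n - expected_loss
    return len(loss_dist) - 1 - expected_loss        # truncation fallback

# Hypothetical two-band portfolio: nu_j = 1 and 2, eps_j = 0.8 and 1.2
bands = [(1, 0.8), (2, 1.2)]
dist = creditriskplus_loss_distribution(bands, n_max=50)
print(sum(dist))                                     # close to 1 (sanity check)
print(credit_var(dist, confidence=0.999))            # 99.9% credit VaR in units of L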

CreditRisk+ proposes several extensions of the basic one-period, one-factor model. First, the
model can be easily extended to a multi-period framework. Second, the variability of default
rates can be assumed to result from a number of ‘background’ factors, each representing a sector
of activity. Each factor k is represented by a random variable Xk which is the number of defaults
in sector k, and which is assumed to be gamma distributed. The mean default rate for each
obligor is then supposed to be a linear function of the background factors, Xk. These factors are

22 See Credit Suisse (1997), p.26.


further assumed to be independent. In all cases, CreditRisk+ derives a closed-form solution for
the loss distribution of a bond/loan portfolio.

CreditRisk+ has the advantage that it is relatively easy to implement. First, as we mentioned
above, closed-form expressions can be derived for the probability of portfolio bond/loan losses,
and this makes CreditRisk+ very attractive from a computational point of view. In addition,
marginal risk contributions by obligor can be easily computed. Second, CreditRisk+ focuses on
default, and therefore it requires relatively few estimates and ‘inputs’. For each instrument, only
the probability of default and the exposure are required.

Its principal limitation is the same as for the CreditMetrics and KMV approaches: the
methodology assumes that credit risk has no relationship with the level of market risk. In
addition, CreditRisk+ ignores what might be called ‘migration risk’; the exposure for each obligor
is fixed and is not sensitive to possible future changes in the credit quality of the issuer, or to the
variability of future interest rates. Even in its most general form, where the probability of default
depends upon several stochastic background factors, the credit exposures are taken to be
constant and are not related to changes in these factors. Finally, like the CreditMetrics and KMV
approaches, CreditRisk+ is not able to cope satisfactorily with non-linear products such as
options and foreign currency swaps.

III.B.5.8 Summary and Conclusion


In this chapter we have presented the key features of some of the more prominent new models
of credit risk measurement in a portfolio context. At first sight, these approaches appear to be
very different and likely to produce considerably different loan loss exposures and VaR figures.
However, analytically and empirically, these models are not as different as they may first appear.
Indeed, similar arguments stressing the structural similarities have been made by several authors
such as Gordy (2000) and Koyluoglu and Hickman (1999).

Comparative studies such as IIF/ISDA (2000) have stressed the sensitivity of the risk measures
(expected and unexpected losses) to the key risk drivers, i.e. probabilities of default, loss given
default and default correlations. This study also showed that when these models are run using
consistent parameters, they produce results which fall in quite a narrow range.

References
Altman, E I, and Kishore, V (1996) Almost everything you wanted to know about recoveries on
defaulted bonds. Financial Analysts Journal, 52(6), pp. 57–64.


Altman, E I, and Kishore, V (1998) Defaults and returns on high yield bonds: analysis through
1997. Working Paper S-98-1, New York University Salomon Center.

Black, F, and Scholes, M (1973) The pricing of options and corporate liabilities. Journal of Political
Economy, 81, pp. 637–654.

Carty, L, and Lieberman, D (1996) Defaulted Bank Loan Recoveries. Moody’s Investors Service,
Global Credit Research, Special Report.

Credit Suisse (1997) CreditRisk+: A Credit Risk Management Framework. New York: Credit Suisse
Financial Products.

Crouhy, M, Galai, D, and Mark, R (2001) Risk Management. New York: McGraw-Hill.

Elton, E J, and Gruber, M J (1995) Modern Portfolio Theory and Investment Analysis. New York:
Wiley.

Fishman, G (1997) Monte Carlo: Concepts, Algorithms, and Applications. Springer Series in
Operations Research. New York: Springer.

Galai, D, and Masulis, R W (1976) The option pricing model and the risk factor of stocks. Journal
of Financial Economics, 3 (January/March), pp. 53–82

Glasserman, P, Heidelberger, P, and Shahabuddin, P (2000) Variance reduction techniques for
estimating value-at-risk. Management Science, 46, pp. 1349–1364.

Gordy, M (2000) A comparative anatomy of credit risk models. Journal of Banking and Finance, 1,
pp. 119–149.

IIF/ISDA (2000) Modeling credit risk: Joint IIF/ISDA testing program. February.

J.P. Morgan (1997) CreditMetrics. Technical Document.

Koyluoglu, H U, and Hickman, A (1998) A generalized framework for credit risk portfolio
models. Working Paper. New York: Oliver, Wyman & Co., July.

Lucas, D (1995) Default correlation and credit analysis. Journal of Fixed Income, 4(4), pp. 76–87.

Merton, R C (1974) On the pricing of corporate debt: the risk structure of interest rates. Journal of
Finance, 28, pp. 449–470.

Rudd, A, and Clasing Jr, H K (1988) Modern Portfolio Theory: The Principles of Investment Management.
Orinda, CA: Andrew Rudd.

Vasicek, O (1997) Credit valuation, Net Exposure, 1(1).

Wilson, T (1997a) Portfolio credit risk I, Risk, 10(9), pp. 111-117.

Wilson, T (1997b) Portfolio credit risk II, Risk, 10(10), pp. 56-61.


III.B.6 Credit Risk Capital Calculation


Dan Rosen1

III.B.6.1 Introduction
As discussed in Chapter III.0, the primary role of capital in a bank, apart from the transfer of
ownership, is to act as a buffer to absorb large unexpected losses, protect depositors and other
claim holders and provide confidence to external investors and rating agencies on the financial
health of the firm. In contrast, regulatory capital refers to the minimum capital requirements
which banks are required to hold, based on regulations established by the banking supervisory
authorities. From the perspective of the regulator, the objectives of capital adequacy requirements
are to safeguard the security of the banking system and to ensure its ongoing viability, and to
create a level playing field for internationally active banks.

While the role of economic capital (EC), in general, is to act as a buffer against all the risks that
may force a bank into insolvency, economic credit capital (ECC) can be viewed as a buffer
against those risks specifically associated with obligor credit events such as default, credit
downgrades and credit spread changes. In this chapter, we review the main concepts for
estimating and allocating ECC as well as the current regulatory framework for credit capital.
Chapter III.0 presents the chronology and describes the basic principles behind regulatory capital
and gives an overview of the Basel Accord, the framework created by the Basel Committee on
Banking Supervision (BCBS), which is now the basis for banking regulation around the globe.
We focus in this chapter on the main principles for computing minimum regulatory credit capital
requirements under the current Basel I Accord as well as under the current proposal for the new
Basel accord, Basel II.

In this chapter you will learn:


x how credit portfolio models must be defined and parameterised consistently to measure
ECC from a bottom-up approach;
x the basic rules for computing minimum credit capital under the Basel I Accord;
x the rules for computing minimum capital requirement for credit risk in the Basel II
Accord (Pillar I) – we cover various types of exposure and comment on the special
credit capital considerations under Pillar II;

1 Algorithmics Inc.


x the Basel II treatment for internal ratings systems and probability of default estimation,
as well as the minimum standards for credit monitoring processes and validation
methods.

Finally, Section III.B.6.7 covers some advanced topics: the application of risk contribution
methodologies to ECC allocation, as well as the shortcomings of value-at-risk (VaR) for ECC and
coherent risk measures.

III.B.6.2 Economic Credit Capital Calculation


ECC acts as a buffer that provides protection against the credit risk (potential credit losses) faced
by an institution. Traditionally, capital is designed to absorb unexpected losses up to a certain
confidence level, while credit reserves are set aside to absorb expected losses. As explained in Chapter
III.0, the methodologies to estimate EC at the firm level can be generally classified into top-down
or bottom-up approaches. In general, top-down approaches do not readily allow the decomposition
of capital into its various constituents, such as market risk, credit risk and operational risk.
Bottom-up approaches provide risk-sensitive measures of capital, which allow us to understand
and manage these risks better.

In this section, we discuss how credit portfolio models are applied to measure ECC from a
bottom-up approach. The credit portfolio models that were described in Chapter III.B.5 provide
the statistical distributions of potential credit losses of a portfolio over a given horizon. Hence,
they offer the key tools to estimate the required ECC from a bottom-up approach. Chapter
III.B.5 covers the fundamentals of credit portfolio models, as well as the main models used in
industry.

III.B.6.2.1 Economic Capital and the Credit Portfolio Model


A credit portfolio model must be defined and parameterised consistently with the definition of
economic capital used by the firm. In particular, when devising an economic capital framework,
we must explicitly consider the time horizon or holding period, the definition of credit losses, and
the quantile (and definition of ‘unexpected’ losses) covered by capital. The definitions of these
three parameters are tightly linked to the actual definition of capital used by the firm.

III.B.6.2.1.1 Time Horizon


While trading activities tend to involve short time horizons (a few hours to a few days), credit
activities generally involve longer horizons (a few months to a few years). Operational risk and
insurance activities generally involve even longer horizons, spanning several years. It is common


practice for credit VaR measurement to assume a one-year horizon, although longer periods are
sometimes employed.2 Several reasons are cited for a one-year horizon, among them the
following:
x It accords with the firm’s main accounting cycle.
x It is a reasonable period over which the firm will typically be able to renew any capital
depleted through losses.
x It coincides with a reasonable period over which actions can be taken to mitigate losses
for various credit assets.
x Credit reviews are usually performed annually.
x The borrower might be updating its financial information only on an annual basis.

III.B.6.2.1.2 Credit Loss Definition


Accounting rules, regulatory guidance and management policy combine to determine when a loss
will be recognised. Loss recognition is straightforward in some instances, for example, for trading
securities that are marked-to-market regularly or operational losses that are recognised when they
occur. In these instances, there is usually little question as to the function of economic capital.
Management has more leeway in other instances. With regards to credit exposures, management
may choose to measure economic capital against default events only (default mode) or against both
default and credit migration events (mark-to-market mode):

(i) Default-only credit losses. In this case we are only concerned with whether a loan (or other instrument)
would be repaid or not. This type of measurement is currently the most prevalent, typically used
for traditional (accrual accounted, hold-to-maturity) banking book activities. Credit portfolio
models require several building blocks, or components, at the enterprise level. These include the
following:
x Estimates of the probabilities of default (PD), loss given default (LGD), and exposures
at default (EAD) for individual corporate, banking and sovereign exposures; similarly,
such estimates are required for homogeneous buckets of retail or small- and medium-
sized enterprise (SME) exposures.
x A portfolio model with specific assumptions about the co-dependence of
o credit events (e.g. asset values or PD correlations)
o exposures, LGDs and credit events.

2 Note that the optimal time horizons for given portfolios might vary. However, when applied on an enterprise basis, it
is important to set a common methodology to estimate the desired capital. Also, while it is most common to apply
credit portfolio models in a single-step setting, the multi-step applications are becoming increasingly popular. Finally,
the choices of time horizon and credit loss definition can be tightly linked.


(ii) Mark-to-market credit losses. In this case, the models used are closer to market risk models,
capturing the losses due to defaults and the changes in mark-to-market due to deterioration in
credit quality (or credit migration). Therefore, in addition to information needed for default-only
models (PDs, EADs, LGDs and correlations), they require
x market pricing information (e.g. credit spreads) and
x more general information on the instruments structures for marking to market.

Mark-to-market models are more commonly used for portfolios which are not held to maturity,
or for which reliable pricing information might be available, such as bond and credit derivatives
portfolios.

The choice of loss definition and time horizon is tightly linked. For example, consider credit risk
in lending activities. If capital is held against defaults only, it makes sense that the horizon should
be the entire life of the loan. If capital is held against deterioration in value, then some time
period would be selected, independent of the life of the underlying loans (e.g. one year). Both
choices allow for longer-maturity instruments to carry higher capital charges.

III.B.6.2.1.3 Quantile of the Loss Distribution


Credit VaR should capture the ECC that shareholders should invest, in order to limit the
probability of default of the firm over a given horizon. This involves defining a high,
predetermined quantile of the credit loss distribution so that shareholders will be ‘highly
confident’ that credit losses will not exceed this amount. The quantile, which defines the
‘confidence interval’, is chosen to provide the right level of protection to debt holders and hence
achieve a desired rating, as well as providing the necessary confidence to other claim holders,
such as depositors.3 The horizon and quantile are thus key policy parameters set by senior
management and the board of directors. For example, a bank that wishes to be consistent with
an AA rating from an external credit agency might choose a 99.97% confidence level over a one-
year horizon, since the one-year default probability of an AA-rated firm is 0.03%.

III.B.6.2.2 Expected and Unexpected Losses


In its most common definition, economic capital is designed to absorb only unexpected losses (UL)
up to a certain confidence level. Credit reserves are traditionally set aside to absorb expected losses
(EL) during the life of a transaction. Thus, it is common practice to estimate economic capital as
the loss at the chosen confidence level minus the estimated EL. The key rationale for
subtracting EL is that credit products are already priced such that net interest margins less non-

3 But note there can be a trade-off between achieving this objective and providing high returns on capital for
shareholders.


interest expenses are at least enough to cover estimated EL (and must also cover a desired return
to capital). Therefore, in this case, the credit VaR measure that is relevant to estimating EC
covers only unexpected losses:
Credit VaR_α(L) = Q_α(L) − EL,    (III.B.6.1)

where Q_α(L) denotes the quantile of the portfolio loss distribution at the horizon corresponding
to the confidence level α (e.g. a 0.03% tail probability, i.e. 99.97% confidence).

In Chapter III.0 we show that subtracting ‘EL’ as given by the expected end-of-period value
from the worst-case losses represents a simplifying approximation to estimate EC. This is the
approach commonly taken by practitioners and generally leads to conservative estimates.4 More
precisely, the credit VaR measure appropriate for EC should in fact measure loss relative to the
portfolio’s initial mark-to-market (MtM) value and not relative to the EL in its end-of-period
distribution.5 This highlights the importance for banks of moving towards full MtM portfolio
credit risk models and of establishing accurate estimates of MtM values of credit portfolios. Also,
the credit VaR measure normally ignores the interest payments that must be made on the funding
debt. These payments must be added explicitly to the EC measure. The estimation of the interest
compensation calculation and the credit MtM value of the portfolio generally require the use of
an asset pricing model.
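To illustrate equation (III.B.6.1), the sketch below estimates ECC as a high quantile of a simulated one-year loss distribution minus the expected loss. The portfolio is deliberately oversimplified (independent defaults, fixed LGD and EAD), so it ignores the default correlations that drive real credit portfolios; all parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical homogeneous portfolio: 1000 obligors, unit EAD, PD 1%, LGD 45%
n_obligors, pd, lgd, ead = 1000, 0.01, 0.45, 1.0
n_scenarios = 200_000

# With independent, identical obligors the number of defaults is binomial
n_defaults = rng.binomial(n_obligors, pd, size=n_scenarios)
losses = n_defaults * ead * lgd                  # portfolio loss in each scenario

alpha = 0.9997                                   # e.g. consistent with an AA target rating
q_alpha = np.quantile(losses, alpha)             # loss quantile at the horizon
expected_loss = losses.mean()
economic_capital = q_alpha - expected_loss       # credit VaR_alpha = Q_alpha - EL
print(q_alpha, expected_loss, economic_capital)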

III.B.6.2.3 Enterprise Credit Capital and Risk Aggregation


A large firm, such as a bank, acquires credit risk through various businesses and activities,
including retail banking, commercial lending, bonds, derivatives and credit derivatives trading.
Such an institution is likely to have one methodology for its larger commercial loans and another
for its retail credits. In general, a bank may have any number of methodologies for various credit
segments. If each area is modelled separately, then the amounts of ECC estimated for each area
need to be combined. In making the combination, the firm needs to incorporate, either implicitly
or explicitly, various correlation assumptions. The choices of aggregation can be divided into
three main categories:
x Sum of stand-alone capital for each business unit or portfolio. This methodology
essentially assumes perfect correlation across business lines and does not allow for
diversification from them. It is consistent, in general, with one-factor portfolio models
(such as the Basel II underlying portfolio model).

4 For a detailed discussion, see Kupiec (2002).


5 Capital allocation credit VaR measures in this context can be negative.


x Ad hoc cross-business correlation. In order to allow for some cross-business


diversification, a firm might aggregate the individual stand-alone capital estimates using
analytical models and simple cross-business (asset) correlation estimates.
x Full enterprise credit portfolio modelling. Current multi-factor credit models and
technology allow the computation of credit loss distributions of large enterprise
portfolios. This can be accomplished through semi-analytical methods (e.g. fast Fourier
transforms, saddlepoint methods), or full-blown Monte Carlo simulation. Various
financial institutions today are starting to apply either in-house or commercial credit
portfolio models to compute ECC at the enterprise level.

III.B.6.3 Regulatory Credit Capital: Basel I


In this section you will learn the key principles of the Basel I Accord for computing minimum
capital requirements for credit risk. We further discuss some of its shortcomings and the
motivation for regulatory arbitrage.

III.B.6.3.1 Minimum Credit Capital Requirements under Basel I


The 1988 Basel I Accord focused mainly on credit risk, establishing minimum capital standards
that linked capital requirements to the credit exposures of banks (Basel Committee on Banking
Supervision 1988). Prior to its implementation in 1992, bank capital was regulated through
simple, ad hoc capital standards. While generally prescriptive, Basel I left various choices to be
made by local regulators, thus resulting in several variations of the implementation across
jurisdictions. The calculation of regulatory credit capital requirements has three steps: converting
exposures to credit-equivalent assets, computing risk-weighted assets, and applying the capital
adequacy ratio.

Step 1: Credit-Equivalent Assets


The objective in this step is to express all on-balance sheet and off-balance sheet credit exposures
in comparable numbers. This is achieved by converting off-balance sheet exposures into
equivalent credit assets, which better reflect the amounts that could be lost if a transaction were
to default. The general rules for this are as follows:

For contingent banking book assets, such as letters of credit, etc., asset equivalents are obtained by
multiplying the exposure by a percentage, which broadly reflects the likelihood of its conversion
into an actual exposure. For example, the conversion factor of an undrawn standby letter of
credit is 50%.


For derivatives in the trading book,6 asset equivalents are given by the instrument’s total exposure,
obtained through the so-called method of add-ons:
Total Exposure = Actual Exposure + Potential Exposure (III.B.6.2)
Actual Exposure = max(0, Mark-to-Market)
Potential Exposure = Notional × Counterparty Add-on

The potential exposure attempts to capture the change in value of derivatives, resulting from
market fluctuations, which lead to higher credit losses should the counterparty default.
Counterparty add-ons to measure potential exposures are given prescriptively, for example as in
Table III.B.6.1.

Table III.B.6.1: Add-ons for derivatives exposures in Basel I


Residual maturity             Interest rate   Exchange rate   Equity    Precious metals   Other
                                              and gold                  except gold       commodities
One year or less              0.00%           1.00%           6.00%     7.00%             10.00%
Over one year to five years   0.50%           5.00%           8.00%     7.00%             12.00%
Over five years               1.50%           7.50%           10.00%    8.00%             15.00%

Thus, for example,


x a three-year foreign exchange forward contract carries a total exposure given by its
current mark-to-market plus 5% of its notional;
x an interest-rate swap with remaining nine-month maturity is deemed to carry a 0% add-
on, and hence its total exposure is given by its current mark-to-market.

Basel I allows also for partial recognition of mitigation techniques such as netting when the
proper agreements are in place. In this case, actual exposure is computed as the netted mark-to-
market values of all transactions, and the total add-on is adjusted as follows:
Netted Add-on = Gross Add-on × (0.4 + 0.6 × NGR Ratio), (III.B.6.3)

where NGR denotes the net-to-gross ratio, that is, the netted mark-to-market of the transactions
with a counterparty divided by the gross mark-to-market (the sum of all the positive mark-to-
market transaction values).

6 This was originally defined in BCBS (1995).


Example III.B.6.1
Consider a counterparty with three derivatives transactions as shown in Table III.B.6.2. With a
gross MtM of $500m and a netted MtM of $200m, the NGR is 0.4. Netting in this case reduces
the credit equivalent for this portfolio by (535 – 222.4)/535 = 58%.

Table III.B.6.2: Example – credit equivalents in the trading book


Transaction   Instrument   Residual maturity   Notional   MtM     Add-on (%)   Add-on   Credit equivalent
1             IR swap      3 years             1000       500     0.50         5        505
2             FX forward   1.5 years           500        -100    5            25       25
3             FX option    5 months            500        -200    1            5        5
Gross                                                     500                  35       535
Netted                                                    200                  22.4     222.4
NGR ratio                                                 0.4
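The figures in Table III.B.6.2 can be reproduced directly from equations (III.B.6.2) and (III.B.6.3); the short Python sketch below does so (amounts in $m):

# Transactions with one counterparty: (notional, mark-to-market, add-on %), in $m
trades = [(1000, 500, 0.005),    # IR swap, 3 years
          (500, -100, 0.05),     # FX forward, 1.5 years
          (500, -200, 0.01)]     # FX option, 5 months

gross_mtm = sum(max(mtm, 0.0) for _, mtm, _ in trades)          # 500: sum of positive MtMs
netted_mtm = max(sum(mtm for _, mtm, _ in trades), 0.0)         # 200: netted MtM, floored at zero
gross_addon = sum(notional * a for notional, _, a in trades)    # 35

ngr = netted_mtm / gross_mtm                                    # net-to-gross ratio = 0.4
netted_addon = gross_addon * (0.4 + 0.6 * ngr)                  # 22.4, equation (III.B.6.3)

print(gross_mtm + gross_addon)                                  # 535: gross credit equivalent
print(netted_mtm + netted_addon)                                # 222.4: netted credit equivalent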

Table III.B.6.3: Basel I risk weights (examples)

0%                 Cash
                   Claims on central governments and central banks denominated in national currency

0, 10, 20 or 50%   Claims on domestic public-sector entities, excluding central government, and loans
                   guaranteed by securities issued by such entities

20%                Claims on multilateral development banks
                   Claims on banks incorporated in the OECD and loans guaranteed by OECD incorporated banks

50%                Loans fully secured by mortgage on residential property

100%               Claims on the private sector
                   Claims on banks incorporated outside the OECD with a residual maturity of over one year


Step 2: Risk-Weighted Assets


Risk-weighted assets (RWAs) are obtained by multiplying the exposures (or credit equivalents) by
a risk weight. Risk weights broadly attempt to reflect the credit riskiness of the asset. Example
risk weights are given in Table III.B.6.3. Thus, a loan to a corporate, regardless of its credit rating,
carries a 100% risk weight, while a credit exposure to an OECD government has a 0% weight.

Step 3: Capital Adequacy Ratio – Minimum Capital Requirement


The minimum capital requirements are obtained by multiplying the sum of all the RWAs by the
capital adequacy ratio of 8% (also referred to as the Cook ratio):
Capital = ( Σ_k RWA_k ) × 8%.    (III.B.6.4)

Example III.B.6.2
Following on from Example III.B.6.1, assume that the counterparty is a UK bank. Then the risk
weight is 20%, which leads to minimum capital requirements
Min capital requirements = $222.4m × 0.2 × 0.08 = $3.56m.

The reduction of regulatory capital from the application of netting for this portfolio is also 58%
(as with the credit equivalent).
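Continuing the example as a short Python sketch (amounts in $m):

credit_equivalent = 222.4     # netted credit equivalent from Example III.B.6.1, $m
risk_weight = 0.20            # OECD-incorporated bank under Basel I
rwa = credit_equivalent * risk_weight
print(rwa * 0.08)             # minimum capital: about 3.56 ($m), using the 8% Cook ratio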

III.B.6.3.2 Weaknesses of the Basel I Accord for Credit Risk


As mentioned in Chapter III.0, a great strength of Basel I is the simplicity of the framework,
which allowed it to be implemented in countries with different banking and accounting practices.
Its simplicity also has been its major weakness, as the accord does not effectively align regulatory
capital requirements closely with an institution’s risk. Some criticisms with regard to credit risk
include the following:
x Lack of credit quality differentiation. All corporate credits elicit a risk weight of 100%
regardless of their credit quality. Furthermore, high-rated corporate exposures may carry
much higher capital than low-rated sovereign exposures. For example, a loan to an AAA-
rated corporate such as GE carries five times more capital than a loan to a bank in Korea
or Turkey (20% risk weight). This indirectly provides incentives for banks to take bad
credits and avoid high-rated credits (with lower returns).
x Lack of proper maturity differentiation. For example, revolving credit exposures with a term
of less than one year do not get a regulatory capital charge. Similarly, a facility of 366
days bears the same capital charge as a long-term facility. This has led to very simple
regulatory arbitrage through the systematic creation of rolling 364-day facilities.


x Insufficient incentives for credit mitigations techniques. The accord does not fully recognise the
risk reduction achieved through credit mitigation techniques such as netting, collateral,
guarantees and credit derivatives.
x Lack of recognition of portfolio effects in credit risk. The accord does not provide any capital
benefits for diversification across assets and businesses.

III.B.6.3.3 Regulatory Arbitrage


The lack of differentiation in the accord, together with the financial engineering advances in
credit risk over the last decade, has led to the widespread development of regulatory capital
arbitrage. This refers to the process by which regulatory capital is reduced through instruments
such as credit derivatives or securitisation, without an equivalent reduction of the actual risk
being taken. Through regulatory arbitrage instruments, for example, banks typically transfer low-
risk exposures from their banking book to their trading book, or simply place them outside the
regulated banking system.

III.B.6.4 Regulatory Credit Capital: Basel II


We introduce the latest proposals of the current Basel II Accord for credit risk capital. In this
section you will learn about the Pillar I rules for computing minimum capital requirement for
credit capital in the Basel II Accord. We cover the standardised and internal ratings based
approaches for different types of exposures and conclude with a brief comment on the special
credit capital considerations under Pillar II of the new Accord.

III.B.6.4.1 Latest Proposal for Minimum Credit Capital requirements


In 1999, the BCBS issued a proposal for a new capital adequacy framework (Basel II or BIS II
Accord). The third consultative paper (CP3) on the new accord was released in April 2003
(BCBS, 2003). The final version of the accord was published in June 2004 (BCBS, 2004).7 The
implementation of the accord will take effect between the end of 2006 and the end of 2007.

Basel II attempts to improve the capital adequacy framework along two important dimensions:
x First, the development of a capital regulation that encompasses not only minimum
capital requirements but also supervisory review and market discipline.
x Second, a substantial increase in the risk sensitivity of the minimum capital requirements.

7 A small number of open issues are still to be resolved during 2004.


The reader is referred to Chapter III.0 for a general discussion on the Basel II Accord. In this
section we summarise the key principles and formulae for the computation of minimum capital
requirements for credit risk under Pillar I of the new accord. For greater detail, the reader is
referred to the BCBS papers which can be found at www.bis.org .

As with Basel I, minimum capital requirements consist of three components:


1. definition of capital (no major changes from Basel I);
2. definition of RWA;
3. minimum ratio of capital/RWA (remains 8%).

Basel II proposes substantive changes to the treatment of RWAs for credit risk relative to Basel I.
It moves away from a one-size-fits-all approach through the introduction of three distinct
options for the calculation of credit risk. These approaches present increasing complexity and
risk-sensitivity. Banks and supervisors can thus select the approaches that are most appropriate to
the stage of development of banks’ operations and of the financial market infrastructure.

Similar to Basel I, total minimum capital requirements are obtained by multiplying the risk-
weighted assets by the capital adequacy ratio of 8%, as in equation (III.B.6.4):8

Capital = ( Σ_k RWA_k ) × 8%.

The calculation of RWAs can be done through two types of approach: the standardised approach
and the internal ratings based (IRB) approach. The IRB approach has two variants, called foundation
and advanced IRB.

III.B.6.4.2 The Standardised Approach in Basel II


This approach is similar to Basel I in that Basel II requires banks to slot their credit exposures
into supervisory categories based on observable characteristics of the exposures (e.g. whether it is
a corporate loan or a residential mortgage loan), and then establishes fixed risk weights
corresponding to each supervisory category. Important differences from Basel I include the
following:

x Use of external ratings. The standardised approach allows the use of external credit
assessments to enhance risk sensitivity. The risk weights for sovereign, interbank, and

8 In addition, in (BCBS, 2004) the committee introduced a scaling factor. Where aggregate capital is lower than under
Basel I, the new requirements must be scaled by a factor, currently estimated at 1.06, such that overall capital levels do
not fall.


corporate exposures are differentiated based on external credit assessments. For


sovereign exposures, these credit assessments may include those developed by OECD
export credit agencies or private rating agencies. The use of external ratings for
corporate exposures is optional. Where no external rating is applied to an exposure, the
approach mandates that in most cases a risk weighting of 100% be used (as in Basel I).
In such instances, supervisors are to ensure that the capital requirement is adequate
given the default experience of the exposure type. For example, for claims on
corporates the risk weights are given in Table III.B.6.4.

Table III.B.6.4: Standardised risk weights for corporate exposures


Credit assessment   AAA to AA–   A+ to A–   BBB+ to BB–   Below BB–   Unrated
Risk weight         20%          50%        100%          150%        100%

x Loans past-due. A loan considered past-due requires a risk weight of 150%, unless a
threshold amount of specific provisions has already been set aside against the loan.

x Credit mitigants. The approach recognises an expanded range of credit risk mitigants:
collateral, guarantees, and credit derivatives. The approach expands the range of eligible
collateral beyond OECD sovereign issues to include most types of financial
instruments. It also sets several approaches for assessing the degree of capital reduction
based on the market risk of the collateral instrument. Finally, it expands the range of
recognised guarantors to include all firms that meet a threshold external credit rating.

x Retail exposures. The risk weights for residential mortgage exposures are reduced relative
to Basel I, as are those for other retail exposures, which receive a lower risk weight than
that for unrated corporate exposures. In addition, some loans to SMEs may be included
within the retail treatment, subject to meeting various criteria.

Through several options, the standardised approach attempts to improve the risk sensitivity of
the RWAs. Basel II also provides a ‘simplified standardised approach’, where circumstances may
not warrant a broad range of options. Banks under the simplified methods must also comply
with the supervisory review and market discipline requirements of Basel II.

Example III.B.6.3
Based on Table III.B.6.4, a loan to a corporate obligor, with a AA rating from an external agency,
would have a capital requirement of one-fifth under the Basel II standardised approach


compared to Basel I (a risk weight of 20% versus 100%). Similarly, a loan to an A-rated obligor
would see its capital requirement going to one-half (50% weight).
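As a small illustration of the standardised calculation for corporate exposures, the sketch below applies the risk weights of Table III.B.6.4 and the 8% ratio; the exposure amounts and bucket labels are illustrative only.

# Basel II standardised risk weights for corporate exposures (Table III.B.6.4)
corporate_risk_weight = {
    "AAA to AA-": 0.20,
    "A+ to A-": 0.50,
    "BBB+ to BB-": 1.00,
    "Below BB-": 1.50,
    "Unrated": 1.00,
}

def standardised_capital(exposure, rating_bucket, ratio=0.08):
    # Minimum capital = exposure x risk weight x 8%
    return exposure * corporate_risk_weight[rating_bucket] * ratio

print(standardised_capital(100.0, "AAA to AA-"))   # 1.6, versus 8.0 under Basel I
print(standardised_capital(100.0, "A+ to A-"))     # 4.0, i.e. half the Basel I charge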

III.B.6.4.3 Internal Ratings Based Approaches: Introduction


In the IRB approaches, banks’ internal assessments of key risk drivers serve as primary inputs to
the capital calculation, leading to more risk-sensitive capital requirements. This is a substantial
difference from both Basel I and the standardised approach. However, the IRB approach does
not fully allow for internal portfolio models to calculate capital requirements. Instead, the risk
weights (and thus the capital charges) are determined through the combination of quantitative
inputs provided by banks and formulae specified by the accord. These formulae, or risk-weight
functions, are based on a simple credit portfolio model, and thus align more closely with modern
risk management techniques.

The IRB approach includes two variants: a foundation approach and an advanced approach. The
IRB approaches cover a wide range of portfolios, with the mechanics of the calculation varying
somewhat across exposure types. In the remainder of this section we
x present the key inputs and principles behind the regulatory risk-weight formulae, and
x highlight the differences between the foundation and advanced IRB approaches by
portfolio, where applicable.

We present these concepts first for wholesale exposures (corporate, bank and sovereigns); then
we briefly cover retail exposures and SMEs, as well as specialised lending and equity exposures.9

III.B.6.4.4 IRB for Corporate, Bank and Sovereign Exposures


The IRB calculation of RWAs relies on four quantitative inputs, referred to as the risk components:
x Probability of default (PD): the likelihood that the borrower will default over one year.
x Exposure at default (EAD): the loan amount that could be lost upon default; for loan
commitments this is the amount of the facility likely to be drawn if a default occurs.
x Loss given default (LGD): the proportion of the exposure that will be lost if a default
occurs.
x Maturity (M): the remaining economic maturity of the exposure.

Given these four inputs, the corporate IRB risk-weight function produces a capital requirement
for each exposure. The RWA for a given exposure is given by
RWA = 12.5 × EAD × K.    (III.B.6.5)

9 Risk weight functions represent the latest version of the Accord (BCBS, 2004)


The capital requirement, K, is the minimum capital per unit exposure, and is given by10

K = LGD × [ N( (N^{−1}(PD) + √R · N^{−1}(0.999)) / √(1 − R) ) − PD ] × MF(M, PD),    (III.B.6.6)

where N(.) denotes the cumulative normal distribution and N–1(.) is its inverse. The formula for K
is based on a simple credit portfolio model, with some adjustments as follows:

x The term N(.) represents the 99.9% default losses of an infinitely granular homogeneous
portfolio of unit exposure and 100% LGD, under a one-factor Merton-type credit
model.11
x The term LGD × [N(.) − PD] denotes the unexpected default losses12 of the infinitely
granular portfolio (already adjusted for loss given default); that is, the 99.9% losses
minus the expected losses (EL = PD × LGD).
x The parameter R denotes the one-factor asset correlation for the homogeneous
portfolio in the credit portfolio model. It is obtained from a calibration exercise by the
BCBS:
R = 0.12 · (1 − e^{−50·PD}) / (1 − e^{−50}) + 0.24 · [1 − (1 − e^{−50·PD}) / (1 − e^{−50})].    (III.B.6.7)

R is a decreasing function of the default probability ranging from 24% to 12%, with
higher-quality obligors showing higher systematic risk than lower-quality obligors. Figure
III.B.6.1 gives the correlation parameter R for corporate exposures (as well as for retail).
x The final component of the capital requirement in (III.B.6.6) is the maturity function,
MF, which is given by

MF(M, PD) = [1 + (M − 2.5) · b(PD)] / [1 − 1.5 · b(PD)],    (III.B.6.8)

with b(PD) = [0.11852 − 0.05478 · log(PD)]² (the function b is referred to as the maturity
adjustment). The maturity function MF is equal to one for loans of one-year maturity.
Obtained from a calibration exercise by the BCBS, MF empirically adjusts further the
default losses (given by N(.) ) to the MtM losses of loans of higher maturity than one
year.
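Putting equations (III.B.6.5)–(III.B.6.8) together, the following Python sketch computes K, the RWA and the minimum capital for a single wholesale exposure; the PD, LGD, EAD and maturity inputs are illustrative, and log(PD) is taken as the natural logarithm.

from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf          # standard normal cdf
N_inv = NormalDist().inv_cdf  # its inverse

def corporate_correlation(pd):
    # Asset correlation R for corporate exposures, equation (III.B.6.7)
    w = (1 - exp(-50 * pd)) / (1 - exp(-50))
    return 0.12 * w + 0.24 * (1 - w)

def maturity_function(m, pd):
    # Maturity adjustment MF(M, PD), equation (III.B.6.8)
    b = (0.11852 - 0.05478 * log(pd)) ** 2
    return (1 + (m - 2.5) * b) / (1 - 1.5 * b)

def corporate_K(pd, lgd, m):
    # Capital per unit of exposure, equation (III.B.6.6)
    r = corporate_correlation(pd)
    wcdr = N((N_inv(pd) + sqrt(r) * N_inv(0.999)) / sqrt(1 - r))
    return lgd * (wcdr - pd) * maturity_function(m, pd)

pd, lgd, ead, m = 0.01, 0.45, 100.0, 2.5    # illustrative inputs
K = corporate_K(pd, lgd, m)
rwa = 12.5 * ead * K                        # equation (III.B.6.5)
print(K, rwa, rwa * 0.08)                   # K per unit, RWA, minimum capital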

10 Note that the 12.5 multiplier cancels the 8% term in the capital. This simply allows the RWA to be expressed
consistently with the Basel I formulae.
11 See Chapter III.B.5 for credit portfolio models and Gordy (2003) for a detailed treatment of this formula.
12 The original formulae in CP3 (BCBS, 2003) did not subtract expected losses. After a period of consultation on the
role of EL, the final version bases the risk weights exclusively on unexpected losses.


For corporate, bank and sovereign exposures the foundation IRB and advanced IRB approaches
differ primarily in terms of the inputs that are provided by a bank based on its own estimates and
those that have been specified by the supervisor. These differences are summarised in Table
III.B.6.5.

Table III.B.6.5: Foundation and advanced IRB for corporate, bank and sovereign
exposures

Data input                    Foundation IRB                                 Advanced IRB

Probability of default (PD)   Provided by bank based on own estimates        Provided by bank based on own estimates

Loss given default (LGD)      Supervisory values set by the Committee        Provided by bank based on own estimates

Exposure at default (EAD)     Supervisory values set by the Committee        Provided by bank based on own estimates

Maturity (M)                  Supervisory values set by the Committee, or,   Provided by bank based on own estimates
                              at national discretion, provided by bank       (with an allowance to exclude certain
                              based on own estimates (with an allowance      exposures)
                              to exclude certain exposures)

Thus, all IRB banks must provide internal estimates of PD. In addition, advanced IRB banks
must provide internal estimates of LGD and EAD, while foundation IRB banks will make use of
supervisory values that depend on the nature of the exposure. For example, under the foundation
approach:
x LGD=45% for senior claims,
x LGD=75% for subordinated claims.

These initial LGDs are then adjusted to reflect eligible collateral and guarantees provided for
each transaction. In BCBS (2004), the committee decided to adopt a more stringent definition of
LGD, where LGDs must be determined based on economic downturn values, and not averages
over the cycle or current values.

A major element of the IRB framework pertains to the treatment of credit risk mitigants:
collateral, guarantees and credit derivatives. The LGD parameter provides a great deal of
flexibility to assess the potential value of credit risk mitigation techniques. For foundation IRB


banks, therefore, the different supervisory LGD values reflect the presence of different types of
collateral. Advanced IRB banks have even greater flexibility to assess the value of different types
of collateral. With respect to transactions involving financial collateral, the IRB approach seeks to
ensure that banks are using a recognised approach to assess the risk that such collateral could
change in value, and thus a specific set of methods is provided, as in the standardised approach.

In the case of trading book exposures, Basel II outlines the same treatment of EADs as in Basel
I, calculating loan equivalents through the use of add-ons and allowing partial recognition of
netting and mitigation (e.g. as in equations (III.B.6.2) and (III.B.6.3)). However, practitioners and
industry associations like ISDA have pointed out the limitations of such a technique in terms of its
accuracy and risk sensitivity, its recognition of mitigation and natural offsets, and the over-
conservative capital it demands for trading exposures, compared to loans.13 As industry pressure
is building to allow for internal models for trading book EADs, the BCBS is currently revising
this topic.14

Advanced IRB banks will generally provide their own estimates of effective maturity for these
exposures, although there are some exceptions where supervisors can allow fixed maturity
assumptions. For foundation IRB banks, supervisors can choose on a national basis whether to
apply fixed maturity assumptions or to provide their own estimates of remaining maturity.

III.B.6.4.5 IRB for Retail Exposures


For retail exposures, there is only a single, advanced IRB approach and no foundation IRB
alternative. Retail exposures are classified into three primary product categories, with a separate
risk-weight formula for each:
x residential mortgages exposures (RMEs);
x qualifying revolving retail exposures (QRREs);
x other retail exposures (OREs).

The QRRE category refers to unsecured revolving credits, which include many credit card
relationships. The other retail category refers to all other non-mortgage consumer lending,
including exposures to small businesses.

13 For example, short of allowing for full portfolio models, Canabarro et al. (2003) propose to use as a loan equivalent
exposure for a given counterparty the expected positive exposure (EPE) derived from an internal model (e.g. from a
Monte Carlo simulation), perhaps increased by a small percentage.
14 The SEC in the USA has issued a proposed capital rule for broker-dealers, aligned with Basel II requirements, which
allows for internal models; see Securities and Exchange Commission (2004).


The key inputs to the IRB retail formulae are PD, LGD and EAD, all of which are to be
provided by the bank based on its internal estimates (no maturity component). In contrast to
corporate exposures, these values are not estimated for individual exposures, but instead for
pools of similar exposures.

Given these three inputs, the retail IRB risk-weight function produces a specific capital
requirement for each homogeneous pool of exposures. The risk-weighted assets for a given
exposure are also given by expression (III.B.6.5).

The formula for the capital requirement K for all three retail product categories is

K = LGD × [ N( (N^{−1}(PD) + √R · N^{−1}(0.999)) / √(1 − R) ) − PD ],    (III.B.6.9)

where again N denotes the cumulative normal distribution and N–1 is its inverse. The term N(.) –
PD is the same as for corporate exposures: the 99.9% unexpected default losses of an infinitely
granular homogeneous portfolio of unit exposure and 100% LGD. As there is no maturity
adjustment for retail exposures, capital only covers default risk.

The parameter R denotes the one-factor asset correlation for the homogeneous portfolio and is
different for each product category, according to a calibration exercise by the BCBS:
x residential mortgages: R = 15%
x QRREs:15 R = 4%
x other retail: R = 0.03 · (1 − e^{−35·PD}) / (1 − e^{−35}) + 0.16 · [1 − (1 − e^{−35·PD}) / (1 − e^{−35})]

Figure III.B.6.1 gives the correlation parameter R for retail as well as corporate exposures.
Correlations are in practice smaller for retail than for wholesale exposures: corporate correlations
decline from 24% to 12% as PD increases, while retail correlations are flat at 4% and 15% for
QRREs and mortgages respectively, and decline from 16% to 3% for other retail.
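A companion sketch for the retail risk-weight function (III.B.6.9) and the three correlation rules above; the PD and LGD values are illustrative, and there is no maturity adjustment.

from math import exp, sqrt
from statistics import NormalDist

N = NormalDist().cdf
N_inv = NormalDist().inv_cdf

def retail_correlation(pd, category):
    # Asset correlation R by retail product category (final Basel II rules)
    if category == "mortgage":
        return 0.15
    if category == "qrre":
        return 0.04
    w = (1 - exp(-35 * pd)) / (1 - exp(-35))     # other retail
    return 0.03 * w + 0.16 * (1 - w)

def retail_K(pd, lgd, category):
    # Capital per unit of exposure, equation (III.B.6.9): default risk only
    r = retail_correlation(pd, category)
    wcdr = N((N_inv(pd) + sqrt(r) * N_inv(0.999)) / sqrt(1 - r))
    return lgd * (wcdr - pd)

for category in ("mortgage", "qrre", "other"):
    print(category, retail_K(pd=0.02, lgd=0.45, category=category))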

15 In CP3, the correlation for QRREs was originally given by

R = 0.02 · (1 − e^{−50·PD}) / (1 − e^{−50}) + 0.11 · [1 − (1 − e^{−50·PD}) / (1 − e^{−50})].

Industry groups recommended revisions for the retail correlation curves based on best practices (e.g. RMA, 2003),
suggesting that correlations for retail exposures are lower and, unlike wholesale exposures, not dependent on PD. In
response, the final version of the accord uses a flat 4% correlation.


Figure III.B.6.1: Correlation parameter, R

[Line chart: asset correlation R (0 to 0.24) plotted against PD (0 to 0.2) for corporate, mortgage, QRRE (CP3), QRRE (final) and other retail exposures.]

In (III.B.6.9), as with wholesale exposures, EL = LGD × PD is excluded from minimum capital, but is to
be monitored through rules that determine the value of a bank’s total regulatory capital.16

III.B.6.4.6 IRB for SME Exposures


Exposures not classified as purely ‘retail’, but with turnover of less than €50m, are classified as
SMEs. They are further divided into:
x retail SMEs, where exposures are less than €1m (they may be treated as retail exposures
if they are managed by the bank in the same way as retail exposures);
x corporate SMEs, where the exposure is more than €1m.

SMEs that fall into the corporate approach will apply the corporate IRB risk-weight formula,
with an optional firm-size adjustment, which leads to a discount in minimum capital. The value
of the adjustment is a function of turnover of the borrower (up to €50 million). The average
discount will be about 10%, and can range between 1% and 20%. Within this range, a mid-sized
firm with a 2% PD would be weighted at 100%.

Retail SMEs can be treated using the retail IRB formulae. This requires showing that their
exposures are below the €1m threshold and that the bank indeed manages them as aggregated
retail (lower-risk, small, diversified loans managed on a pooled basis).

16 In this sense, banks will be required to compare loan loss provisions (general and specific) to EL, and deduct any
shortfall from its capital; excess provisions are credited through Tier 2 capital.


III.B.6.4.7 IRB for Specialised Lending and Equity Exposures


Specialised lending is associated with the financing of individual projects where the repayment is
highly dependent on the performance of the underlying pool or collateral. Two options are given
to treat these exposures:
x For all but one of the specialised lending subcategories, if banks can meet the minimum
criteria for the estimation of the relevant inputs, they can use the corporate IRB
framework to calculate risk weights. For ‘high-volatility commercial real estate’
(HVCRE), IRB banks that can estimate the required inputs will use a separate risk weight
that is more conservative than the general corporate risk weight.
x Since the hurdles for meeting these criteria for this set of exposures may be more
difficult in practice, CP3 also includes an additional option that only requires that a bank
classify such exposures into five distinct quality grades, and provide a specific risk weight
for each of these grades.

IRB banks must separately treat their equity exposures. Two distinct approaches are given:
x The first approach builds on the PD/LGD approach for corporate exposures and
requires banks to provide their own PD estimates for the associated equity exposures.
This approach, however, mandates the use of a 90% LGD value and also imposes
various other limitations, including a minimum risk weight of 100% in many
circumstances.
x The second approach provides the opportunity to model the potential decrease in the
market value of equity holdings over a quarterly holding period. A simplified version of
this approach with fixed risk weights for public and private equities is also included.

III.B.6.4.8 Comments on Pillar II


Pillar II of the Basel Accord (supervisory review) is based on a series of guiding principles which
point to the need for banks to assess their capital adequacy positions relative to their overall risks,
and for supervisors to review and take appropriate actions in response to those assessments.
Important new components of Pillar II for credit risk include the treatment of the following:
x Stress testing. Banks adopting the IRB approach to credit risk will be required to perform
meaningfully conservative stress tests of their own design to estimate the potential
increases in capital requirements during a stress scenario. Stress-test results are to be
used as a means of ensuring that banks hold a sufficient capital buffer to protect against
adverse or uncertain economic conditions. To the extent that there is a capital shortfall,
supervisors may, for example, require a bank to reduce its risks or increase its existing
capital resources to cover its minimum capital requirements plus the results of a
recalculated stress test.


x Concentration risk. Minimum capital risk weights assume that the portfolio is large and well
diversified. Within Pillar II, banks will need to assess the degree of concentration risk
they incur.
x Residual risks arising from the use of collateral, guarantees and credit derivatives, as well as specific
securitisation exposures (such as significant risk transfer and considerations related to the
use of call provisions and early amortisation features).

III.B.6.5 Basel II: Credit Model Estimation and Validation


In this section we broadly introduce the Basel II methodology for probability of default
estimation, the distinction between point-in-time and through-the-cycle ratings, the minimum
standards for credit monitoring processes and the validation methods.

III.B.6.5.1 Methodology for PD Estimation


Basel II requires that PDs be derived from a two-stage process:
1. Each obligor must be classified into a risk bucket, corresponding to its internal rating
grade. Obligors within a bucket share the same credit quality as assessed by the bank’s
internal credit rating system, which must be based on clear rating criteria.
2. A pooled PD is calculated for each bucket. For minimum capital calculations, this PD is
assigned to each obligor in a given bucket. Pooled PDs must be long-run averages of
one-year realised default rates for borrowers in the bucket.
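The two-stage quantification can be sketched as follows; the rating history below is invented purely to show the mechanics of pooling long-run one-year default rates by internal grade.

from collections import defaultdict

# Hypothetical history: year -> {internal grade: (obligors at start of year, defaults)}
history = {
    2001: {"A": (500, 1), "B": (300, 6)},
    2002: {"A": (520, 2), "B": (310, 9)},
    2003: {"A": (540, 1), "B": (290, 5)},
}

rates = defaultdict(list)
for year, buckets in history.items():
    for grade, (n_obligors, n_defaults) in buckets.items():
        rates[grade].append(n_defaults / n_obligors)   # realised one-year default rate

# Pooled PD per bucket: long-run average of the realised one-year default rates
pooled_pd = {grade: sum(r) / len(r) for grade, r in rates.items()}
print(pooled_pd)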

III.B.6.5.2 Point-in-Time and Through-the-Cycle Ratings


Since pooled PDs are assigned to rating buckets (rather than to individual obligors directly), they
can differ meaningfully from individual obligors’ PDs that result from a forecasting model. Thus,
the credit ratings approach can have a substantial effect on the minimum capital
requirements. In this sense, credit rating approaches can be classified into two categories (see, for
example, BCBS, 2000):
x In a point-in-time (PIT) rating approach, obligors are classified into rating grades based on
the best available current credit quality information. The internal rating reflects an
assessment of the borrower’s current condition and/or most likely future condition over
the time horizon. Thus, a rating changes as the borrower’s conditions changes over the
credit/business cycle. In general, PIT ratings tend to rise during business expansions as
obligors’ creditworthiness improve and tend to fall during recessions.
x In a through-the-cycle (TTC) rating approach, obligors are assigned to rating grades based
on their ability to remain solvent over a full business cycle or during stress events. A
borrower’s riskiness assessment is thus based on a worst-case, ‘bottom of the cycle’


scenario. Since they emphasise stress conditions, borrowers’ ratings would tend to be
more stable over the credit/business cycle.

In practice, there is large variation between banks’ rating approaches. Furthermore, the terms PIT
and TTC are often defined poorly and used differently across institutions. While not explicitly
requiring that rating systems be PIT or TTC, Basel II does hint at a preference for TTC
approaches.17

III.B.6.5.3 Minimum Standards for Quantification and Credit Monitoring Processes


Basel II gives banks great flexibility in determining how obligors are assigned to buckets, but it
establishes minimum standards for their credit monitoring processes. Banks can rely on their own
internal data, or data derived from external sources as long as they can demonstrate the relevance
of such data to their own exposures. In practical terms, banks will be expected to have in place a
process that enables them to collect, store and utilise loss statistics over time in a reliable manner.

Other minimum standards include the following:


• Internal rating systems should accurately and consistently differentiate degrees of risk.
• Banks must define clearly and objectively the criteria for their rating categories.
• A strong control environment must be in place to ensure that banks’ rating systems
perform as intended and that the resulting ratings are accurate.
• Banks must have independent and transparent ratings processes and internal reviews.

III.B.6.5.4 Validation of Estimates


Basel II requires that banks must have a robust system in place to validate the accuracy and
consistency of rating systems, processes, and the estimation of all relevant risk components.
While the accord does not go into methodological details, regulators broadly describe two
empirical approaches for validating PDs:
• Benchmarking involves comparing reported PDs for similar obligors across banks and
other external systems. Thus, banks that do not estimate PDs effectively or
systematically misrepresent them will report substantial differences with respect to their
peers.

17 From BCBS (2004, paragraph 415): ‘A borrower rating must represent the bank’s assessment of the borrower’s
ability and willingness to contractually perform despite adverse economic conditions or the occurrence of unexpected
events. For example, a bank may base rating assignments on specific, appropriate stress scenarios. Alternatively, a bank
may take into account borrower characteristics that are reflective of the borrower’s vulnerability to adverse economic
conditions or unexpected events, without explicitly specifying a stress scenario. The range of economic conditions that
are considered when making assessments must be consistent with current conditions and those that are likely to occur
over a business cycle within the respective industry/geographic region.’


• Backtesting compares the pooled PD for a grade with the actual observed (out-of-sample)
default frequencies for that grade. If a grade’s pooled PD is truly an estimate of the long-
run average of the grade’s observed yearly default frequencies, over time the two should
converge. However, in practice, this convergence could take many years, which presents
important technical challenges.
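The following sketch illustrates the backtesting idea with made-up numbers: a grade’s pooled PD is compared with out-of-sample default counts using a simple binomial tail probability. Real validation exercises are more involved, not least because defaults are correlated across obligors and over time.

from math import comb

def prob_at_least(n, k, p):
    """P(X >= k) when X ~ Binomial(n, p): chance of observing k or more defaults."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

pooled_pd = 0.010                               # pooled PD assigned to the grade
observations = [(400, 5), (420, 3), (390, 8)]   # hypothetical out-of-sample years: (obligors, defaults)

for n, defaults in observations:
    p_value = prob_at_least(n, defaults, pooled_pd)
    verdict = "investigate" if p_value < 0.05 else "consistent"
    print(f"n={n}, defaults={defaults}, observed rate={defaults/n:.2%}, "
          f"P(at least this many | pooled PD)={p_value:.3f} -> {verdict}")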

III.B.6.6 Basel II: Securitisation


A securitisation is a financial structure where cash flows from an underlying pool of exposures are
used to service one or more stratified positions, or tranches, reflecting different degrees of credit
risk. Payments of the structure depend on the performance of the underlying exposures. In a
traditional securitisation, the underlying pool commonly contains standard credit instruments such as
bonds or loans. In contrast, a synthetic securitisation uses credit derivatives or guarantees in a funded
(e.g. credit-linked notes) or an unfunded (e.g. credit default swaps) way. Common examples of
securitisation structures are collateralised bond obligations, collateralised loan obligations and
asset-backed securities. Securitisation by its very nature relates to the transfer of risks associated
with the credit exposures of a bank to other parties. In this respect, it provides better risk
diversification and contributes to enhancing financial stability. See Chapter I.B.6, and Section
I.B.6.6 in particular, for more details on these securitisation structures.

Some securitisations have enabled banks under the current Basel I Accord to avoid maintaining
capital commensurate with the risks to which they are exposed. This is commonly referred to as
regulatory arbitrage: the avoidance of minimum regulatory capital charges through the sale or
securitisation of a bank’s assets for which the true risk (and hence economic capital) is much
lower than regulatory capital. In contrast, Basel II provides a specific treatment for securitisation,
which requires banks to look to the economic substance of a securitisation transaction when
determining the appropriate capital requirement in both the standardised and IRB treatments.

Under the standardised approach, banks must assign risk weights prescribed by the accord
according to various criteria such as a facility type and its external rating (if available). Banks that
apply the standardised approach to the type of exposures securitised must also use it under the
securitisation framework. Some examples of capital charges under the standardised approach are
as follows (a short numerical sketch follows the list):
• A structure with a long-term rating of BBB is assigned a capital of 8% times its notional,
while an A-rated one is assigned 4% (100% and 50% risk weights, respectively).
• Most unrated structures have a capital factor of 100% (exceptions may include some of the most
senior tranches).


• Eligible liquidity facilities satisfying some basic criteria might be assigned a lower capital
by means of a credit conversion factor (CCF) which is less than 100% (total capital is the
product of the notional, the capital factor and the CCF).
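These examples reduce to one product: capital = notional × capital factor × CCF, with the capital factor equal to 8% of the prescribed risk weight. The sketch below uses a handful of illustrative risk weights only; the Accord’s own tables are considerably more detailed.

# Illustrative risk weights only; the Accord's standardised securitisation tables are far more detailed.
RISK_WEIGHTS = {"AAA": 0.20, "A": 0.50, "BBB": 1.00, "UNRATED": 12.50}   # 12.5 corresponds to full deduction

def standardised_capital(notional, rating, ccf=1.0):
    """Capital = notional x (8% x risk weight) x credit conversion factor."""
    capital_factor = 0.08 * RISK_WEIGHTS[rating]
    return notional * capital_factor * ccf

print(standardised_capital(100.0, "BBB"))            # 8.0   (100% risk weight -> 8% capital factor)
print(standardised_capital(100.0, "A"))              # 4.0   (50% risk weight -> 4% capital factor)
print(standardised_capital(100.0, "UNRATED"))        # 100.0 (capital factor of 100%)
print(standardised_capital(100.0, "AAA", ccf=0.5))   # 0.8   (eligible facility with an assumed 50% CCF)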

Banks applying the IRB approach for the type of exposures securitised must also apply it to
securitisations. The IRB approach proposes three methods for calculation:
• Ratings-based approach (RBA). In this case capital factors are tabulated based on a tranche’s
credit rating and thickness, as well as the granularity of the underlying pool. Thus, for
example, a AAA-rated thick tranche with a granular pool is assigned 0.56% capital (a 7% risk
weight), while a BB+ tranche gets 20% capital (a 250% risk weight).
• Internal assessment approach (IAA). Subject to meeting various operational requirements, a
bank may use its internal assessments of the credit quality of the securitisation exposures
that it extends to asset-backed commercial paper programmes (e.g. liquidity facilities and
credit enhancements). The internal assessment is then mapped to an equivalent external
rating, which is used to determine the appropriate risk weights under the RBA.
• Supervisory formula approach (SFA). This approach is an attempt to provide a closed-form
capital charge, based on a bottom-up risk assessment of the structure. The methodology
used to determine the formula is based on similar mathematical modelling of the
problem to that used to derive the IRB capital charge of individual exposures (see
equation (III.B.6.6)). In essence, the SFA is based on five bank-supplied parameters:
o Kirb – the capital charge of the underlying pool of exposures, had the assets not
been securitised;
o L – the credit enhancement supporting a given tranche (i.e. how big is the buffer
absorbing credit losses, before they hit the tranche);
o T – the thickness of the tranche;
o N – the effective number of exposures in the pool;
o LGD – the exposure-weighted average LGD of the pool.

The reader is further referred to various BCBS documents on securitisation; those interested in
the mathematics of the SFA are referred to Gordy and Jones (2003) and Pykhtin and Dev (2002
and 2003).

Basel II prescribes the use of these three approaches in a hierarchical manner. The RBA must be
applied to securitisation exposures that are rated, or for which a rating can be inferred. Where an external or
an inferred rating is not available, either the SFA or the IAA must be applied. Securitisation
exposures to which none of these approaches can be applied must be deducted from capital.
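This hierarchy amounts to a simple decision rule. The sketch below is a stylised paraphrase of the paragraph above (in particular, the ordering between the IAA and the SFA is an assumption made for illustration; the Accord leaves the choice between them to circumstances and supervisors).

def choose_securitisation_approach(has_external_or_inferred_rating,
                                   iaa_requirements_met,
                                   sfa_inputs_available):
    """Stylised Basel II hierarchy for securitisation exposures under the IRB treatment."""
    if has_external_or_inferred_rating:
        return "RBA"          # the ratings-based approach is mandatory whenever a rating exists or can be inferred
    if iaa_requirements_met:
        return "IAA"          # internal assessment mapped to an equivalent external rating (ABCP exposures)
    if sfa_inputs_available:
        return "SFA"          # supervisory formula based on Kirb, L, T, N and the pool LGD
    return "DEDUCTION"        # exposures to which no approach applies are deducted

print(choose_securitisation_approach(False, True, True))   # -> "IAA"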


III.B.6.7 Advanced Topics on Economic Credit Capital18


In this section we review
• the application of credit risk contribution methodologies for ECC allocation, and
• the shortcomings of VaR for ECC and the use of coherent risk measures as an alternative.

III.B.6.7.1 Credit Capital Allocation and Marginal Credit Risk Contributions


In this section, we briefly highlight key issues on the application of the methodologies introduced
in Section III.0.4.2 for computing marginal credit risk contributions for ECC allocation.

In addition to computing the total ECC for a portfolio, it is important to develop methodologies
to attribute this capital a posteriori to various sub-portfolios such as the firm’s activities, business
units and even individual transactions, and to allocate it a priori in an optimal fashion, to maximise
risk-adjusted returns. EC allocation down the portfolio is required for management decision
support and business planning, performance measurement and risk-based compensation, pricing,
profitability assessment and limits, and building optimal risk–return portfolios and strategies.

There is no unique method to allocate ECC. Section III.0.4.2 classifies EC contributions into
stand-alone contributions, incremental contributions, and marginal contributions.19 Every
methodology has its advantages and disadvantages, and might be more appropriate for a
particular managerial application.

The most common approach used today to attribute ECC on a diversified basis relies on the
marginal contribution of each position to the volatility (or standard deviation) of the portfolio losses. Such
allocations are generally ineffective for credit risk: since loss distributions are far from normally
distributed, they produce inconsistent capital charges, and in some cases a loan’s capital charge can
even exceed its exposure (see Praschnik et al., 2001; Kalkbrener et al., 2004).
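For reference, the volatility-based allocation criticised here typically works as in the following sketch (hypothetical simulated losses): each position receives capital in proportion to the covariance of its losses with the total portfolio loss, with the total ECC figure taken as given.

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical simulated loss scenarios: rows = scenarios, columns = positions (exposure x LGD on default)
losses = rng.binomial(1, [0.01, 0.02, 0.05], size=(100_000, 3)) * np.array([100.0, 80.0, 50.0])
total = losses.sum(axis=1)

total_capital = 30.0   # the total ECC is taken as given here (e.g. VaR minus EL, computed elsewhere)
cov_with_total = np.array([np.cov(losses[:, i], total)[0, 1] for i in range(losses.shape[1])])
vol_contributions = cov_with_total / total.std()   # marginal contributions to portfolio volatility
allocation = total_capital * vol_contributions / vol_contributions.sum()
print(allocation)   # capital attributed in proportion to each position's covariance with total losses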

Given the definition of ECC, the natural choice for allocating capital is the risk contributions to a
VaR-based measure. However, VaR has several shortcomings since it is not a coherent risk
measure (see Section III.B.6.7.2). Specifically, while VaR is sub-additive for normal distributions,
this is not true in general. This limitation is particularly relevant for credit losses, which may be
far from normal and not even smooth.20 Furthermore, the pointwise nature of VaR has generally
led to difficulties in computing accurate and stable risk contributions with simulation. Recently,

18 This section is not mandatory for the exam, but is added for completeness.
19 The reader is reminded that there is currently no universal terminology for these methodologies in the literature.
20 The discreteness of individual credit losses leads to non-smooth profiles and marginal contributions.


several authors have proposed the use of expected shortfall (ES) for attributing ECC (see, for
example, Kalkbrener et al., 2004).

As a coherent risk measure, ES represents a good alternative both for measuring and allocating
capital. In particular, ES yields additive and diversifying capital allocations. This requires,
however, a modification of the standard interpretation of ECC to act as ‘a buffer for an expected
loss conditional on exceeding a certain quantile’.

As explained in Section III.0.4, marginal contributions require the computation of a derivative of
the risk measure (see equation (III.0.16)). The general theory behind the definition and
computation of these derivatives in terms of quantile measures (e.g. VaR, ES) has recently been
developed (see Gouriéroux et al., 2000; Tasche, 2000, 2002). Several semi-analytical approaches
have also recently been proposed for VaR or ES contributions (e.g. Martin et al., 2001; Kurth
and Tasche, 2003). While computing the conditional expectations can be challenging when credit
losses are estimated from a Monte Carlo simulation, various methodologies have been devised in
recent years for VaR and ES contributions in simulation models (see Kalkbrener et al., 2004;
Hallerbach, 2003; Mausser and Rosen, 2004). Simulation provides the flexibility to support more
realistic credit models, which include diversification through multiple factors, more flexible co-
dependence structures, multiple asset classes and default models, stochastic (correlated)
modelling of exposures, and loss given default.
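A rough sketch of how ES and additive ES contributions can be read off a loss simulation is given below (hypothetical loss model; the variance-reduction and smoothing techniques discussed in the references are ignored). Note that the per-position conditional expected losses add up exactly to the portfolio ES.

import numpy as np

rng = np.random.default_rng(42)
alpha = 0.999
# Hypothetical simulated losses for three sub-portfolios (as produced by a credit portfolio model)
losses = rng.binomial(1, [0.002, 0.01, 0.03], size=(200_000, 3)) * np.array([500.0, 150.0, 60.0])
total = losses.sum(axis=1)

var_alpha = np.quantile(total, alpha)        # VaR at the chosen confidence level
tail = total >= var_alpha                    # scenarios in the alpha-tail
es_total = total[tail].mean()                # portfolio expected shortfall

# ES contributions: each sub-portfolio's expected loss conditional on the tail scenarios.
es_contributions = losses[tail].mean(axis=0)
print(es_total, es_contributions, es_contributions.sum())   # the contributions sum to the portfolio ES
# An ES-based capital figure would subtract expected losses from these conditional expectations.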

III.B.6.7.2 Shortcomings of VaR for ECC and Coherent Risk Measures


In its common interpretation, ECC is a buffer that provides protection against potential credit
losses, at a confidence level that is less than 100% (e.g. 99.9%). The confidence level is consistent
with the desired credit rating of the firm. This leads to measures of capital that reflect a given
quantile of a credit loss distribution.

Given this definition, VaR is an intuitive measure for ECC. However, VaR has several
shortcomings since it is not a coherent risk measure (in the sense of Artzner et al., 1999). In
particular, VaR is not sub-additive in general.

A risk measure ρ is said to be sub-additive if

ρ(X + Y) ≤ ρ(X) + ρ(Y) (III.B.6.10)

for any two portfolios X and Y. Thus, sub-additivity is a property of risk measures required to
account for portfolio diversification.


While VaR is always sub-additive for normal loss distributions, in more general cases the total
portfolio VaR might be higher than the sum of stand-alone VaRs. This is particularly relevant to
credit loss distributions, which are far from normal and not smooth, given the discreteness of
individual credit losses.

Example III.B.6.4
Consider a simple one-year BBB loan with a notional of $100. The obligor has a PD of 0.5% and
a 50% LGD. Expected losses are $100 × 0.005 (PD) × 0.5 (LGD) = $0.25. However, the 99% VaR is 0 (we
are more than 99% certain that we will not incur a loss). This leads to a ‘negative’ unexpected loss
and, thus, the stand-alone capital of this position is negative.

Now consider a portfolio that invests $100 equally in 10 one-year loans to different BBB
obligors. Assume obligor defaults are independent and a 50% LGD for all of them. In this case,
EL is still $0.25. There is now a 95.11% probability of no defaults and a 99.89% probability of
at most one defaulted loan. Thus, the 99% VaR is equal to $5 ($10 notional × 50% LGD).
Based on VaR, the total credit capital to support this portfolio is $5 – $0.25 = $4.75. However, the
stand-alone VaR of each loan is zero, so each loan’s stand-alone capital is, in principle, –$0.025 and
the stand-alone capitals sum to –$0.25, far below the portfolio figure of $4.75: VaR is not sub-additive here.
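The arithmetic of this example is easy to verify with a short script; the sketch below (purely illustrative) builds the binomial loss distribution of the ten-loan portfolio and recovers the 99% VaR of $5 and the VaR-based capital of $4.75.

from math import comb

n, pd_, lgd, notional = 10, 0.005, 0.50, 10.0    # ten loans of $10 each, PD 0.5%, LGD 50%
loss_per_default = notional * lgd                # $5 lost per defaulted loan

# Loss distribution under independence: the number of defaults is Binomial(10, 0.005)
pmf = {k * loss_per_default: comb(n, k) * pd_**k * (1 - pd_)**(n - k) for k in range(n + 1)}

def var(pmf, alpha):
    """Smallest loss level whose cumulative probability reaches alpha."""
    cumulative = 0.0
    for loss in sorted(pmf):
        cumulative += pmf[loss]
        if cumulative >= alpha:
            return loss

expected_loss = sum(loss * p for loss, p in pmf.items())   # $0.25
var_99 = var(pmf, 0.99)                                    # $5.00 (at most one default with 99.89% probability)
print(expected_loss, var_99, var_99 - expected_loss)       # 0.25, 5.0, 4.75 (VaR-based credit capital)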

The theory of coherent risk measures (Artzner et al., 1999) is well developed and has become
popular among academics and practitioners. Coherent risk measures such as expected shortfall
present a good alternative both for measuring and allocating capital. This requires, of course, the
modification of the standard definition and interpretation of capital as a buffer to cover x%
losses, to one where the buffer would cover the ‘expected losses conditional on reaching an x%
loss’.

III.B.6.8 Summary and Conclusions


This chapter reviews the main concepts for estimating and allocating ECC as well as the current
regulatory framework for credit capital. Credit portfolio methodologies are the key tools to
compute ECC from a bottom-up approach. Today, they are used broadly by practitioners for
estimating ECC and managing credit risk at the portfolio level. Furthermore, various institutions
are starting to apply credit portfolio frameworks to compute and manage ECC at the enterprise
level.

Credit portfolio models must be defined and parameterised consistently with the ECC definition
of the firm. This definition includes the time horizon, the type of credit loss (default only or
mark-to-market) and the confidence level (or quantile) of the loss distribution. Since ECC is


designed to absorb unexpected losses up to a certain confidence level, it is commonly estimated
by a VaR-type measure (at the defined confidence level) which subtracts expected losses. We
further address some potential shortcomings of VaR for measuring risk as well as for allocating
capital, and discuss other measures such as expected shortfall.

In the past, regulatory credit capital has differed significantly from ECC. However, the new Basel
II Accord for banking regulation has introduced a closer alignment of regulatory credit capital
with current best-practice credit risk management and ECC measurement. While today falling
short of allowing the use of credit portfolio models to estimate regulatory credit capital, Basel II
has introduced various approaches for minimum capital requirements of increased complexity
and alignment with the credit riskiness of an institution. In particular, for the first time, it allows
banks to use internal models for estimating key credit risk components (PDs, exposures and
LGDs). Furthermore, the IRB risk-weight formulae are based on solid credit portfolio modelling
principles. Finally, with its three-pillar foundation, Basel II focuses not only on the computation
of regulatory capital, but also on a holistic approach to managing risk at the enterprise level.

References
Artzner, P, Delbaen, F, Eber, J-M, and Heath, D (1999) Coherent measures of risk, Mathematical
Finance, 9(3), pp. 203–228.

Basel Committee on Banking Supervision (1988) International convergence of capital
measurement and capital standards. Available at http://www.bis.org

Basel Committee on Banking Supervision (1995) Basel capital accord: treatment of potential
exposure for off-balance-sheet items. Available at http://www.bis.org

Basel Committee on Banking Supervision (2000) Range of practice in banks’ internal ratings
systems. Discussion paper, available at http://www.bis.org

Basel Committee on Banking Supervision (2003) The new Basel capital accord: Consultative
document. Available at http://www.bis.org

Basel Committee on Banking Supervision (2004) International convergence of capital
measurement and capital standards: A revised framework. Available at http://www.bis.org

Canabarro, E, Picoult, E, and Wilde, T (2003) Analysing counterparty risk. Risk, September, pp.
117–122.

Gordy, M (2003) A risk-factor model foundation for ratings-based bank capital rules, Journal of
Financial Intermediation, 12(3), pp. 199–232.

Gordy, M, and Jones, D (2003) Random tranches, Risk, March, pp. 78–83.

Gouriéroux, C, Laurent, J-P, and Scaillet, O (2000) Sensitivity analysis of values at risk, Journal of
Empirical Finance, 7(3–4), pp. 225–245.


Hallerbach, W G (2003) Decomposing portfolio value-at-risk: a general analysis, Journal of Risk,
5(2), pp. 1–18.

Kalkbrener, M, Lotter, H, and Overbeck, L (2004) Sensible and efficient capital allocation for
credit portfolios, Risk, January, pp. S19–S24.

Kurth, A, and Tasche, D (2003) Contributions to credit risk. Risk, March, pp. 84–88.

Kupiec, P (2002) Calibrating your intuition: Capital allocation for market and credit risk. IMF
Working Paper WP/02/99, available at http://www.imf.org

Martin, R, Thompson, K, and Browne, C (2001) VAR: who contributes and how much? Risk,
August, pp. 99–102.

Mausser, H, and Rosen, D (2004) Scenario-based risk management tools. In S W Wallace and
W T Ziemba (eds), Applications of Stochastic Programming. Philadelphia: SIAM.

Praschnik, J, Hayt, G, and Principato, A (2001) Calculating the contribution, Risk, 14(10), pp.
S25–S27.

Pykhtin, M, and Dev, A (2002) Credit risk in asset securitisations: an analytical model, Risk, May,
pp. S16–S20.

Pykhtin, M, and Dev, A (2003) Coarse-grained CDOs. Risk, January, pp. 113–116.

Risk Management Association (2003) Retail credit economic capital estimation – best practices.
Working Paper, Risk Management Association, available at http://www.rmahq.org

Securities and Exchange Commission (2004) Proposed rule: Alternative net capital requirements
for broker-dealers that are part of consolidated supervised entities. 17 CFR Part 240.
http://www.sec.gov/rules/proposed/34-48690.htm

Tasche, D (2000) Conditional expectation as quantile derivative. Working paper, Technische
Universität München.

Tasche, D (2002) Expected shortfall and beyond. Working paper, Technische Universität
München.


III.C.1 The Operational Risk Management Framework


Michael K. Ong1

In this chapter I provide a brief outline of how to establish an operational risk management
framework within an institution. Operational risk has received a lot of attention recently although
it is not an entirely new field of risk management. Many of the biggest losses in the financial
industry and the corporate arena can be attributed, in one way or another, to operational risk
failures. The chapter begins by highlighting some of the better-known losses in the recent past
and argues why it is important for individual institutions to define what operational risk means to
them. The Basel II proposals, scheduled for promulgation in 2006, have also provided some
guidance (primarily to banking institutions) on the types of operational risk failure and their
associated loss event types. The chapter then discusses the goals and scope of an operational risk
management framework. It outlines the key components of operational risk and presents some
useful tools, e.g., the risk catalogue and risk scorecard, for identifying specific operational risk
failures. Finally, it explains how to make the risk assessment process work through the
involvement of senior management and every business unit within the institution.

III.C.1.1 Introduction
Operational risk management has become increasingly important for financial institutions over
the past several years. The need for a better understanding of operational risk is driven primarily
by two factors, namely, the growing sophistication of financial technology and the rapid
deregulation and globalisation of the financial industry. These factors contribute to the increasing
complexity of banking activities and, therefore, heighten the operational risk profile of the
financial services industry. Over the past few years, a significant number of high-impact and high-
profile losses, some leading to the demise of once revered, well-respected institutions, have
pointed consistently to failure in operational risk management.

This seemingly sudden awareness of operational risk management is quite ironic considering that
operational risk has always been an integral risk associated with doing business. ‘Operational risk
is as old as the banking industry itself’, the rating agency Fitch reports, ‘and yet, the industry has
only recently arrived at a definition of what it is’. The report goes on to say that ‘in its rating
analysis of banks, Fitch will be looking for evidence of a clearly articulated definition of
operational risk, examining the quality of an organization’s structure and operational risk culture,

1 Professor of Finance and Executive Director of the Center for Financial Markets, Stuart Graduate School of Business, Illinois
Institute of Technology. The author wishes to extend his sincerest thanks to the editors, Carol Alexander and Elizabeth Sheedy, for
their careful editing of this chapter.


the development of its approach to the identification and assessment of key risks, data collection
efforts, and overall approach to operational risk quantification and management’ (Ramadurai et
al., 2004). In addition, Moody’s believes that ‘operational risk management improves the quality
and stability of earnings, thereby enhancing the competitive position of the bank and facilitating
its long-term survival’ (Moody’s Investor’s Service, 2003). Moody’s goes on to comment that:
‘The control of operational risk is fundamentally concerned with good management, which
involves a tenacious process of vigilance and continuous improvement. This is a value-adding
activity that impacts, either directly or indirectly, on bottom-line performance. It must, therefore,
be a key consideration for any business. Since operational risk will affect credit ratings, share
prices, and organisational reputation, analysts will increasingly include it in their assessment of the
management, their strategy and the expected long-term performance of the business.’ Thus
rating agencies are now clearly interested in how financial institutions manage their operational
risk. In fact, how institutions manage their operational risks is likely to influence how they will be
rated by the rating agencies.

Against the background of greater complexity and opaqueness in the banking industry due to
technological advancement, the Basel Committee on Banking Supervision (2003) cites the
emergence of new forms of risk that require immediate attention:

‘Developing banking practices suggest that risks other than credit, interest rate and market risk
can be substantial. Examples of these new and growing risks faced by banks include:

• If not properly controlled, the greater use of more highly automated technology has the
potential to transform risks from manual processing errors to system failure risks, as
greater reliance is placed on globally integrated systems;
• Growth of e-commerce brings with it potential risks (e.g., internal and external fraud and
system security issues) that are not yet fully understood;
• Large-scale acquisitions, mergers, de-mergers and consolidations test the viability of new
or newly integrated systems;
• The emergence of banks acting as large-volume service providers creates the need for
continual maintenance of high-grade internal controls and back-up systems;
• Banks may engage in risk mitigation techniques (e.g., collateral, credit derivatives, netting
arrangements and asset securitisations) to optimise their exposure to market risk and
credit risk, but which in turn may produce other forms of risk (e.g. legal risk); and


• Growing use of outsourcing arrangements and the participation in clearing and
settlement systems can mitigate some risks but can also present significant other risks to
banks.

The diverse set of risks listed above can be grouped under the heading of “operations risk”.’

The types of risk listed above by the Basel Committee form the basis of the regulatory pressure
currently felt by many major financial institutions.
Authority (2003) reported that, even as late as in mid-2003, the financial industry was still in the
early stages of developing operational risk frameworks. In its survey, the FSA reported that ‘a
majority of firms stated that their primary motivation for developing the operational framework
was increased regulatory focus, with regulation a more significant driver in smaller firms than in
major financial groups’.

Should the impetus for developing a sound operational risk management framework be driven
primarily by emerging concerns raised by rating agencies or the threat of greater scrutiny by
regulatory authorities? In fact, should operational risk attain the limelight it is currently basking
under because of the impending capital charge being deliberated in Basel II? I think not. These
motivations in isolation are unlikely to lead to successful implementation of an operational risk
management function. A coherent, sound and successful operational risk management framework
can only come from an internal realisation and desire amongst senior management that this is a
value-adding activity that ultimately impacts the bottom line of the institution.

III.C.1.2 Evidence of Operational Failures


Table III.C.1.1 lists some of the largest derivatives losses on the Street during the 1990s. In all
cases the losses are attributable, at least in part, to operational risk. Losses resulted from flaws in
the risk management framework of the institutions concerned.

One of the most dramatic and well-documented derivatives losses was the collapse in 1995 of
Barings, Britain’s oldest merchant bank (200 years!). One person (Nick Leeson), based in
Singapore (several thousand miles away from corporate headquarters), managed to circumvent
internal systems over an extended period of time to hatch and hide his trading schemes. It
ultimately resulted in over $1.2bn of losses. This all pointed to senior management’s abject failure
to institute proper managerial, financial and operational control over the institution. Since the
bank’s risk management and control functions were very weak, the system of checks and balances
failed at several operational and managerial junctures and in more than one location where the
bank operated. The Barings debacle is not the story of just one single solitary rogue trader, but


rather the breakdown of an entire organisation that had failed to exercise sufficient oversight and
control of its people at all levels, its lack of clear directions and accountability for the processes within
the bank, and the failure of technology to detect trading and booking anomalies for an extended
period of time.

Yet all of these derivatives losses combined pale in comparison to the S&L crisis ($150bn) of the
1980s, the ‘non-performing’ real estate loans of Japanese banks ($500bn) in the early 1990s, the
Credit Lyonnais bankruptcy ($24bn) due to bad debt in 1996, and the more recent asset
management frauds at Deutsche Morgan Grenfell, Jardine Fleming, etc.2

The early 2000s witnessed the multi-billion dollar collapse of Enron, WorldCom, Tyco, Parmalat,
and many other fallen angels which, in more ways than one, brought about a sense of urgency for
better corporate governance. Corporate governance in essence calls for greater accountability of
senior management in an effort to combat corporate fraud. And corporate fraud is one common
instance of operational risk failure.

Table III.C.1.1: Publicly disclosed derivatives losses in the 1990s

Company/Entity                     Loss amount ($m)   Area of loss
Air Products                       113                Leverage & currency swaps
Askin Securities                   600                Mortgage-backed securities
Baring Brothers                    1240.5             Options
Cargill (Minnetonka Fund)          100                Mortgage derivatives
Codelco Chile                      200                Copper & precious metals futures and forwards
Glaxo Holdings PLC                 150                Mortgage derivatives
Long Term Capital Management       4000               Currency & interest rate derivatives
Metallgesellschaft                 1340               Energy derivatives
Orange County                      2000               Reverse repurchase agreements & leveraged structured notes
Procter & Gamble                   157                Leveraged German marks – US dollars spread

Source: Exhibit 1 in McCarthy (2000), taken from Brian Kettel, Derivatives: Valuable Tool or Wild Beast?
Copyright © 1999 by Global Treasury News (www.gtnews.com).

2 Details of other financial scandals can be found at http://www.ex.ac.uk/~rdavies/arian/scandals/.


Much more recently, improper trading in mutual funds has cost banks and some funds
management companies millions of dollars in fines. For instance, in mid-March of 2004 Bank of
America Corp. and its merger partner, FleetBoston Financial Corporation, agreed to pay a
collective sum of $675m to settle charges with securities regulators that they had defrauded
shareholders by allowing select investors to trade improperly in their mutual funds. The Boston
Globe reported on 16 March 2004: ‘With this agreement, mutual fund firms have now reached
settlements totalling $1.65 billion, eclipsing the $1.4 billion Wall Street firms agreed to pay [in
2003] to settle charges their analysts issued biased research to win investment banking business,
New York Attorney General Eliot Spitzer said.’

III.C.1.3 Defining Operational Risk


What is operational risk? This depends on what an institution wishes to gain from its operational
risk management function. No two institutions will have exactly the same definition of what
operational risk means since there are unique facets such as composition of the business
portfolio, internal culture, risk appetite, etc., that differentiate the types of operational risks the
institutions are exposed to. Nevertheless, there are some very clear commonalities that are shared
by different financial institutions.

There are many highfalutin and facetious ways to define operational risk. The most important
element to take into account, however, is to choose a definition that is in line with the
institution’s philosophy and sound management culture of taking proactive stances in managing the
risks of the enterprise.

One of the earliest definitions in the financial industry broadly defines operational risk in financial
institutions as the ‘risk that external events, or deficiencies in internal controls or information
systems, will result in an economic loss – whether the loss is anticipated to some extent or
entirely unexpected’. There are two obvious observations here. This early industry definition
identifies both the expected and unexpected losses attributable to operational mishaps. In simple
terms, expected losses are those losses incurred during the natural course of doing business, and
unexpected losses are usually associated with big surprises resulting from lapses in management
and breakdown in controls. The scope of operational risk in this early definition is quite broad
and extends to all facets and aspects of risk associated with both internal and external events,
tangible resources such as information technology and systems, and intangibles such as people
and process.

This early industry definition eventually became the cornerstone of the official definition from
Basel II. The first Basel definition of operational risk was simply ‘the risk of direct or indirect loss


resulting from inadequate or failed internal processes, people and systems or external events’.
This definition includes legal risk, but Basel II explicitly excluded strategic and reputational risk.
These exclusions are very important aspects of the daily operation of any financial institution, but
are admittedly much more difficult to assess and manage.

Concerns were expressed about the exact meaning of direct and indirect loss. Consequently the
current Basel II definition drops this distinction but provides clear guidance on which losses are
relevant for regulatory capital purposes. This is achieved by defining the types of loss events that
should be recorded in internal loss data. In its September 2001 press release for the ‘Working
Paper on the Regulatory Treatment of Operational Risk’, the Risk Management Group (RMG) of
the Basel Committee on Banking Supervision defined operational risk as ‘the risk of loss resulting
from inadequate or failed internal processes, people and systems or from external events’.

According to the RMG press release this is a ‘causal-based’ definition: ‘It is important to note that
this definition is based on the underlying causes of operational risk. It seeks to identify why a loss
happened and at the broadest level includes the breakdown by four causes: people, processes,
systems and external factors. This “causal-based” definition, and more detailed specifications of
it, is particularly useful for the discipline of managing operational risk within institutions.
However, for the purpose of operational risk loss quantification and the pooling of loss data
across banks, it is necessary to rely on definitions that are readily measurable and comparable.
Given the current state of industry practice, this has led banks and supervisors to move towards
the distinction between operational risk causes, actual measurable events (which may be due to a
number of causes, many of which may not be fully understood), and the P&L effects (costs) of
those events. Operational risk can be analysed at each of these levels.’ The Basel II definition is
primarily for capital adequacy purposes. That is, a key output of the regulatory framework is a
measure of the amount of capital required by a financial institution as a buffer against unexpected
operational risks.

III.C.1.4 Types of Operational Risk


The types of operational risk encountered daily within an institution are quite diverse and
plentiful. What are the main types of operational risk financial institutions need to be wary of?
The RMG struggled with this very same issue when it embarked on its event-by-event loss data
collection exercise in June 2002.3 In its press release, the Basel Committee on Banking
Supervision (2002) described the goals of this exercise: ‘The primary purpose of this survey is to

3 Having had two previous quantitative impact studies (QIS) in the previous years, the RMG decided that this more
recent loss data collection exercise would specifically concentrate on very granular loss data.


collect granular (event-by-event) operational risk loss data to help the Committee determine the
appropriate form and structure of the AMA (Advanced Measurement Approach). To facilitate the
collection of comparable loss data at both the granular and aggregate levels across banks, the
Committee is again using its detailed framework for classifying losses. In the framework, losses
are classified in terms of a matrix comprising eight standard business lines and seven loss event
categories. These seven event categories are then further divided into 20 sub-categories and the
Committee would like to receive data on individual loss events classified at this second level of
detail if available.’ The eight standard business lines are: corporate finance; trading and sales; retail
banking; commercial banking; payment and settlement; agency services; asset management; and
retail brokerage.

Table III.C.1.2: Basel II definition of business lines


In Table III.C.1.2 we see that investment banking as a primary business unit is further split up
into two level 1 sub-units: corporate finance; and trading and sales. Furthermore, within the
trading and sales sub-unit, there are further level 2 sub-delineations: sales; market making;
proprietary positions; and treasury activities. Each of these sub-units is classified based on its
respective business functions, such as fixed income, foreign exchange, equity, commodities,
credit, funding, brokerage, and so on.

The Committee proposes looking at seven loss event categories associated with each business
unit. These are depicted in Table III.C.1.3. The level 1 event types are the practical and obvious
events: internal fraud; external fraud; employment practices and workplace safety; clients,
products and business practices; damage to physical assets; business disruption and system
failures; and execution, delivery and process management. Furthermore, within the level 1 event
type category, there are at least two level 2 sub-categories, 20 sub-categories in total. Each of
these sub-categories is again classified based on the associated business activities.
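In internal loss databases this classification is often implemented as little more than a two-dimensional tally keyed by business line and event type. The sketch below is a minimal illustration only; the level 2 sub-categories and the mapping rules are firm-specific and omitted.

from collections import defaultdict

BUSINESS_LINES = [
    "Corporate finance", "Trading and sales", "Retail banking", "Commercial banking",
    "Payment and settlement", "Agency services", "Asset management", "Retail brokerage",
]
EVENT_TYPES = [
    "Internal fraud", "External fraud", "Employment practices and workplace safety",
    "Clients, products and business practices", "Damage to physical assets",
    "Business disruption and system failures", "Execution, delivery and process management",
]

# Granular loss database: (business line, event type) -> list of individual loss amounts
loss_matrix = defaultdict(list)

def record_loss(business_line, event_type, amount):
    """Record a single loss event in the Basel-style business line / event type matrix."""
    assert business_line in BUSINESS_LINES and event_type in EVENT_TYPES
    loss_matrix[(business_line, event_type)].append(amount)

# Hypothetical example entries
record_loss("Trading and sales", "Execution, delivery and process management", 125_000.0)
record_loss("Retail banking", "External fraud", 8_500.0)

totals_by_event_type = {et: sum(sum(v) for (bl, e), v in loss_matrix.items() if e == et)
                        for et in EVENT_TYPES}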

Table III.C.1.3: Basel II definition of loss event types



For internal purposes it is very important to establish suitable definitions of the different types of
operational risk that are relevant for each individual institution. Once suitable definitions of event
types are decided upon, the institution can proceed to establish a structure for the operational risk
management framework. It is interesting to note that almost all of the so-called internationally
active banks that are the primary focus of Basel II have their own unique definitions of
operational risk event types that are tailored to their particular businesses, corporate culture and
risk appetite. For example, one internationally active bank defines operational risk as: ‘The risk of
inadequate identification of and/or response to shortcomings in organizational structure, systems,
transaction processing, external threats, internal controls, security measures and/or human error,
negatively affecting the bank’s ability to realize its objectives.’ A smaller domestic bank defines
operational risk simply as: ‘The risk associated with the potential for systems failure in a given
market’.

III.C.1.5 Aims and Scope of Operational Risk Management


The fundamental goal of operational risk management should be risk prevention. The assessment
(meaning the quantitative measurement) of operational risk is of secondary importance.

Because complete elimination of operational risk failures is not feasible, our operational risk
management framework must aim to minimise the potential for loss – through whatever means
possible. Indeed, the risk management of the entire institution as an enterprise should focus more
on the operational aspects of the different business activities – with the important provision that


the other key risk management functions within the enterprise (e.g., market risk, credit risk, audit,
and compliance) are already firmly grounded.

Regardless of how an institution chooses to define operational risk, it is vitally important at the
outset to explicitly articulate what its target objectives and key concerns are. I suggest the
following important objectives when establishing an operational risk management function:
1. To formally and explicitly define and explain what the words ‘operational risk’ mean to the
institution.
2. To avoid potential catastrophic losses.
3. To enable the institution to anticipate all kinds of risks more effectively, thereby
preventing failures from happening.
4. To generate a broader understanding of enterprise-wide operational risk issues at all
levels and business units of the institution – in addition to the more commonly monitored
credit risk and market risk.
5. To make the institution less vulnerable to such breakdowns in internal controls and
corporate governance as fraud, error, or failure to perform in a timely manner which
could cause the interests of the institution to be unduly compromised.
6. To identify problem areas in the institution before they become critical.
7. To prevent operational mishaps from occurring.
8. To establish clarity of people’s roles, responsibilities and accountability.
9. To strengthen management oversight at all levels.
10. To identify business units in the institution with high volumes, high turnover (i.e.,
transactions per unit time), high degree of structural change, and highly complex support
systems. Such business units are especially susceptible to operational risk.
11. To empower business units with the responsibility and accountability of the business risks
they assume on a daily basis.
12. To provide objective measurements of performance for operational risk management.
13. To monitor the danger signs of both income and expense volatilities.
14. To effect a change of behaviour within the institution and to enhance the culture of
control and compliance within the enterprise.
15. To ensure that there is compliance with all risk policies of the institution and to modify risk
policies where appropriate.
16. To provide objective information so that all services offered by the institution take
account of operational risks.
17. To ensure that there is a clear, orderly and concise measure of due diligence on all risk-
taking and non-risk-taking activities of the institution.


18. To provide the executive committee4 regularly with a concise ‘state of the enterprise’
report for strategic and planning purposes.

The stated objectives tacitly assume that the institution already has in place robust credit risk and
market risk management functions, supported by audit, compliance and risk control oversight.
Note that the operational risk management objectives delineated above should apply to all
business units, including those responsible for market risk and credit risk management.

In practice, the scope of an operational risk management function within an institution should
aim to encompass virtually any aspect of the business process undertaken by the enterprise. The
scope must transcend those business activities that are traditionally most susceptible to
‘operations risk’ – that is, those activities with high volume, high turnover, and highly complex
support systems, e.g. trading units, back office, and payment systems. It is true that the business
activities sharing these characteristics have the greatest exposure to operational risk failures.
Nevertheless, other business activities could potentially sustain economic losses of similar
magnitude.

Table III.C.1.4: Two broad categories of operational risk

Operational strategic risk (‘external’)                 Operational failure risk (‘internal’)
Defined as the risk of choosing an inappropriate        Defined as the risk encountered in the pursuit of a
strategy in response to external factors such as:       particular chosen strategy due to:
• political                                             • people
• taxation                                              • process
• regulation                                            • technology
• societal                                              • others
• competition

Source: adapted from Crouhy et al. (1998)

Should operational risk management encompass external events? The goal of operational risk
management must be to focus on internal processes (as opposed to external events) since only
internal processes are within the control of the firm. The firm’s response to external events is,
however, a valid concern for operational risk management. Hence we examine operational risk
from two interrelated perspectives. Table III.C.1.4 distinguishes between operational ‘strategic’
risk (i.e., a flawed internal response to external stimuli) and operational ‘failure’ risk. Failure to

4 In this context, the Executive Committee, composed only of very senior members of management, is assumed to be
the highest governing body of the institution.


comply with externally dictated strategic risk factors – such as changes in tax laws5 and new
derivatives accounting treatment (e.g., FASB 133) – ultimately translates to an internal operational
risk failure. Once senior management issues the call to action in response to an external stimulus,
there must be no internal breakdown in the people, processes and technologies supporting the
strategic call to action.

III.C.1.6 Key Components of Operational Risk


In view of the very wide scope of an institution’s operational risk management function, we
might want to concentrate on some key components of operational risk, such as the following.

(i) Core operational capability


Risks to the institution’s core operational capability include the risk of premises, people or
systems becoming unavailable due to: natural disasters, fire, bombs or technical glitches; loss of
utilities such as power, water or transportation; employee disputes such as strikes; loss of key
operational personnel; and the loss or inadequacy of systems capabilities due to computer viruses
or Y2K issues. All of the aforementioned events seriously disrupt the institution’s core
competency in supporting its long-term and stable operations, thereby representing a considerable
exposure in terms of their possible impact on the institution’s future earnings and credibility. The
good news is, for the most part, that many of the risks mentioned above are largely insurable at
some cost to the institution. Insurance is, therefore, a useful risk mitigant for these kinds of
failure risk.6

(ii) People7
An institution’s most important assets are its good people. Unfortunately, people also contribute
a myriad of problems through: human error; fraud,8 lack of honesty9 and integrity; lack of
cooperation and teamwork; in-fighting, jealousies, and rumour-mongering; personal sabotage;

5 For example, an institution operating in another country might encounter some unexpected tax liability due to an
unforeseen change in local taxation rules that was not anticipated by the accounting department. This is definitely an
operational risk item that could lead to large unanticipated fines and tax liabilities.
6 Basel II currently does not recognise insurance as a risk mitigant, except in some restricted cases within the
Advanced Measurement Approach. This is somewhat odd considering the fact that banks have routinely used
insurance as a risk management tool.
7 Unfortunately, the category of people is by far the largest cause of operational risk failures.
8 An FDIC study found that fraud was the main contributing factor to 25% of 92 bank failures in the period 1960–77.
The proportion rises to 83.9% if one includes ‘insider fraud’ – i.e., improper lending to individuals or groups
connected with the bank. A review by the Bank of England suggested that, in the UK in the period 1984–96,
fraudulent concealment was a major contributory factor in 7 out of 22 cases of bank problems. In many of these
cases, these bank frauds were perpetrated by senior managers of the banks themselves.
9 On the subject of integrity and honesty, ‘rogue’ trading generally surfaces as the most obvious case of dishonesty and
fraud; however, we need to recognise that the problem of fraud also extends to teller theft, mailroom theft, illegal
funds transfers, and ‘insider’ fraud mentioned in footnote 8, etc. Some well-known cases of securities fraud and rogue
trading are: Kidder-Peabody (Jett, $340m, 1994), Orange County (Citron, $1.6bn, 1994), Barings (Leeson, $1.2bn,


office politics;10 lack of segregation and risk of collaboration; lack of professionalism and
customer focus; over-reliance on key individuals, insufficient skills, training and education;
insubordination; employment disputes; poor management and supervision; and a lack of culture
of control, discipline and compliance.11

(iii) Client relationships


An institution derives much of its value from its reputation and the services it provides to its
client base. Any damage to an institution’s reputation has the potential to disrupt revenue flow.
From an operational risk perspective, the institution needs to assess how disreputable activities
might harm its client relationships. Examples include: money laundering; Nazi gold;12 improper
client suitability and lack of disclosure;13 false valuations of client assets to mislead or conceal
losses; collusional relationships with broker-dealers; cosy association with highly-leveraged
institutions; and dishonest practices amidst competition that can harm the institution’s reputation.

(iv) Transactional and booking systems


Operational risk failures are no longer limited to settlement risk in the trading accounts or back
office. More recently, with advances in automation, transactional issues also include: data capture
and processing; deal confirmation14 and contractual documentation, e.g., ISDA master
agreements; collateral management; and general processing and payment/settlement errors which
not only disrupt the flow of business but also put an institution at risk of litigation. In addition,
corporate banking activities contributing to operational failures may include: correspondent banking;
payment services; treasury services; private trust and executor services; structured finance;
custody; and leasing. From a retail banking perspective, there is even more room for potential
operational failures associated with such retail banking activities as: mortgage servicing; funds
management; deposit taking; lending; foreign exchange; custody; credit cards; ATMs; private
banking; and insurance. The transactional processes associated with handling retail customers are
many: payments; cheque clearing; cash handling and teller errors; credit analysis; account opening;
documentation; mortgage applications; interest charges; processing credit/debit card transactions;
processing insurance claims; and payroll processes. In addition, associated with each of these

1995), Daiwa (Iguchi, $1.1bn, 1995), Sumitomo (Hamanaka, $1.8bn, 1995) and many other cases involving varying
amounts of losses.
10 Ask yourself this question: how many institution-wide problems and inefficiencies were caused by internal fights
and office politics?
11 Rogue trading is not the only contributor to well-publicised derivatives losses. Table III.C.1.1 is a list of the largest
derivatives losses attributable largely to failures in risk management where operational risk played a major role.
12 This has become an important reputation risk issue among a few big European banks in the recent past.
13 Among the most highly publicised client suitability and disclosure cases are the Gibson Greetings and Procter &
Gamble lawsuits against Banker’s Trust (1994) and the Orange County collective legal debacle with Merrill Lynch,
Morgan Stanley Dean Witter, and Nomura Securities (1994).
14 The best-known documented case of booking errors occurred at Salomon Brothers in the mid-1990s where a back-
office confirmation for a trade was erroneously booked several orders of magnitude larger than the intended trade.


business processes is heavy reliance on a sound and stable systems infrastructure within the
institution.

(v) Reconciliation and accounting


Of course the reconciliation of transactions at different levels of institutional activities is
important from a bookkeeping perspective. Another important aspect of reconciliation and
accounting is that it enables the institution to identify areas of inefficient capital allocation. More
broadly, a finance or accounting department, in a quagmire of bureaucracy and mere paper-
pushing, can do the institution a lot of harm by failing to provide senior management with a
precise picture of the state of the institution’s finances. The inability to reconcile properly the
revenue-generating activities with the general ledger per se inhibits the institution from strategically
assessing the performances of its business units and their growth potential.15 In addition, it also
undermines the institution’s ability properly to allocate its scarce resources, such as capital.
Finally, the inability of the finance, legal or accounting departments fully to assess the
implications of regulatory changes puts the institution at a serious disadvantage with regard to tax
shelters and favourable legal treatment of the institution’s assets and liabilities.

(vi) Change and new activities


To stagnate is to fall behind. But, in responding to rapid industry developments, the institution
should not stumble and fall when initiating new business activities. For example, the introduction
of the euro or the change in regulatory accounting rules, e.g., FAS 133,16 requires the institution
to be more vigilant in implementing new technology, re-engineering its processes, expanding its
staff, accepting new clients, launching new products or entering into new markets. More recently,
we have new regulatory directives on anti-money laundering and the financing of terrorism, the Sarbanes-
Oxley Act of 2002, which requires all listed institutions to enforce more effective corporate
governance and more effective financial reporting, and many other new directives
concerning securities fraud promoted by the Securities and Exchange Commission. Inability to
adapt to change may damage an institution’s reputation or disrupt the continuity of its old
businesses.17

15 Ask yourself this question: through our current finance and accounting systems, do we know for sure where we are
generating the greatest revenue for the least amount of risks that we take?
16 Ask yourself this question: how quickly can the institution conform to the FAS 133 directives without unduly taxing
its resources and disrupting its day-to-day business operations?
17 Ask yourself this question: can the institution leverage its current resources and technology and enter into a new
market without unduly incurring additional expenses? If the answer is negative, there is operational risk in the headline
risk categories involving people, processes and technology.


(vii) Expense and revenue volatility


A sure sign of operational failures associated with management control is a rapid increase in
expenses and significant deviations from budget. Expense increases (including bonuses, salaries,
and systems infrastructure spending) without adequate return signal a potential breakdown and
inefficiencies in people, process and technology. A rapid increase in expenses is not necessarily a
desirable sign of growth; in my years of observation, it is often a sure sign of lax accounting
and of complacent management on the verge of going out of control.18 It is normally associated
with a bank’s undesirable corporate culture of wanton waste and lack of accountability. On a
related matter, excessive revenue volatility is the result of at least two related factors: the inability
to respond properly to external market conditions; and the failure to control the operational cost
base. Each of these key factors has its obvious attendant operational failures in people, process and
technology in responding to external risk factors. I need not elaborate further on this.

III.C.1.7 Supervisory Guidance on Operational Risk


The Basel Committee on Banking Supervision has provided preliminary supervisory guidelines
for the management and supervision of operational risk. The guidelines are intended to serve as
best or sound practices within the financial industry. The Committee ‘recognises that the exact
approach for operational risk management chosen by an individual bank will depend on a range
of factors, including its size and sophistication and the nature and complexity of its activities.
However, despite these differences, clear strategies and oversight by the board of directors and
senior management, a strong operational risk culture and internal control culture (including,
among other things, clear lines of responsibility and segregation of duties), effective internal
reporting, and contingency planning are all crucial elements of an effective operational risk
management framework for banks of any size and scope. The Committee therefore believes that
the principles outlined in this paper establish sound practices relevant to all banks’ (Basel
Committee on Banking Supervision, 2003).

The Committee also recognizes that ‘internal operational risk culture is taken to mean the combined
set of individual and corporate values, attitudes, competencies and behaviours that determine a
firm’s commitment to and style of operational risk management.’ Recognising the different nature
of operational risk in different institutions, the Committee defines the management of operational
risk to mean the ‘identification, assessment, monitoring and control/mitigation’ of risk. To this
end, the sound practice paper of February 2003 is structured around ten basic principles grouped
into four main themes:

18 Ask yourself this question: in the past three years, did the increase in expenditure result in a greater market share or
revenue to the institution? If the answer is negative, there is operational risk involving inefficiency and misallocation
of precious resources.


x Developing an appropriate risk management environment.


x Risk management: identification, assessment, monitoring, and mitigation/control.
x Role of supervisors.
x Role of disclosure.

By imposing the ultimate responsibility on senior management via the board of directors, the first
theme emphasises the importance of cultivating a risk-awareness culture within the institution,
dictated directly from the highest level of the organisation – the board of directors. The second
theme maintains that the management of risk comprises four important complementary activities:
the surveillance and identification of risk within the institution, followed by thorough assessment,
the monitoring of events as they unfold and, finally, the devising of mechanisms to control and
mitigate these risks, even before they occur. As stated
earlier, prevention is key. Recognising that the safety and soundness of the financial system is a
collaborative effort, the third theme emphasises the important role regulatory supervisors play in
the risk management process of financial institutions. Finally, risk management can only be
facilitated properly if the process is clear and transparent at the outset. Public disclosure to the
market, therefore, serves as an important binding constraint on the behaviour of financial
institutions as they perform their intermediary functions for the public.

III.C.1.8 Identifying Operational Risk – the Risk Catalogue


How can an institution identify potential operational risks lurking within the organization? In the
recent past, many institutions have been surprised to discover that even the most obvious types
of operational risk are widely prevalent within the organisation, unnecessarily squandering
precious resources and hindering productivity.

The first step in the identification process is to require that each business unit (or operational unit)
be assessed using a so-called risk catalogue, which identifies all the risk categories
relevant to the specific unit being assessed. The catalogue clearly groups possible operational failures
under three headings: people, process and technology.


Table III.C.1.5: Risk catalogue for Business Unit A19

People Risk
Incompetency; Inadequate Head Counts; Key Personnel; Management; Communication;
Internal Politics; Conflict of Interest; Lack of Cooperation; Collusion and Connivance; Fraud

Process Risk
A. Model Risk: Model or Methodology Error; Pricing or Mark-to-Model Error; Availability of
Loss Reserves; Model Complexity
B. Transaction Risk: Execution Error; Booking Error; Collateral, Confirmation, Matching and
Netting Error; Product Complexity; Capacity Risk; Valuation Risk; Erroneous Disclosure Risk;
Fraud
C. Operations Control Risk: Limit Exceedances; Volume Risk; Security Risk; Position Reporting
Risk; Profit and Loss Reporting Risk

Technology Risk
Systems Failure; Network Failure; Systems Inadequacy; Compatibility Risk; Supplier/Vendor
Risk; Programming Error; Data Corruption; Disaster Recovery Risk; Systems Age; Systems
Support

Consider a simple example: suppose we have determined that Business Unit A, due to its intrinsic
business activities, is subject to some sources of operational risk. We can then check them off
against our generic risk catalogue as shown in Table III.C.1.5. This checklist can then form the
basis for assessing the loss frequency and loss severity of the different event types in the business unit
using the risk scorecard (see next section). We shall also see, in Section III.C.1.10, that the control
process requires the identification of pertinent operational risk failures at two different levels:
independent management oversight and self-assessments by individual business units.
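To make the link between the catalogue and the business units concrete, the checklist lends itself to a very simple data representation. The following Python sketch is purely illustrative: the catalogue entries are abbreviated from Table III.C.1.5, and the helper function and the ticks attributed to Business Unit A are hypothetical, not part of any prescribed methodology.

# Abbreviated, illustrative catalogue keyed by the three headline categories.
RISK_CATALOGUE = {
    "People": ["Incompetency", "Key Personnel", "Communication", "Fraud"],
    "Process": ["Execution Error", "Booking Error", "Limit Exceedances", "Valuation Risk"],
    "Technology": ["Systems Failure", "Data Corruption", "Disaster Recovery Risk"],
}

def relevant_risks(unit_ticks):
    """Return, in catalogue order, the risk types a business unit has checked off."""
    return {category: [r for r in risks if r in unit_ticks.get(category, set())]
            for category, risks in RISK_CATALOGUE.items()}

# Business Unit A ticks only the risks relevant to its intrinsic business activities.
business_unit_a = relevant_risks({"People": {"Fraud", "Key Personnel"},
                                  "Technology": {"Systems Failure"}})
print(business_unit_a)
# {'People': ['Key Personnel', 'Fraud'], 'Process': [], 'Technology': ['Systems Failure']}

The resulting checklist then feeds the frequency and severity assessments of the risk scorecard described in the next section.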

III.C.1.9 The Operational Risk Assessment Process


It is important to keep in mind that an operational risk assessment process without the aim of
proactive management is an exercise in futility. What drives our desire to ‘measure’ must be our
belief that a sound and active risk management structure is in place – or will be in place in the
future. As a precursor to building a sound risk management structure, measurement can be the
tool by which senior managers are convinced that such a structure is needed. Measurement also helps an
institution to identify the key problem areas of operational risk and thereby to target the resources
that will be allocated to them.

19 Risk types shown are for illustration only. In practice, different business units may have different types of risks they
are most concerned with. For instance, there is presumably no model risk in retail banking or in leasing.


For each business unit, the risk assessment process follows four fundamental steps: (1) inputs to
risk catalogue; (2) risk assessment scorecard; (3) review and validation; and (4) outputs of risk
assessment process.

Step 1: Inputs to Risk Catalogue


Operational risk should be evaluated net of risk mitigants. For example, if the institution has
insurance to cover a potential breakdown, then the degree of risk must be properly adjusted by
the insurance premium paid. To obtain a measure of net operational risk, the required inputs to
the risk catalogue must be able to adequately assess both the frequency of failure occurrences and
the severity of loss given that a failure occurs:

x The assessment for frequency of occurrences may come from both internal and external
reports, such as: audit reports; external audit reports; regulatory reports; management reports;
expense reports; deviation from business plans, operational plans, and budgets, etc.; and
expert opinion and industry ‘best practices’.

x An assessment for severity of loss may come from: management interviews, both pre and post
mortem; variances on budgets; insurance claims; and loss history, whenever possible.20

Step 2: Risk Assessment Scorecard


Using the risk catalogue and the inputs from step 1, each business or operational unit will be
assessed using a risk scorecard. The risk scorecard will appropriately identify and assess the nature
of operational risk based on the following broad points:
x Risk categories – people, process, technology, and external dependencies.
x Connectivity and interdependencies. Because the headline risk categories of people, process and
technology cannot be looked at in isolation, their cumulative effects and interdependencies must
be carefully identified and accounted for.
x Change, complexity, and complacency. The sources that drive the headline risk categories may be:
a change in the work environment, e.g., the introduction of new technology to the
business unit; the complexity of products, processes or technology; or complacency arising from
ineffective management of the unit.

20 A few major banks are beginning to gather their own internal loss experiences – the outcome will not be known
until many years from now. Loss history should also cover credit losses as a result of operational mishaps, loss due to
theft and fraud, and losses strictly due to errors. Admittedly, this is an extremely difficult task and the financial
industry is still struggling with how to collect these loss data. In addition, the RMG has also analysed data collected
from the numerous participating banks through the 2002 Operational Risk Loss Data Collection Exercise (LDCE) in
June of 2002. The 2002 LDCE was an extension and refinement of two previous data collection exercises sponsored
by the RMG, which primarily ‘focused on banks’ internal capital allocations for operational risk and their overall
operational risk loss experience during the period from 1998 to 2000’.


x Frequency and severity assessments. Quantifying the likelihood of breakdown in operational
processes is very difficult. It may be simply ‘rated’ as very likely, not likely, very unlikely and
so forth, or a question relating to the expected number of loss events may be posed. Severity
of loss describes the potential monetary loss to the institution, given the occurrence of an
operational failure. Since actual loss history may be difficult to come by, some institutions
subjectively attach a range of loss (e.g., between $5 million and $10 million for certain failures).
More details on the recommendations for frequency and severity of self-assessments are
given in Chapter III.C.3.
x Net operational risk. Operational risks should be evaluated net of risk mitigants. For instance,
the potential monetary amount lost due to certain insurable operational failures can be
reduced through the use of risk mitigants, e.g., insurance and underwriting. We need to find
out which bank activities are currently covered by insurance policies and by how much. In
addition, a catalogue of insurable bank activities needs to be prepared.
x Net risk assessment. The combination of all the ingredients in the risk scorecard enumerated
above gives the overall net risk assessment (a simple numerical illustration follows this list).
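As a purely numerical illustration of how these ingredients combine, the Python sketch below turns a qualitative frequency rating and a severity range into a rough net annual loss figure. The rating scale, the assumed event counts per rating and the insurance recovery rate are invented for the example; only the $5 million to $10 million severity range echoes the text above.

# Assumed mapping from qualitative frequency ratings to loss events per year.
FREQUENCY_PER_YEAR = {"very unlikely": 0.1, "not likely": 0.5, "likely": 2.0, "very likely": 10.0}

def net_risk_assessment(frequency_rating, severity_range, insured_fraction=0.0):
    """Rough expected annual loss for one failure type, net of insurance mitigation.

    severity_range is a (low, high) dollar range; insured_fraction is the share
    of any loss expected to be recovered from insurers."""
    expected_events = FREQUENCY_PER_YEAR[frequency_rating]
    expected_severity = sum(severity_range) / 2.0        # mid-point of the stated range
    gross = expected_events * expected_severity
    return gross * (1.0 - insured_fraction)              # net of risk mitigants

# A failure rated 'not likely' with a $5m-$10m severity range, 40% insured:
print(net_risk_assessment("not likely", (5e6, 10e6), insured_fraction=0.4))   # 2250000.0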

Step 3: Review and Validation


After the risk assessment process is completed (via the risk catalogues) and risk scorecards for
each business unit are produced, it is the responsibility of the operational risk management committee21
to review the assessment results with the management of the respective business unit and other
key officers of the institution. The responsibilities of the committee may include:
x Formulating a set of operational risk policies and guidelines clearly delineating the actions
needed to correct and prevent the operational problems and issues identified.
x Determining the important differences between the unit's own self-assessment and the
independent assessment.
x Opining on the ratings in the risk scorecards before publication.
x In conjunction with audit and compliance departments, issuing a mandatory report and list of
recommendations to the affected business units.
x Issuing summary risk reporting about the enterprise to the executive committee.

Step 4: Outputs of Risk Assessment Process


There are several possible outputs from the operational risk assessment process. We concentrate
on only three broad items.



(i) Improved Risk Reporting and Analysis:


As an ongoing goal of the operational risk assessment framework, the institution should
endeavour to streamline its risk reporting processes among the different risk-monitoring units of
the institution (i.e., audit, compliance, and risk control). These reports should be viewed as a
concise summary of specific audit and compliance reports which are already instituted within the
financial institution. The most useful of these reporting tools are the risk catalogue, risk
scorecards and ‘heat maps’ which are used to highlight the relative information on operational
risk exposures across the institution. An example of a heat map is given in Figure III.C.1.6. This
shows that Business Unit D, relative to all the other business units in the institution, has a
moderately high likelihood of incurring a large amount of loss due to operational risk failures.
This means that it requires a relatively large amount of economic capital to sustain its business
activities.

Figure III.C.1.6: Example of a heat map

[Figure: business units A to E plotted by likelihood (VL to VH) against severity of loss ($MM, 5 to 40);
the high-likelihood, high-severity quadrant requires urgent attention.]
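A heat map of this kind is straightforward to generate once each business unit carries a likelihood rating and a severity estimate from its scorecard. The Python sketch below is illustrative only: the ratings, severities and the $25MM threshold are invented, and the quadrant test shown is just one possible convention.

LIKELIHOOD_ORDER = ["VL", "L", "M", "H", "VH"]

# Illustrative (likelihood rating, severity of loss in $MM) per business unit.
units = {"A": ("VL", 5), "B": ("M", 30), "C": ("M", 10), "D": ("H", 35), "E": ("VH", 20)}

def needs_urgent_attention(likelihood, severity_mm, severity_threshold=25):
    """True if a unit falls in the high-likelihood, high-severity quadrant."""
    high_likelihood = LIKELIHOOD_ORDER.index(likelihood) >= LIKELIHOOD_ORDER.index("H")
    return high_likelihood and severity_mm >= severity_threshold

for name, (likelihood, severity) in units.items():
    status = "urgent attention" if needs_urgent_attention(likelihood, severity) else "monitor"
    print(f"Business Unit {name}: {status}")

Units flagged in this way are also the natural candidates for a relatively large allocation of economic capital, as noted above.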

21 In many financial institutions, the operational risk management committee is a committee within either the risk
management function or the audit function.


(ii) OCC Exam Chart


The Office of the Comptroller of the Currency (OCC), in its September 1995 press release,
highlighted its examination procedure of banks to cover nine principal categories of risk. They
are: credit risk; interest-rate risk; liquidity risk; price risk; foreign exchange risk; transaction risk;
compliance risk; strategic risk; and reputation risk. (The OCC mandate is directed at banks with
financial derivatives activities, therefore the nine categories of risk need not apply to every
business units of the financial institution.) Using the risk scorecard and other reports, we can
graphically represent the evaluation of a business unit in a concise manner in line with the OCC
examination procedure. This is illustrated by the OCC exam chart in Figure III.C.1.7.

Figure III.C.1.7: OCC exam chart

[Figure: inherent business risk for Business Unit C, rated from Low/Minimal to High/Extensive across
the nine OCC risk categories (credit, interest rate, liquidity, price, foreign exchange, transaction,
compliance, strategic, reputation), shown against the adequacy of management controls.]

(iii) Capital Attribution


By attributing economic capital to operational risks we can ensure that business units which are
more prone to operational failures are assigned a greater allocation of capital commensurate with
the risks that they take. Admittedly, this is a difficult and subjective task and a whole chapter of
this handbook is devoted to it (see Chapter III.0). It is also important to note that the current
Basel II proposals call for a regulatory capital charge for operational risk, in addition to the capital
charge already required for both market risk and credit risk (see Chapter III.C.3). Whether there
is wisdom behind a capital charge for operational risk remains to be seen (Ong, 1998). After all,
major financial catastrophes resulting from breakdowns in people, process and technology cannot
be prevented entirely. Thus, operational risk events cannot be eliminated altogether, regardless of
the amount of capital charge levied on operational risk.


III.C.1.10 The Operational Risk Control Process


With hindsight, the well-publicised derivatives losses listed in Section III.C.1.2 were all preventable.
Outside of derivatives activities, losses attributable to fraud in other lines of business are likewise
preventable. Since these events, countless studies have continued to point to the following failures in
risk control as the ultimate culprits.
(i) Lax management structure: lack of adequate management oversight and accountability;
no segregation of duties; too many delays in systems development; disrespect for
audit reports; and ignorance.
(ii) Inadequate assessment of risk in on- and off-balance-sheet activities: lack of stress-testing for
unexpected market moves; inappropriate setting of limits; too much risk relative to
capital; and lack of risk-adjusted return measurement.
(iii) Lack of transparency: inaccurate information on capital, solvency and liquidity;
inadequate accounting policies; lack of benchmarks and comparability; lack of
approvals, verifications and reconciliations; and lack of review of operating
performance.
(iv) Inadequate communication of information between levels of management: lack of escalation
process in times of crises; paying too much ‘lip service’ in the various risk
committees; lack of procedures for monitoring and correcting deficiencies; and poor
communication between different risk-monitoring groups.
(v) Inadequate or ineffective audit and compliance programmes.

To help facilitate the identification of operational risk failures within the institution by business
unit, it is important that the operational risk management function is streamlined alongside the
risk control, compliance, and audit functions of the institution. Lessons that we have learned
from many highly publicised financial fiascos all point to the need for the following:

(i) Independent management oversight. This includes audit oversight, risk control and
compliance functions, and most definitely senior management involvement. Each
overseer plays the role of independent ‘risk monitor’, considering such operational
performance measures as volume, turnover, settlement failures, delays, errors,
compliance to market and credit limits, income and expense volatility, effectiveness
of line management, accounting anomalies, and other higher-level controls. For
many business activities of the holding company, this information is already available
within the institution.

(ii) Self-assessments by individual business units. Line management has the best knowledge of
its own people, the day-to-day processes it has to go through, the integrity of the


systems supporting the business unit, and the external circumstances that could
cause its people, process and technology to fail. A self-assessment by the individual
business units is, therefore, a key first step in the operational risk assessment process.

III.C.1.11 Some Final Thoughts


Operational risk failures can wreak havoc within an organization if not properly identified,
assessed, monitored, controlled and mitigated. Furthermore, if not sufficiently contained,
operational risk also has a tendency to spill over and cause systemic risk to the broader markets.
In spite of its importance, the task of containing operational risk is still more art than science.
Operational risk management continues to be one of the least developed areas of enterprise risk
management in spite of the heightened attention it has received. Perhaps since operational risk is
fundamentally qualitative in nature, it might never be as developed as other risk areas.

While much progress has been made over the past several years, primarily within the
financial industry, the fundamental operational risk management framework continues to be
widely misunderstood. People with a quantitative background have used the Basel II
proposals as their impetus for furthering the argument for more operational risk modelling,
while people with compliance, audit and risk control backgrounds tend to interpret the Basel II
guidance as an opportunity to codify additional policies, thereby reducing operational risk
management to a mere set of rules and regulations. In 2006 there will be a new regulatory capital
charge for operational risk, but no regulators in their right mind would think that operational risk
management is about levying capital charges. While capital charges are important, they are only
one small component of prudent risk management, and they are not a good substitute for sound
judgement. ‘An increase in capital will not itself reduce risk; only management action can achieve
that’, a Moody’s Special Comment reported (Moody’s Investors Service, 2003).

My personal experience as head of enterprise risk management and chief risk officer for two of
the ten largest banks in the world has taught me that most operational risk failures are
preventable. The processes outlined in this chapter are based on my experience of how to prevent
operational risk mishaps. Experience tells me that the most important aspect of the operational
risk management framework is still sound corporate governance and proactive senior
management involvement. After all, the control of operational risk is concerned fundamentally
with good management. In practice, good management means vigilance, patience, and persistence
in improving the risk management process. And this is what operational risk management is all
about.


References
Basel Committee on Banking Supervision (2002) Operational Risk Loss Data Collection Exercise –
2002, 4 June.

Basel Committee on Banking Supervision (2003) Sound Practices for the Management and Supervision of
Operational Risk, February.

Crouhy, M, Galai, D and Mark, R (1998) Key steps in building consistent operational risk
measurement and management. In Operational Risk and Financial Institutions. London: Risk Books.

Financial Services Authority (2003) Building a Framework for Operational Risk Management: The FSA’s
Observations, July.

McCarthy, E (2000) Derivatives revisited. Journal of Accountancy, 189(5). See
http://www.aicpa.org/pubs/jofa/may2000/mccarthy.htm

Moody’s Investors Service (2003) Moody’s Analytical Framework for Operational Risk Management of
Banks. Special Comment, January.

Ong, M (1998) On the quantification of operational risk – a short polemic. In Operational Risk and
Financial Institutions. London: Risk Books.

Ramadurai, K, Olseon, K, Andrews, D, Scott, G and Beck, T (2004) The Oldest Tale but the
Newest Story: Operational Risk. Special Report, FitchRatings, January.


III.C.2 Operational Risk Process Models


James Lam1

III.C.2.1 Introduction
Management and board attention to operational risk management (ORM) has never been greater.
While businesses have always faced operational risks, the discipline of ORM is still in the early
stages of development. The focus on operational risk has been driven by a number of important
factors:

x Corporate disasters. The need for ORM first gained the attention of risk management
professionals in the 1990s when they realized that the root causes underlying the major
financial disasters – Barings, Kidder, Daiwa, etc. – were operational risks and not financial
risks. More recent corporate failures such as Enron and WorldCom, as well as the market-
timing and late-trading problems plaguing the mutual fund industry, have reinforced the
importance of ORM. In the aftermath of these disasters, the standards for corporate
governance and risk management have increased. These new standards impact not only
corporate executives and boards, but also key stakeholders such as stock analysts, rating
agencies, and regulators.

x Regulatory actions. In response to the corporate disasters, regulators have dramatically


increased their examination and enforcement standards. New regulations with significant
operational risk requirements include Sarbanes-Oxley (in particular, Section 302 on
certification of chief executives and chief financial officers and Section 404 on internal
controls), the Patriot Act, anti-money laundering and bank secrecy acts, and other corporate
governance rules adopted by the stock exchanges. Additionally, the new Basel initiative
(Basel II) has established a direct linkage between minimum regulatory capital and a bank’s
underlying risks, including explicit treatment for operational risk.

x Industry initiatives. A number of industry initiatives have been organized around the world to
establish frameworks and standards for corporate governance and risk management. These
initiatives include the Treadway Report (United States, 1993) that produced the Committee
of Sponsoring Organizations (COSO) framework of internal controls, while the Turnbull
Report (United Kingdom, 1999) and the Dey Report (Canada, 1994) developed similar

1 President, James Lam & Associates; founding member, Blue Ribbon Panel, PRMIA; and Senior Research Fellow,
Beijing University.


guidelines. It is noteworthy that the Turnbull and Dey reports were supported by the stock
exchanges in London and Toronto, respectively. In 2004, COSO is scheduled to release a
major study on enterprise-wide risk management (ERM), which will include key ERM
principles and advocate its application within a sound corporate governance framework.

x Corporate programs. Corporations have achieved significant benefits from their risk
management programs; among these are stock price improvement, debt rating upgrades,
early warning of risks, loss reduction, and regulatory capital relief. While ORM programs
are relatively new, early adopters have reported sustained reductions in operational losses and
error rates (one company reported a sustained 80% reduction in operational risk losses).
Other reported benefits include improved customer service and operational efficiency.
These results demonstrate that investments in operational risk controls can produce direct
benefits that are multiples of the costs, as well as indirect benefits such as prevention of
crises that divert management attention and cause reputational damage.

x Technology developments. Over the past decade, technology developments have transformed
how businesses operate. Examples include using the Internet to communicate with
customers and facilitate commerce; developing customer relationship management
applications to better serve customer segments; and outsourcing IT operations and business
processes to improve efficiency. In risk-intensive industries, such as financial and energy
services, corporations have also developed sophisticated models and databases to measure
all types of risk. While these technology developments provide business benefits, they also
present new and complex risks, such as information security, data integrity, cyber-crime,
cyber-terrorism, systems availability, and model risk. These risks require operational risk
controls for day-to-day operations, as well as disaster recovery planning for unlikely but
potentially disastrous events.

Going forward, the key trends and developments highlighted above should continue to exert
significant pressure on corporate boards and executives to improve their risk management
capabilities, especially in the area of operational risk.

The focus of this chapter is on the development and application of operational risk process
models. We will discuss the following questions:
x How to develop and apply operational risk process models?
x What are the specific quantitative and qualitative tools used by companies today?
x How to link these tools with economic capital allocation?
x What are the actions management can take to mitigate operational risk?


At the end of this chapter, we will use IT outsourcing as an example to illustrate how an
operational risk process can be established.

III.C.2.2 The Overall Process


In developing and applying operational risk process models, risk managers should first take
advantage of other related programmes that can provide valuable information or tools. While the
discipline of ORM is relatively new, businesses have always had to ensure that their operations
are effective and efficient. As such, many companies have implemented programmes to identify,
monitor, and improve their business processes. These programmes often fall under the monikers
of re-engineering or total quality management, and these corporate-wide efforts produce detailed
process maps and performance metrics. In addition to process improvement, companies have
also implemented risk assessment processes to identify key operational risks. These risk
assessments are either performed by the business and operating units themselves (known as
control self-assessments), or by independent internal or external audit groups.

As a starting point, the methodologies and results from these initiatives – process maps,
performance metrics, audit ratings – can be used to gain a deeper understanding of the general
scope and specific issues that the ORM program must address. With this knowledge, the
development of operational risk process models should include the following four steps:

x Step 1: Establish the objectives and requirements of key stakeholders. The design of operational risk
process models should always start with the end goal(s) in mind: what are the key business
and operational objectives for the company? These objectives can generally be grouped into
three categories, business performance, financial performance, and compliance. Business
performance objectives include product innovation, customer acquisition and retention, and
market share. Financial objectives include earnings growth, risk-adjusted profitability, and
shareholder value. Compliance objectives should encompass internal risk policies and limits,
as well as external regulatory and legal requirements. For example, one of the key objectives
of a capital markets trading business is to maintain its market risk exposures within board-
approved risk policy limits.

x Step 2: Identify the core processes that support these objectives. Most companies view their businesses
vertically in terms of operating units, support functions, products, or customer segments.
However, companies must manage their business processes horizontally to fully address the
operational risks that may prevent them from achieving their key objectives. This is because
the core processes of any company – customer acquisition, product delivery, cash


management, etc. – involve the participation of various entities within and outside of the
organization. To better understand these linkages, process maps should be developed for the
core processes of the company. These process maps should be driven by the objectives of
the company, and highlight the specific interdependencies, such as work flows, data flows,
and/or cash flows. It is also important to note that risk management is a process. As an
example, Figure III.C.2.1 shows a process map for daily market risk measurement. This
process map shows the work flows between the front office (traders), the middle office (risk
management), and the back office (accounting and IT). It also shows two time-critical
objectives: an account reconciliation between the open positions report from the front office
and the accounting report by 10 a.m., and a daily market risk report showing risk exposures
against limits by 4.30 p.m.

Figure III.C.2.1: Daily market risk measurement process map

[Figure: work flows between the front office (deal capture, open positions report, formulation and
execution of hedging strategies), the middle office (market risk system, account reconciliation by
10 a.m., daily risk report versus policy limits by 4.30 p.m., exception management and reporting)
and the back office (accounting data, daily accruals, back testing), drawing on the market risk and
financial market databases.]

x Step 3: Define performance and risk metrics, including goals and MAPs. For each core process of the
company, performance metrics and risk metrics should be clearly defined. For example, the
systems availability of a core application is essential for day-to-day operations. A company might
set 100% systems availability as a goal and 99.99% as minimum acceptable performance (MAP).
Similarly, goals and MAPs should be established for all key performance and risk metrics. As
such, all of the company’s operations can be monitored against specific benchmarks. Over time,
management can respond proactively to specific processes that perform below MAP. For
processes that perform consistently above goal, the goal and MAP for those processes can be
raised to encourage continuous improvement. To follow on with our example, an operational risk
metric for the daily market risk measurement process may be the percentage of time that the daily
market risk report is produced by 4.30 p.m. Management can establish 99% as the goal and 95%
as the MAP. As such, the goal is to produce the daily market risk report by 4.30 p.m. on 250 out
of the 253 trading days in a year, with a MAP of 240 days.

x Step 4: Implement organizational and risk mitigation strategies. With a clear understanding of stakeholder
objectives and supporting core processes, and performance of those processes against
performance standards, the company is well positioned to execute the appropriate ORM
strategies. These strategies may include: new training programs; new IT applications; process
redesigns; management restructuring; integration of audit, compliance, security and ORM
activities; specific investigations and corrective actions; and risk transfer through insurance
programmes.

In our example, suppose management noticed that the daily market risk report was late 4 times in
a month, a below-MAP performance since, at that rate, the annual frequency would far exceed the
13 late reports per year allowed by the 95% MAP (a short sketch of this arithmetic follows these steps).
An investigation revealed that the main reason, or root cause, is that the traders are late in
updating their daily trades. Risk mitigation strategies may include discussion forums to resolve
any misunderstandings or conflicts, or hiring new trading assistants to support the traders.
General Electric is well known for its ‘workouts’ in which cross-functional teams are organized to
discuss and resolve any operational issues in an open forum. To highlight the importance of this
process, senior executives usually attend the last session of these workouts, to obtain an in-
person report from the team leaders on how they plan to address any outstanding issues.
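The goal and MAP arithmetic used in Steps 3 and 4 is easily automated. In the Python sketch below, the 253-trading-day year and the 99%/95% benchmarks are taken from the example above; the function names and the crude annualisation of the monthly late count are our own illustrative assumptions.

TRADING_DAYS = 253

def days_required(pct, trading_days=TRADING_DAYS):
    """On-time daily reports implied by a percentage benchmark."""
    return round(pct * trading_days)

GOAL_DAYS = days_required(0.99)   # 250 reports by 4.30 p.m.
MAP_DAYS = days_required(0.95)    # 240 reports by 4.30 p.m.

def assess(on_time_days):
    """Compare an observed on-time count against goal and MAP."""
    if on_time_days >= GOAL_DAYS:
        return "above goal -- consider raising the benchmark"
    if on_time_days >= MAP_DAYS:
        return "between MAP and goal"
    return "below MAP -- investigate root causes"

# Four late reports in one month annualise to roughly 48 late days, far more than
# the 13 late days (253 - 240) that the 95% MAP allows.
late_per_year = 4 * 12
print(assess(TRADING_DAYS - late_per_year))   # below MAP -- investigate root causes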

More sophisticated companies go beyond these four steps in their ORM programmes, and
allocate economic capital to each business unit based on its operational risks. The direct linkage
between capital requirements and operational risks is one of the key developments in Basel II.
The allocation of capital to operational risks provides a number of benefits:


x Management can measure risk-adjusted profitability consistently across different business
units and products. In fact, performance models that do not fully adjust for risks (such
as economic value added models) would overstate the profitability of high-risk
businesses and understate the profitability of low-risk businesses.
x Organizational incentives, in the form of lower capital charges, are provided to business
units that effectively manage their operational risks. One of the key objectives of any
risk model is to motivate appropriate behaviour.
x In the evaluation of risk transfer strategies, such as insurance, management can compare
the cost of risk retention (i.e., economic capital times the cost of capital) and the cost of
risk transfer (i.e., net cost of the insurance strategy).

As we will discuss later, the allocation of economic capital to operational risk will enhance the
evaluation of the costs and benefits of these strategic alternatives; a simple numerical sketch of the
retention-versus-transfer comparison follows.
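The sketch below, in Python, is a hypothetical illustration of that comparison; the capital amount, cost of capital, premium and expected recoveries are invented purely for the example.

def cost_of_retention(economic_capital, cost_of_capital):
    """Annual cost of holding economic capital against a retained operational risk."""
    return economic_capital * cost_of_capital

def cost_of_transfer(annual_premium, expected_recoveries):
    """Net annual cost of an insurance strategy: premium less expected recoveries."""
    return annual_premium - expected_recoveries

# Illustrative figures: $50m of economic capital at a 12% cost of capital, versus
# a $7m annual premium with $2.5m of expected recoveries.
retain = cost_of_retention(50e6, 0.12)     # 6.0 million per year
transfer = cost_of_transfer(7e6, 2.5e6)    # 4.5 million per year
print("prefer transfer" if transfer < retain else "prefer retention")

In practice the comparison is less clear-cut, since risk transfer also changes the amount of capital that needs to be held, but the sketch captures the basic trade-off described above.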

III.C.2.3 Specific Tools


Given the wide scope of operational risk, a company should employ a range of qualitative and
quantitative tools to assess, measure, and manage operational risks. Below is a summary of the
basic ORM tools that companies use today:

(i) Loss-incident database. A company should record operational losses and also keep a record of
operational incidents for two main reasons. First, losses are measurable and can be used to
indicate trends (e.g., trend in the loss/revenue ratio). Incidents record other events that should be
noted, even if they did not result in an operational loss. Second, every loss and incident within a
company represents a learning opportunity, without which past mistakes are more likely to be
repeated. As such, the loss-incident database should be used to support the identification of
operational risk exposures, the development of risk-based audits (in which high-risk business
units are audited more frequently), as well as to facilitate the sharing of lessons learned within the
company. Additionally, there are several industry initiatives to develop more robust loss-event
databases, but it is too early to tell which one(s) will become the industry standard. It is unlikely,
however, that the management of operational risk will ever become a wholly data-driven process;
given the nature of operational risk, it will always be more of a management issue than a
measurement issue.
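A loss-incident database does not need to be elaborate to deliver these benefits. The sketch below is only an illustration of the idea; the field names, records and revenue figure are assumed, not a prescribed schema.

from dataclasses import dataclass

@dataclass
class LossEvent:
    business_unit: str
    risk_type: str        # e.g. people, process or technology
    description: str
    loss_amount: float    # zero for a near-miss incident
    lesson_learned: str   # every loss or incident is a learning opportunity

events = [
    LossEvent("Trading", "process", "booking error on a swap", 250_000.0, "introduce a four-eyes check"),
    LossEvent("Retail", "technology", "ATM outage, no monetary loss", 0.0, "review the failover procedure"),
]

def loss_to_revenue_ratio(events, annual_revenue):
    """Simple trend statistic: recorded operational losses as a share of revenue."""
    return sum(e.loss_amount for e in events) / annual_revenue

print(f"{loss_to_revenue_ratio(events, 500e6):.4%}")   # 0.0500% of revenue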

(ii) Control self-assessment. A control self-assessment (as distinct from a risk self-assessment – see
Section II.C.2) is an internal, subjective analysis of the key risks, the controls available to mitigate
these risks, and the management implications. It is important for all of the business units to
assess their current situation in terms of a control self-assessment to develop a clear picture of

Copyright © 2004 J. Lam and The Professional Risk Managers’ International Association. 6
The PRM Handbook – III.C.2 Operational Risk Process Models

how to proceed in the ORM process. Given that they fully participated in the assessment
process, they would also have a greater sense of ‘ownership’ to address outstanding opportunities
or issues. Tools that support self-assessments include questionnaires, issue-specific interviews,
team meetings, and facilitated workshops. The output is an inventory of key risk exposures, key
control initiatives, and sometimes even a Letterman-style ‘top 10 risks’. The following are
questions that might be included in a control self-assessment:
1. What are the key business and financial objectives for the business unit in the next 12
months?
2. What are the key risks that may prevent you from attaining these objectives?
3. What policies, procedures and controls do you have in place to ensure that risks are
within acceptable levels?
4. What new risk management initiatives do you have planned for the next 12 months?
5. What metrics, tests and reviews provide you with assurance that the policies,
procedures and controls specified in 3 and 4 above are indeed effective?

(iii) Risk mapping. Building on the work from control self-assessments, the company’s key risk
exposures can be ranked with respect to their ‘probability’ and ‘severity’ so that management can
have a comparative view in the form of a two-dimensional risk map. Figure III.C.2.2 shows an
example of a risk map. Some ORM professionals argue that companies should be most
concerned about events of low probability and high severity because management does not have
sufficient experience in dealing with these events, whereas they have more experience in dealing
with events of high probability and high severity. For operations that are more complex (e.g.,
outsourcing arrangements, special-purpose vehicles), risk-based process maps can be produced to
show how various risk exposures can arise. These maps will aid in the identification of the risks
encountered in each business unit, indicating ‘problem spots’, such as single points of failure or
where errors often occur. These maps will also enable each business unit to develop and
prioritize their risk management initiatives to address the most important risks.

(iv) Key risk indicators. Risk indicators are quantitative measures that are linked to operational risks
for a specific process. Examples include customer complaints for a sales or service unit, trading
errors for a trading function, unreconciled items for an accounting function, or system downtime
for an IT function. These risk indicators are usually developed by the individual business units
and closely tied to their business objectives. Early-warning indicators should also be developed
to provide management with leading signals (e.g., employee absenteeism and turnover as an early
warning indicator of future operational errors). As discussed earlier, the establishment of goals
and MAPs will provide useful performance benchmarks against which the key risk indicators can
be measured.


Figure III.C.2.2: Risk Map

[Figure: the ten risk exposures below plotted by probability against severity.]

Risk                    Probability   Severity
1. Investment/Credit    High          High
2. Human Capital        Low           High
3. Valuation            Medium        High
4. Liquidity–Funding    Medium        Medium
5. Operational          Low           High
6. Reputational         Medium        High
7. Economic             Medium        High
8. Compliance           Low           High
9. Leverage             Medium        Medium
10. Interest Rate       High          Low

Other sources of valuable information for risk identification and assessment include internal
audit reports, external assessments (external auditors, regulators), employee exit interviews, and
customer and employee surveys.

Operational risk professionals also find it useful to distinguish between key risk indicators (KRIs)
and key risk drivers (KRDs). KRIs are ex-post indicators of operational risk performance, in that
management has no direct control over their outcomes. On the other hand, KRDs are levers
that management has direct control over. Examples of KRDs include number of training hours,
number of automated versus manual processes, time to fill open positions, and time to resolve
outstanding audit findings. As such, KRDs can be best thought of as controllable factors that
will influence future KRIs.

III.C.2.4 Advanced Models


When ORM first came on the scene a few years ago, there were basically two distinct schools of
thought. One school subscribed to the notion that you cannot manage what you cannot
measure, and they focused on quantitative tools such as loss distributions, risk indicators, and
economic capital models. The other school believed that operational risk cannot be quantified
effectively, and they focused on more humanistic, qualitative approaches such as self-
assessments, risk maps, and audit findings.2 Today, operational risk professionals realize that
best practices must integrate both quantitative and qualitative tools.

2 See Lam (2003a).


In addition to the basic risk identification and assessment tools discussed above, leading
companies employ advanced operational risk models. Unlike market risk and credit risk, where
risk measurement methodologies have been developed and tested for many years, there are no
widely accepted models for operational risk measurement. In selecting a methodology (or
combination of methodologies), each company should first establish its objectives and resources
and choose accordingly. Different methodologies imply different interpretations of operational
risk, and require various inputs to be useful. Given that there is likely to be no single solution, a
combination of methodologies will allow the disadvantages of one model to be balanced by the
strengths of another, allowing a more robust overall measurement to be developed. Some of the
most common methodologies, including their strengths and weaknesses, are discussed in this
section (see Hernandez et al., 2000, for a detailed discussion of key strengths and weaknesses of
various ORM models).

III.C.2.4.1 Top-down models


The top-down approach to operational risk assessment calculates the ‘implied operational risk’ of
a business by using data that are usually readily available, such as the overall financial
performance of the company or that of the industry in which it operates. Top-down models use
relatively simple calculations and analyses to arrive at a general picture of the operational risks
encountered by a company. These top-down models benefit from the sophisticated
methodologies already developed for credit and market risk. Examples of top-down models on
operational risk include the implied capital model, the income volatility model, the economic
pricing model, and the analogue model:

(i) Implied capital model. This methodology assumes that the domain of operational risk is ‘that
which lies outside of credit and market risk’. Thus, the capital allocated to operational risk must
be the result of subtracting the capital attributable to credit and market risk from the total
allocation of capital. Although this model provides an easily calculated ‘number’ for operational
risk, its simplicity presents several disadvantages. First, total risk capital must be estimated given
the company’s actual capital and the relationship between its actual debt rating and target debt
rating. Second, it ignores the interrelationships between operational risk capital and market risk
and credit risk capital. Finally, this model does not explicitly capture the causes and effects for
operational risk.
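In its simplest form the implied capital model is a single subtraction. The lines below merely restate that arithmetic in Python with invented figures; as the text notes, the result treats operational risk as a residual and inherits all of the weaknesses listed above.

def implied_operational_capital(total_risk_capital, credit_capital, market_capital):
    """Operational risk capital as the residual of total risk capital after credit and market risk."""
    return total_risk_capital - credit_capital - market_capital

# Illustrative only: $1.0bn of estimated total risk capital, of which $600m is
# attributed to credit risk and $250m to market risk.
print(implied_operational_capital(1_000e6, 600e6, 250e6))   # 150000000.0, i.e. $150m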

(ii) Income volatility model. This model is similar to the implied capital model, but it goes one step
further, by looking at the primary determinant of capital allocation – income volatility. The
volatility attributable to operational risk is calculated in the same way as in the implied capital
model – by subtracting the credit and market risk components from the total income volatility.


One of the advantages of this model is that of data availability: historical credit and market risk
data are usually easily obtained, and total income volatility can be observed. However, this model
also has several shortcomings, the most dramatic of which is that it ignores the rapid evolution of
firms and industries. Structural changes, such as new technologies or new regulations, are not
captured in this model. The income volatility model also fails to capture softer measures such as
opportunity costs or reputation damage. In addition, it fails to capture the low-frequency, high-
severity risks, as is true in all of the top-down approaches.

(iii) Economic pricing model. The capital asset pricing model (CAPM) is probably the most widely
used of economic models, and can be used to determine a distribution of the pricing of
operational risk relative to the other determinants for capital (see Chapter I.A.4). The CAPM
assumes that all market information is captured in the share price; thus the effect of publicized
operational losses can be determined by evaluating the market capitalization of a company. The
advantage of this approach is that it incorporates both discrete risks and softer issues such as
reputational damage and effects of forgone opportunities. With this approach, a company’s
stock price volatility due to operational risk is derived by taking the company’s total stock price
volatility and subtracting from it the stock price volatility due to credit risk and market risk.
However, the CAPM approach presents an incomplete and simplistic view of operational risk. It
provides only an aggregate view of capital adequacy, not information about specific operational
risks. Furthermore, the level of operational risk exposure is not affected by particular controls
and business risk characteristics, so there is no motivation to improve operations, and while tail-
end risks are incorporated in the model, they are not thoroughly accounted for. This is a
significant omission. Such incidents can do more than just diminish the value of a business: they
can lead to the end of the business completely. Finally, this model does not help in anticipating,
and therefore avoiding, incidents of operational risk.

(iv) Analogue model. The analogue model is based on the assumption that one can look at external
institutions with similar business structures and operations to derive operational risk measures for
one’s own organization. This model can be extended to look for the causes and effects of
operational losses at such institutions. This method offers one way to proceed when a company
does not have a robust database of operational risk losses. However, it takes some credulity to
assume that the high-level numbers of another institution can accurately measure one’s own
operational risk, and many are suspicious of this approach. In the words of one analyst: ‘[The]
intangibles within an institution – its risk-taking appetite, the character of its senior executives,
the bonus structure of its traders – put so many wild cards into the operational risk equation that
similarities in business volume, transaction volume, documented risk policies and other qualities
that can be scored are swamped.’


III.C.2.4.2 Bottom-up models


The bottom-up methodology applies loss amounts and/or causal factors to predict operational
losses in the future. It requires a company to clearly define the different categories of operational
risk that it faces, gather detailed data on each of these risk categories and then quantify the risk. A
company often needs to augment its internal data with an external loss-event database. The final
output of this bottom-up approach is a loss distribution that enables operational risk capital to be
estimated for a given confidence level (see Chapter III.C.3). A number of surveys have indicated
an increasing preference for risk-based bottom-up methodologies over the top-down approaches.
The Basel II requirements should further encourage banks to develop bottom-up models. The
data needed for this methodology can also be used to derive a business risk profile. For example,
turnover or error rates can be tracked over time and combined with changes in business activities
to construct a more robust picture of the business operational risk profile. By tracking these
KRIs over time, the company can assess its operational risk exposure on an ongoing basis and
can upgrade specific controls as needed. Furthermore, continuous tracking provides a company’s
management with better information about its operations and increases awareness of the causes
of operational risk.
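To make the idea of a loss distribution concrete, the sketch below simulates annual operational losses under an assumed Poisson frequency and lognormal severity and reads off a high quantile as an indicative capital figure. Every parameter here is invented for illustration; a real model would be calibrated to internal and, where necessary, external loss data, and the capital estimation itself is discussed in Chapter III.C.3.

import math, random

random.seed(42)

def poisson(lam):
    """Draw a Poisson-distributed event count (Knuth's method)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

def simulate_annual_loss(freq_lambda, sev_mu, sev_sigma):
    """One simulated year: a Poisson number of events, each with a lognormal severity."""
    return sum(random.lognormvariate(sev_mu, sev_sigma) for _ in range(poisson(freq_lambda)))

# Assumed parameters: 10 events per year, median severity $100,000, heavy-ish tail.
losses = sorted(simulate_annual_loss(10, math.log(100_000), 1.2) for _ in range(10_000))
capital_999 = losses[int(0.999 * len(losses))]   # 99.9th percentile of annual losses
print(f"indicative operational risk capital: {capital_999:,.0f}")

Combining such statistical output with the scenario analysis described below gives a more robust overall picture than either approach on its own.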

However, bottom-up models present several difficulties. Mapping loss data from the company
with loss data from other companies is complex, given the differences in business mix, size,
scope and operating environment. Even mapping internal losses to specific risk types is difficult
because losses are frequently reported as aggregates from multiple risk sources that are difficult
to isolate. For example, an operational loss on a trading floor might result from personnel risk,
lack of trading controls, expanding overseas business, lack of back- and front-office segregation,
volatile markets, senior management confusion, and incompetence. In addition, robust internal
historical loss data may not be available, particularly for low-frequency, high-severity events.

Bottom-up models are usually based on statistical analysis and scenario analysis. Classical
statistical models require an ample supply of operational loss data that are relevant to the
business unit. The lack of appropriate internal data is therefore the greatest obstacle to the
widespread application of this methodology; the use of external data as a proxy poses several
problems, as mentioned earlier. However, the analytical power of this tool will hopefully become
more widely applicable in the near future as increased awareness of operational risk leads to
improvements in data collection and extensions of the classical statistical methodology.

Scenario analysis offers several benefits that are not addressed by the classical statistical models.
A scenario analysis is used to capture diverse opinions, concerns and experience/expertise of key
managers and represent them in a business model. Scenario analysis is a useful tool in capturing
the qualitative and quantitative dimensions of operational risk. Risk maps allow the
representation of a wide variety of loss situations, and capture the details of the loss scenarios
envisioned by the managers surveyed. Risk maps of each business unit identify where operational
risk exposures exist, the severity of the associated risks, whether any controls are in place, and the
type of control: damage, preventive, or detective. Cause and effect relationships can be captured
with this methodology. The shortcoming of such a model, however, is in its subjectivity, which
creates a potential for recording data inconsistently and/or for biasing conclusions if one is not
careful.

At the beginning of this section we discussed the need to balance the qualitative and quantitative
tools. For example, control self-assessments require that business units are honest and forthright
about their major operational risks, which can often be embarrassing problems that they would
rather not discuss (let alone highlight for senior management!). To counterbalance this
shortcoming, business units should be required to not only ‘tell me’ but also to ‘show me’. This
can be accomplished through validation processes, such as:
x pre-established operational risk indicators that are monitored against goals and MAPs;
x periodic tests to ensure that actual losses and incidents (ex post) result from
operational risks that were being monitored through KRIs or at least discussed in
the control self-assessments (ex ante);
x comparisons between control self-assessments and independent assessments such as
internal audits, external audits, regulatory reviews, and customer surveys.

In fact, Section 404 of the Sarbanes-Oxley Act requiring management assessment of internal
controls for financial reporting, as well as auditor attestation, was designed to ensure such
validation.

III.C.2.5 Key Attributes of the ORM Framework


Today, ORM practitioners recognize the pitfalls of using only one approach to modelling
operational risk – either top-down or bottom-up – and that best practice ORM incorporates
elements of both approaches. We will now discuss the attributes of a unified ORM framework,
and then how these attributes can underpin a seven-factor economic capital model.

A unified ORM framework should satisfy two basic requirements. First, it should support both
the measurement and management of operational risks. Second, the ORM framework should
incorporate the interdependencies across credit, market and operational risks as part of an overall
ERM program. Based on these two requirements, the key attributes of a unified ORM
framework include the following:


(i) Integrating qualitative and quantitative tools. The nature of operational risk (i.e., the risk of loss due
to people, processes, systems and external events3) is complex and dynamic. As such, the
advantage of qualitative tools is that they can incorporate human experience and judgement in
order to capture risks that are subjective. For example, what are the operational risks associated
with a new product? On the other hand, the advantage of quantitative tools is that they provide
objective indicators that can be used to show aggregate losses, exposures, and trends against
established targets. A unified ORM framework should incorporate both advantages, as well as
integrate the institution’s various risk management and oversight activities (e.g., ORM, audit,
compliance, quality, insurance).

(ii) Providing early warnings and escalations. Operational risk cannot be managed effectively based
only on backward-looking indicators such as losses, error rates, and incidents. The ORM
framework should provide early warning indicators of emerging risk issues. A quantitative
example is that an increase in employee absenteeism may be an early warning for increasing
turnover and human errors. A qualitative example is competitive intelligence that indicates
significant investments in a new technology by a key competitor that, if successful, would render
the firm’s existing technology obsolete. An ORM framework should establish early warning
indicators, as well as effective escalation processes so that management can take the appropriate
actions. For example, a money management company established specific escalation processes
such that the higher the number of customers impacted by an incident, the higher the level of
management is notified. This ensures that ‘bad news travels up’ the organization and that the
appropriate level of management responds in a timely manner.

(iii) Influencing business activities. One of the most important attributes of an ORM framework is
that it influences business actions and decisions. Such influence can be asserted through: (1)
corporate policies with respect to guidelines for, and restrictions on, business activities, such as
acceptable versus unacceptable sales practices; (2) teamwork between the line units and ORM in
new business and product development processes; (3) risk response plans based on ORM
indicators and escalations; (4) adjustments in economic capital given operational risk performance
and risk mitigation strategies; and (5) positive and negative incentives to motivate appropriate
business behaviour. This attribute ensures that operational risks are managed on an ongoing
basis, and that specific consequences are in place to provide organizational reinforcements. An
excellent example of using positive incentives is when GE tied one-third of senior management
compensation to the achievement of quality management objectives as part of the company’s ‘six
sigma’ programme.

3 The definition of operational risk in this chapter includes business risk, which is notably absent in Pillar I of the Basel
II proposals.


(iv) Reflecting environmental changes. Just as credit risk and market risk frameworks reflect changes in
underlying default rates and market prices, an ORM framework should reflect changes in the
operational risk environment. For example, increases in industry-wide operational risk losses and
incidents may indicate an increase in systemic risk. A number of industry loss-event databases
are being developed that can provide this type of information. Other environmental changes
include new legal and regulatory requirements, such as those established by the Sarbanes-Oxley
Act, the Patriot Act and the Basel II proposals. A company that lacks the processes and systems
to comply with these new requirements is likely to face greater operational risk with respect to
regulatory scrutiny and legal penalties.

(v) Incorporating risk interdependencies. There are important interdependencies within and across risk
types. For example, credit risk is the primary concern for most banks, but inadequate loan
documentation (an operational risk) is likely to increase loss severity in the event of a borrower
default. An ERM programme should address such interdependencies in the design of early
warning indicators, the development of scenario analysis, and the implementation of risk
response plans. For example, financial institutions must simultaneously manage market risk and
operational risk during stressed market conditions (e.g., the Russian crisis during the autumn of
1998). Given that there is a high correlation between volatile prices (a key driver for market risk)
and transactional volumes (a key driver for operational risk) during stressed periods, financial
institutions should establish early warning indicators and risk response plans. Examples of early
warning indicators are shown in Figure III.C.2.3. If these indicators exceed a critical level,
management should implement pre-established contingency plans, such as reduction of trading
limits to reduce market risk exposures and activation of back-up sites to increase processing
capacity.


Figure III.C.2.3: Early warning indicators

Risk category – Early warning indicators

Credit risk:
• Borrower/counterparty stock price declines
• Widening of credit spreads in the debt and credit derivatives markets

Market risk:
• Increases in actual and implied price volatilities
• Breakdowns in historical price relationships and patterns

Business/operational risk:
• Spikes in business growth, profitability, and complexity/change
• High and undesirable turnover rates

Enterprise-wide risk:
• Increases in any risk concentrations and/or organizational powers
• Changes in intra- and inter-risk correlations

As we will discuss in the next section, these interdependencies should also affect the
determination of economic capital, and in ways that might not be obvious.

III.C.2.6 Integrated Economic Capital Model


Given the five attributes of a unified ORM framework discussed above, what factors should
determine economic capital for operational risk? Figure III.C.2.4 shows a seven-factor approach
to calculating operational risk capital. While this is conceptual, it can be adapted to a firm’s
specific business mix, size, and risk profile.


Figure III.C.2.4: Operational risk capital calculation

[Figure: a schematic of the operational risk capital model. Seven inputs feed the calculation:
1. Revenue multiplier
2. Operating margin (revenue volatility; fixed vs. variable expenses)
3. Internal indicators (losses/incidents; risk indicators; risk assessments; audit ratings)
4. External indicators (customers; regulators; event risk exposures)
5. Model risk (model reliance; back-test results)
6. Systemic risk (industry loss experience; banking and settlement failures)
7. Financial risk multiplier (credit risk capital and market risk capital), which captures the linkages between operational risk and financial risk]

Let us discuss each one of these factors in turn.

(i) Revenue multiplier. This is a top-down estimate of the amount of operational risk capital
required by a business or operating unit. Such an estimate can be derived from observing
analogues of publicly traded companies in the same or similar businesses, while adjusting for market
risk and credit risk. For example, Capital One may be a credit card company analogue, while
First Nationwide may be one for mortgage companies. Outsourcing firms such as IBM or EDS
may be analogues for internal IT functions. The central question is ‘if the business or operating
unit were a standalone business, how much capital would it need for operational risk capital?’
The revenue multiplier4 assumes an average operational risk profile, which can then be adjusted
upwards or downwards by the other factors below.

4 For certain businesses, a top-down proxy based on activity or volume might be more appropriate.

(ii) Operating margin. This factor incorporates the degree to which the firm's operating margin is
more or less volatile than average, and is often referred to as 'business risk'. A firm's inability to
generate sufficient revenue to cover expenses (net of unexpected credit and market risk losses) is
a major reason why it needs to hold operational risk capital. For example, business variables that
can increase the required operational risk capital include greater volatility in business volume,
weak power to set prices, and higher fixed versus variable expenses.

(iii) Internal indicators. This adjustment reflects the effectiveness of internal controls. A scorecard
should be developed for the internal quantitative and qualitative indicators, with individual
weightings, that would provide an overall adjustment to operational risk capital. Internal
indicators would include losses, incidents, risk metrics (e.g., error rates, unreconciled items), early
warnings, internal audit ratings, risk maps, etc. The economic impact of contingency plans and
insurance programmes should also be factored in. Each key indicator should also be associated
with specific goals and MAPs.

(iv) External indicators. As with internal indicators, a scorecard of external indicators should be
developed. External indicators would include customer satisfaction scores and complaints,
external audit comments, and regulatory exam findings. This scorecard would also track
exposures to external events, such as fires, earthquakes and acts of terrorism. Firms that rely on
external vendors should also incorporate vendor performance relative to service level agreements.
Goals and MAPs for external indicators should also be established.

(v) Model risk. This factor reflects the degree to which a firm relies on models, and the quality of
such models. The primary input is back-testing results against predetermined criteria. A firm
should include all models that drive management decisions and actions, such as pricing and
valuation models, scenario and simulation models, and risk management models. For firms that
do not rely on models, this may simply be one of the internal indicators.

(vi) Systemic risk. This factor adjusts for dramatic shocks in the business environment, such as
industry-wide losses and incidents, and banking and settlement failures. Systemic risk is
especially important for highly interconnected industries such as financial services and energy
services, where trading activities and counterparty exposures within the industry are significant.
Past examples include the Long-Term Capital Management collapse, Y2K readiness, and the
Enron bankruptcy. In each of these situations, companies were concerned not only about their
direct exposures, but also the exposures of their business partners and counterparties.

(vii) Financial risk multiplier. This factor is meant to capture the compounding effects between
operational, credit, and market risks. It is not portfolio diversification, which may lead to a
reduction in aggregate economic capital at the enterprise-wide level. In fact, it is a compounding
factor that many risk managers ignore. Regulators refer to this compounding factor as 'spillover
effects.’ Cumming and Hirtle (2001) argued that the confluence of variables including market
liquidity problems, lack of corporate limberness, and reputational and contagion effects, could
result in the aggregate risk of a firm exceeding the sum of its individual risks. The financial risk
multiplier is meant to capture such spillover effects. An argument can also be made that a variety
of operational risk exposures (e.g., rogue trader, inadequate loan documentation, unsavoury sales
practices) are compounded in a firm with significant market risk and credit risk exposures. After
all, a rogue trader can do much more damage at a bank than at a retail store.
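
To make the seven factors concrete, the sketch below expresses them as a simple multiplicative scorecard in Python. The handbook presents the model conceptually and does not prescribe a formula, so the functional form, the factor values and the business-unit figures used here are purely hypothetical.

# Illustrative only: a multiplicative reading of the seven-factor model.
# All names and numbers are hypothetical; the handbook does not prescribe a formula.

def operational_risk_capital(revenue, revenue_multiplier, operating_margin_adj,
                             internal_score, external_score, model_risk_adj,
                             systemic_adj, financial_risk_multiplier):
    """Start from a top-down revenue-based estimate (factor 1), then scale it by
    adjustment factors 2-7, each expressed as a multiplier around 1.0."""
    base = revenue * revenue_multiplier                       # factor 1: revenue multiplier
    adjustments = (operating_margin_adj * internal_score      # factors 2-3
                   * external_score * model_risk_adj          # factors 4-5
                   * systemic_adj * financial_risk_multiplier)  # factors 6-7
    return base * adjustments

# Hypothetical business unit: $250m revenue, 8% base multiplier, above-average
# business risk (+10%), strong internal controls (-15%), neutral external
# indicators, some model reliance (+5%), elevated systemic risk (+10%) and a
# financial-risk spillover loading (+20%).
orc = operational_risk_capital(250e6, 0.08, 1.10, 0.85, 1.00, 1.05, 1.10, 1.20)
print(f"Illustrative operational risk capital: ${orc:,.0f}")

In practice each adjustment factor would itself be derived from the scorecards, back-tests and loss-experience indicators described above, and calibrated to the firm's own risk profile.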

The practice of ORM has come a long way in the past several years. It still has a long way to go.
At the annual 2003 operational risk conference organized by the Risk Management Association,
Eric Rosengren of the Federal Reserve Bank of Boston said that only three of the 20 largest US
banks qualify for the 'advanced measurement approach' for operational risk under Basel II, which
is supposed to lead to reduced capital charges. However, the development of ORM is more than
a regulatory compliance issue. Early adopters of more sophisticated ORM have reported
significant business benefits, including improved customer service, greater operating efficiency
and reduced losses. To fully realize these benefits, it is clear that the further development of
ORM practices must integrate quantitative and qualitative tools.

III.C.2.7 Management Actions


Assessing and measuring operational risk is important, but pointless unless directed towards the
improved management of operational risk by enhancing internal controls and controlling key risk
factors. Simply stated, the goal of ORM is to help management to achieve their business
objectives. Once a measurement framework is in place, the next step is to implement a process
that identifies actions that will reduce operational losses. These actions include adding human
resources, increasing training and development, improving and/or automating processes,
changing organizational structure and incentives, adding internal controls (e.g., more frequent or
more extensive monitoring), and upgrading systems capabilities.

The key to effective operational risk mitigation is to establish a cross-functional rapid response
team that will address and resolve any emerging operational risk issues. At one business unit at
Fidelity Investments these teams were called ‘turbo teams’, and they responded immediately
when operational risk indicators fell below MAP and reported back to management on their
assessments and actions within a few days or weeks.

Finally, a mechanism for evaluating and prioritizing potential improvements must be created.
Cost–benefit analysis and readiness assessments are useful tools that should be included in the
evaluation process. For example, business executives often turn to IT, or more specifically
automation, as the answer to process improvements. However, the potential benefits must be
weighed against the total costs of the project (e.g., development, testing, training,
implementation and ongoing maintenance). More importantly, management must ensure that the
organization is ready to take advantage of the technology solutions. Automation of poorly
designed processes can result in significant operational risks in the future.

Some of the operational risk measurement approaches discussed above should naturally lead to
improved operational risk management at the business unit level. A business unit can monitor
and improve its operational risk levels by setting operational goals, exposure limits and MAPs on
key operational processes. For example, suppose a brokerage group on average processes a
million trades per day. This group may specify that its operational goal is that failed trades be less
than 50 per day, while its MAP is no more than 100 failed trades per day. Additionally,
the group may specify that no more than 40% of daily trades can be processed by one
operational centre in order to spread its reliance across multiple operational centres.
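
A minimal sketch of how such goals, MAPs and concentration limits might be monitored in code follows; the threshold values are those of the brokerage example above, while the centre names and daily figures are hypothetical.

# Sketch of monitoring the brokerage example: failed trades per day against a goal
# of 50 and a MAP of 100, plus a 40% single-centre concentration limit.
# The centre names and volumes below are hypothetical.

FAILED_TRADES_GOAL = 50
FAILED_TRADES_MAP = 100
MAX_CENTRE_SHARE = 0.40

def check_daily_operations(failed_trades, trades_by_centre):
    total = sum(trades_by_centre.values())
    alerts = []
    if failed_trades > FAILED_TRADES_MAP:
        alerts.append(f"MAP breached: {failed_trades} failed trades (MAP {FAILED_TRADES_MAP})")
    elif failed_trades > FAILED_TRADES_GOAL:
        alerts.append(f"Goal missed: {failed_trades} failed trades (goal {FAILED_TRADES_GOAL})")
    for centre, volume in trades_by_centre.items():
        share = volume / total
        if share > MAX_CENTRE_SHARE:
            alerts.append(f"Concentration limit breached: {centre} processed {share:.0%} of trades")
    return alerts

print(check_daily_operations(73, {"centre_A": 450_000, "centre_B": 320_000, "centre_C": 230_000}))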

The allocation of economic capital for operational risk, if it captures both performance and
behaviour effects, should motivate business units to improve their ORM in order to reduce their
capital charges. For example, a business may set up procedures through which employees may
respond immediately to operational problems and implement the controls necessary to monitor
and improve performance. A key requirement for risk mitigation is to understand the root causes
of operational risks, such as lack of training or inadequate systems, and then focus corrective
actions on these root causes. Business units that take the appropriate actions should receive a
‘credit’ in their economic capital charges.

One of the key objectives of any ORM programme is to ensure that ‘bad news travels up an
organization’. In other words, the quantification and modelling of operational risk should lead to
more timely communication and escalation of operational risk issues. This can be accomplished
through the various process models and quantitative tools discussed above. Additionally,
management should clearly communicate when they should be informed through a cascading set
of ‘escalation triggers’, which would lead to the appropriate decisions and actions on the part of
management. Escalation triggers can be defined in terms of the KRIs, such as the level of
operational losses, the number of errors, significant policy violations, and number of customers
impacted by an incident. These escalation triggers and procedures should be incorporated into
the company’s policies and procedures, as well as training programmes and on-line support tools,
so that all employees understand what is expected of them.
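
The following sketch illustrates a cascading set of escalation triggers keyed to the number of customers impacted, in the spirit of the money management example above. The tier thresholds and recipient titles are hypothetical, not prescribed by the handbook.

# Illustrative escalation logic for the 'bad news travels up' principle: the more
# customers affected by an incident, the more senior the notification.
# Thresholds and titles below are hypothetical.

ESCALATION_TIERS = [          # (minimum customers impacted, who must be notified)
    (10_000, "CEO and board risk committee"),
    (1_000, "Business unit head and chief risk officer"),
    (100, "Department manager"),
    (0, "Team supervisor"),
]

def escalation_level(customers_impacted):
    for threshold, recipient in ESCALATION_TIERS:
        if customers_impacted >= threshold:
            return recipient

print(escalation_level(250))     # Department manager
print(escalation_level(15_000))  # CEO and board risk committee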


Besides risk mitigation through operational processes and controls, and economic capital
incentives for ORM, there are other financial solutions that management may consider.
Companies can establish reserves to cover their expected operational losses. These reserves are
considered a form of self-insurance. Expected losses should be embedded in the pricing of a
product; indeed, market and credit risks are already incorporated into some transaction prices as a
matter of practice. Including an additional adjustment for operational risk makes for a more
comprehensive picture and allows for more accurate risk-adjusted pricing. For example, if a
business unit performs 10,000 transactions annually, with an expected loss of $80,000 a year due
to operational factors, then an adjustment of $8 per transaction could cover such losses.
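
The arithmetic of this pricing adjustment is simply the expected annual operational loss spread over the expected transaction volume, as the short calculation below shows (the figures are those of the example above).

# Per-transaction pricing adjustment from the example in the text.
expected_annual_loss = 80_000   # $ per year
annual_transactions = 10_000
adjustment_per_transaction = expected_annual_loss / annual_transactions
print(adjustment_per_transaction)   # 8.0, i.e. $8 per transaction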

Additionally, the cost of capital for operational risk (and other risks) should be incorporated into
the pricing of a transaction. Pricing can also be driven by the target levels of returns that the
company expects a product to achieve given competitive pricing and market share objectives.

III.C.2.8 Risk Transfer


For critical operational risk exposure, a company must decide if the best strategy is to implement
internal controls and/or execute risk transfer strategies. The two are not mutually exclusive and
are often complementary. For example, most companies implement workplace safety procedures
(an internal control) and purchase workers’ compensation insurance (a risk transfer strategy). In
fact, the former can reduce the cost of the latter. Another example is product liability. A
company can strengthen product development controls as well as purchase product liability
insurance.

Some risk transfer strategies are intended as ‘backstops’ to internal controls. For example,
directors’ and officers’ liability insurance provides protection against ‘wrongful acts’. In the past,
insurance managers would purchase such ‘backstop’ insurance policies based on the structure,
cost, and provider rating and service level. In the context of ERM and ORM, a company should:

x identify its operational risk exposures and quantify its probabilities, severities and economic
capital requirements;
x integrate its operational risk with its credit risk and market risk in order to assess its
enterprise-wide risk–return profile;
x establish operational risk limits (e.g., MAPs, economic capital concentration);
x implement internal controls and develop risk transfer and financing strategies;
x evaluate alternative providers and structures based on cost–benefit economics (i.e.,
comparing the cost of risk retention and risk transfer).


The economic capital framework discussed above is also a useful tool for evaluating the impact
of different risk transfer strategies. For example, in executing any risk transfer strategy the
economic benefits include lower expected losses and reduced loss volatility, while the economic
costs include insurance premiums, as well as higher counterparty credit exposures. In a sense, the
company is both ceding risk and ceding return, resulting in a ‘ceded RAROC’. By comparing the
ceded RAROCs of various risk transfer strategies, a company can compare different structures,
prices and counterparties on an apples-to-apples basis and select the optimal transaction(s).
Moreover, a risk transfer strategy with a ceded RAROC below the firm’s cost of equity would
add to shareholder value, and vice versa. Figure III.C.2.5 provides the framework for a ceded
RAROC analysis.

Figure III.C.2.5: Ceded RAROC analysis

[Figure: alternative risk transfer structures (derivatives, structured finance, insurance) are evaluated in a common cost/benefit framework:

Ceded RAROC = Δ Return / Δ Economic Capital

Δ Return: pay cashflows or insurance premium; include transaction and ongoing management costs; reduce the economic capital 'benefit'.
Δ Economic Capital: reduce the economic capital held for the risk; increase economic capital for counterparty exposure; increase operating risk economic capital.]

Let us apply the framework using the following example of purchasing a product liability
insurance policy at an annual premium of $100,000:

1. Annual insurance premium            $100,000
2. Annual management cost                20,000
3. Economic capital benefit5             60,000
4. Net reduction in economic capital6 2,000,000

5 The economic capital benefit represents a funding credit. It is used in matched-maturity funds transfer systems to recognize the interest income from the investment of capital funds. In our example, we assume a funding rate of 3%.
6 The net reduction in economic capital includes the gross reduction of economic capital for the risk exposure that is being insured, minus the increase in counterparty and operational risk capital.

In this example, the ceded RAROC is 9% [(100,000 + 20,000 + 60,000) / 2,000,000]. This
represents the effective cost of risk transfer, which can be compared to the effective costs of
alternative risk transfer strategies as well as the cost of risk retention. For instance, if the
company’s cost of economic capital were 10%, then this transaction would add to shareholder
value because the ceded RAROC (cost of risk transfer) is below the cost of risk retention.
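
The same calculation can be packaged so that alternative structures are compared on a like-for-like basis. In the sketch below, the first set of inputs reproduces the worked example above; the 'structured finance' alternative is hypothetical and included only to show the comparison.

# Ceded RAROC for the product liability insurance example in the text:
# (premium + management cost + forgone economic capital benefit) divided by the
# net reduction in economic capital. The second strategy is hypothetical.

def ceded_raroc(premium, management_cost, capital_benefit, net_capital_reduction):
    return (premium + management_cost + capital_benefit) / net_capital_reduction

example = ceded_raroc(100_000, 20_000, 60_000, 2_000_000)
print(f"Ceded RAROC: {example:.0%}")   # 9%, below a 10% cost of capital, so the transfer adds value

strategies = {
    "insurance_policy": (100_000, 20_000, 60_000, 2_000_000),
    "structured_finance": (0, 50_000, 90_000, 1_500_000),   # hypothetical alternative
}
for name, args in strategies.items():
    print(name, f"{ceded_raroc(*args):.1%}")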

III.C.2.9 IT Outsourcing
IT outsourcing is widely considered one of the major business imperatives in today’s business
world, often described as a ‘mega-trend’. The META Group estimates that IT outsourcing is a
US$150 billion market, compared to $200–220 billion for software and hardware. More
importantly, the growth of IT outsourcing is expected to outpace that of software and hardware
for the next few years at least. Let us discuss how one might establish an operational risk process
for IT outsourcing based on the processes and tools discussed earlier.

III.C.2.9.1 Stakeholder Objectives


Buyers of outsourcing should first establish an overall outsourcing strategy. This strategy would
identify the IT systems and/or applications that are outsource candidates, the approach to
evaluating alternative outsource providers and solutions, and the decision processes and cost–
benefit analyses that will result in specific outsourcing transactions. This strategy should also
discuss how the outsourcing strategy will support the overall business strategy of the company,
including the specific business and financial objectives that outsourcing is expected to achieve.
Buyers of outsourcing services often cite the following expected benefits:
x cost savings;
x access to IT skills and advanced technologies;
x quality of services;
x resource allocation to core activities;
x scalability and flexibility in IT resources;
x shorter time to market;
x enhanced e-business applications.

As a sign of more realistic expectations on the part of buyers, expected cost savings cited by
buyers have come down from 40–70% a few years ago to 10–20% today. However, even with
reduced expectations, outsourcing arrangements often fail to meet the buyers’ requirements. A
2004 IT outsourcing study by DiamondCluster7 noted that 21% of buyers said that they had
prematurely terminated an outsourcing agreement in the last 12 months. The most common
reasons cited for cancelling outsourcing arrangements include:

x provider having financial difficulties (credit risk);


x provider failure to deliver on commitments (operational risk);
x buyer consolidation of outsourcing vendors (business risk).

In addition to these risks, buyers must address a complex set of significant operational risks –
such as geopolitical risk, regulatory compliance, data integrity and security, and reputational risk.
While project delays and cost overruns are the most common reported outcomes from these
risks, it might be useful to review a couple of more specific examples. In the area of information
security, a woman from Pakistan recently obtained sensitive patient information from the
University of California, San Francisco, Medical Center through a medical transcription
subcontractor that she worked for, and threatened to post the files on the Internet unless she was
paid more money. On the political side, there is a significant backlash against ‘exporting jobs’
through outsourcing. In response to political pressure, the state of New Jersey has passed
legislation to ban IT and business process outsourcing by state government.

Perhaps one of the most critical risks in IT outsourcing arrangements is the appropriate
alignment of objectives between the parties. Given the long-term nature of outsourcing
contracts, buyers must seek out a provider that not only offers the optimal technologies and
services, but also possesses compatible business culture, professional standards, and business
objectives.

III.C.2.9.2 Key Processes


The key processes associated with IT outsourcing include:

(i) Evaluation and selection of outsourcing provider. The process includes documentation of
requirements in a request for proposal (RFP), sending out RFPs and evaluating the responses,
and negotiating the contract and service level agreement (SLA). This step can take six months to
a year, and cost 2–5% of the annual cost of the contract.

(ii) Transitioning to the outsourcing environment. The transition period is perhaps the most challenging
stage of an outsourcing initiative. Steps include bringing the provider company professionals on-
site for training and knowledge transfer, transferring or terminating existing employees, and
establishing the required infrastructure (hardware, software, communication protocols) and
operational processes. This step can take an additional three to twelve months, and cost 5–15%
of the annual contract.

7 DiamondCluster 2004 Global IT Outsourcing Study, available at www.diamondcluster.com.

(iii) Ongoing management of the outsourcing contract. On a day-to-day basis, the buyer must manage
work processes and communications, monitor provider performance, audit operations, perform
quality tests, and integrate the output with other in-house or outsourced IT operations. This
requires significant project management resources, and can cost 5–10% of the contract.
Additionally, the ongoing ‘costs of risk’ should be considered, such as incremental insurance
expense and economic capital costs.

Each of the above processes, especially ongoing management, should be fully documented in
policies and procedures and illustrated in process maps. As such, roles and responsibilities of
both parties are clearly established.

III.C.2.9.3 Performance Monitoring


As discussed earlier, performance and risk metrics should be developed as part of an ongoing
outsourcing review process. Performance goals and MAPs should be incorporated into SLAs
and a ‘scorecard’ should be developed to track actual performance against goals and MAPs. The
DiamondCluster study noted that 70% of buyers and 69% of providers monitor performance at
least monthly, with the following quantitative metrics being the most common:
x on-time delivery;
x cost effectiveness;
x end-user satisfaction;
x timely, quality staffing;
x service availability;
x time to process requests;
x defect rates;
x standards compliance;
x size of request backlog.

In addition to the above performance metrics, the buyer should monitor the provider’s key
business and risk metrics, such as market share, earnings, stock price performance, debt rating
and financial ratios. Performance and risk monitoring should not be viewed as a reporting
exercise, but as a proactive process to ensure overall project success, which may include some of
the risk mitigation strategies discussed below.

III.C.2.9.4 Risk Mitigation


One of the first and most important steps in mitigating outsourcing risk is to fully evaluate the
economic costs and benefits at the onset, including consideration of all critical risk factors. As
noted in a survey of 500 human resources executives by Hewitt Associates, 92% of the firms had
moved jobs overseas to cut costs. However, less than half of those companies studied the tax
environments of the offshore country and only 34% considered the expense of shutting down
US facilities.

Let us take a simple example to see how an outsourcing contract should be evaluated. Suppose a
US company is considering moving one of its application development projects offshore to
China, which provides a tax-adjusted 50% labour arbitrage on a $10 million contract. The
following table shows the cost–benefit analysis for both a best case and worst case:

($ Thousands)                        Best Case       Worst Case
1. Projected labour cost savings     $5000 (50%)     $5000 (50%)
2. Vendor selection                  –200 (2%)       –500 (5%)
3. Transition costs                  –500 (5%)       –1500 (15%)
4. Ongoing management                –500 (5%)       –1000 (10%)
5. Risk costs8                       –1000 (10%)     –1500 (15%)
Net savings:                         $2800 (28%)     $500 (5%)

8 Risk costs include incremental insurance expense for the outsourcing arrangement and the cost of incremental operational risk capital. The latter should include exit cost, which is a function of the probability of project failure and the cost of switching operations in-house or to another provider.

The above example shows that the adjusted net savings range from 28% in the best case to only
5% in the worst case. Such an analysis should be performed for different outsourcing strategies
and various providers. Finally, the projected range of net savings should then be considered
against less tangible risks such as reputational risks and geopolitical risks.
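
The table above reduces to a simple subtraction, which the following sketch reproduces so that the same calculation can be repeated for other strategies and providers (all figures are in $ thousands, as in the table).

# Reproduces the cost-benefit table above: projected labour savings less vendor
# selection, transition, ongoing management and risk costs (all in $ thousands).

def net_savings(labour_savings, vendor_selection, transition, ongoing_mgmt, risk_costs):
    return labour_savings - vendor_selection - transition - ongoing_mgmt - risk_costs

contract_value = 10_000  # $10m contract, in thousands
best = net_savings(5_000, 200, 500, 500, 1_000)
worst = net_savings(5_000, 500, 1_500, 1_000, 1_500)
print(f"Best case:  ${best}k ({best / contract_value:.0%})")    # $2800k (28%)
print(f"Worst case: ${worst}k ({worst / contract_value:.0%})")  # $500k (5%)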

In addition to performing a full cost–benefit analysis of outsourcing opportunities, other risk


mitigation strategies include:
x Developing a hybrid outsourcing strategy. Outsourcing experts suggest that the optimal blend of
in-house and outsourced IT resources is generally 20–30% in-house and 70–80%
outsourced. At a significant scale, even the outsourced component should be diversified
across different providers, countries and locations. Other considerations include the mix
between ‘nearshore’ operations (e.g., Canada, Mexico, in the case of a US firm) and offshore
operations (e.g., India, China), as well as the possibility of setting up captive outsourcing
companies.

x Taking an incremental outsourcing approach. Early outsourcing contracts were large and very
ambitious arrangements that often failed to live up to the buyer and/or provider
expectations. For buyers new to outsourcing or experienced buyers working with a new
provider, a more practical approach is to outsource incrementally to ensure a compatible


relationship in terms of expectations, corporate cultures, and technology and service
requirements. Once the initial work is performed at a satisfactory level, then the
outsourcing relationship can be expanded over time.

x Negotiating a flexible win–win contract. It is critical that the outsourcing contract is attractive to
both parties. For example, an arrangement that is overly favourable to the buyer might not
get the appropriate level of service and attention of the provider. However, the provider
should have ‘skin in the game’ to provide the right incentives for ongoing performance. For
example, the META Group estimates that by 2006, 35% of outsourcing contracts will adopt
output-based pricing instead of time and materials (input-based pricing). Also, given the
rapid changes in technologies, customer preferences and business requirements, contract
terms should incorporate sufficient flexibility so that they do not become obsolete
prematurely.

x Establishing exit strategies and contingency plans. Companies should establish exit strategies and
contingency plans in the event that the outsourcing contract expires, or the provider does
not deliver as expected, or business conditions require the termination of the contract.
These exit strategies and contingency plans should be developed in the early stages and with
the participation of the provider(s), because their cooperation will be needed to execute such
plans. Moreover, in the post September 11 world, contingency plans for disaster recovery
should be fully developed and tested by the provider and/or buyer.

x Developing a compelling stakeholder communication strategy. Outsourcing should continue to be a


sensitive political issue for the foreseeable future. Forrester Research estimates that over the
next 12 years, 3.3 million US jobs, accounting for $100 billion in wages, will move offshore.
Such numbers will likely fuel the current backlash. To minimize reputational risks,
companies should develop a well thought-out communication strategy with respect to their
outsourcing initiatives. Besides communicating to external groups such as customers,
unions, and governmental entities, internal communication is important given the potential
for disgruntled employees to undermine outsourcing initiatives.

To develop, coordinate and implement project management controls and the above risk
mitigation strategies, companies are putting in place centralized programme management offices
(PMOs) as governance structures. These PMOs take a portfolio approach in allocating resources
and monitoring vendor performance to ensure optimal performance. The PMOs also represent a
centre of excellence for project management skills and resources, including sourcing and
managing external resources such as consultants, tax experts and lawyers. Regardless of whether
a PMO is established, companies involved in, or planning to initiate, outsourcing arrangements
should establish the appropriate operational risk controls discussed in the chapter. Otherwise,
given the strategic importance of IT and outsourcing, the ‘next best thing’ might very well
become the company’s worst nightmare.

References
Cumming, C M, and Hirtle, B J (2001) The challenges of risk management in diversified financial
companies. Federal Reserve Bank of New York Economic Policy Review, March.
Hernandez, J V, Sanchez, L M, and Ceske, R (2000) Quantifying event risk: the next
convergence. Journal of Risk Finance, 1(3), pp. 9–23.

Lam, J (2003a) A unified management and capital framework for operational risk. RMA Journal,
Feb., pp. 26–29.

Lam, J (2003b) Enterprise Risk Management – from Incentives to Controls. Hoboken, NJ: Wiley.


III.C.3 Operational Value-at-Risk


Carol Alexander1

Many firms may wish to apply an ‘advanced measurement approach’ (AMA) to assess their
capital to cover operational risk. Under the new Basel Accord that comes into force at the end of
2006, banks will at first apply a ‘top-down’ method (either the ‘basic indicator’ or ‘standardised’
approach) to assess their operational risk regulatory capital. However, by the end of 2007 they
will be able to apply an AMA – and hopefully reduce their regulatory capital charge – provided
they meet certain qualitative and quantitative criteria. Rating agencies are another driving force
behind the implementation of AMA. Banks and corporates that aim for a high credit rating
require an accurate assessment of their operational risks to convince rating agencies that
capitalization is adequate. The ‘top-down’ methods provide only a crude estimate of operational
risk, based on the unrealistic assumption that operational risks increase proportionally with gross
income (see Section III.C.2.4.1).

Quantitative risk management requires an understanding of the ‘value-at-risk’ (VaR) models that
are used to assess market, credit and operational risk capital. Operational VaR modelling is the
subject of this chapter: Section III.C.3.1 outlines the ‘loss model’ approach to computing
operational risk capital (ORC). Sections III.C.3.2 and III.C.3.3 examine how to apply some
standard functional forms for the frequency distribution and severity distribution. Sections
III.C.3.4 and III.C.3.5 describe how each component of ORC is estimated using (a) analytic and
(b) simulation methods, and then Section III.C.3.6 explains how the component ORC estimates
are aggregated over all business lines and event types to obtain the total ORC estimate for the
firm. Section III.C.3.7 concludes.

III.C.3.1 The ‘Loss Model’ Approach


The actuarial ‘loss model’ approach has recently become accepted by the industry as the generic
AMA for the determination of operational risk regulatory capital for the new Basel 2 Accord (see
Sections III.0.3 and III.C.1.7 for further details). Consequently, the loss model approach may also
be favoured by rating agencies for firms of high credit quality. But even without this external
pressure, many firms will want to adopt operational loss models as a key element of good risk
management practice.

1 Chair of Risk Management and Director of Research, ISMA Centre, Business School, University of Reading, UK.


Prior to implementing an AMA the firm must identify events that are linked to operational risks,
and map these events to an operational risk ‘matrix’ such as that based on the Basel 2
consultative documents (see Basel Committee on Banking Supervision, 2001) and which is shown
in Table III.C.3.1. Each element in the matrix defines an operational risk ‘type’ by its business
line and operational event category. In the AMA, operational risk capital is first assessed
separately for each risk type for which the AMA is the designated approach.2 Then the
component estimates are aggregated to obtain the total AMA operational risk capital for the firm.
The AMA could be chosen for only the most important risk types and this depends on the nature
of the business: that is, the definition of the important event types and business lines will be
specific to the firm’s operations. For example, operational losses for clearing and settlements
firms may be concentrated in processing risks and systems risks.

Table III.C.3.1: The operational risk matrix


Event types (columns): Internal Fraud; External Fraud; Employment Practices & Workplace Safety; Clients, Products & Business Practices; Damage to Physical Assets; Business Disruption & System Failures; Execution, Delivery & Process Management.

Business lines (rows): Corporate Finance; Trading & Sales; Retail Banking; Commercial Banking; Payment & Settlement; Agency & Custody; Asset Management; Retail Brokerage.

The definition of business units and event types for the operational risk matrix can be specific to
the firm. It will be natural to follow pre-existing internal definitions of business units and to
define event types that capture the important operational risks. It should also take account of the
granularity that is required for the AMA calculations. Increasing levels of granularity are
necessary to include the impact of insurance cover, which may only be available for some event
types in certain lines of business.

2 Corporates are, of course, free to pick and choose which risk types they assess using AMA. Indeed, banks are also
afforded some flexibility: under the new Basel Accord they can choose the AMA for some risk types and apply the
standardized approach to others. However, once a risk type has been chosen for AMA modelling, the bank will not be
allowed in future to apply a more basic risk capital assessment method.

It is desirable to isolate those elements of the matrix that are likely to be dominant in the final
aggregation, as these risk types should be the main priority for risk control. Unfortunately, these
can be precisely those risks for which the data are very subjective, consisting of expert opinions
or risk self-assessments that have a large element of uncertainty. Somehow, this uncertainty in
the data must be included in the risk model. Qualitative judgements must be translated into
quantitative assessments of risk capital using appropriate statistical methodologies. Many firms
now aim to do this through a ‘risk self-assessment’ process. A risk self-assessment gives a
forward-looking, subjective estimate of the loss model parameters. Risk self-assessment programs
can be facilitated in the same way as control self-assessments (see Section III.C.1.9).

Operational risks may be categorised in terms of:


x frequency, the number of loss events during a certain time period; and
x severity, the impact of the event in terms of financial loss.

Risks with very low frequency and high severity, such as a massive fraud or a terrorist attack,
could jeopardise the whole future of the firm. These are the risks associated with losses that will
lie in the very upper tail of the total loss distribution. Risk capital is not really designed to cover
these risks. However, they might be insurable. Risks with high frequency and low severity, which
include credit card fraud and processing risks, can have a high expected loss but will have relatively
low unexpected loss. That is, the range of loss outcomes is relatively narrow. If expected losses for
the high-frequency, low-severity risks are covered by the general provisions of the business, the
implication is that ORC requirements for these risk types will be relatively low. If this is not the
case, then expected losses should be included in the risk capital. Unless expected losses are very
high, the risk capital will still be lower than that for medium-frequency, medium-severity risks.
These latter risks are the legal risks, the minor frauds, the fines from improper practices, the large
system failures and so forth. In general, these should be the main focus of the AMA.

An example of loss data is shown in Table III.C.3.2. For simplicity and only for the purposes of
this illustration, the exact date of the loss is not actually recorded, only the quarter into which it
falls. This allows one to classify data by quarterly frequency – or by semi-annual or annual
frequency – but not by monthly or lower frequencies. Suppose we choose the quarterly period, so
the frequency distribution will be of the number of loss events per quarter. First we ignore the
severity data, and consider only the dates of the loss events. There were no quarters in which no
loss events occurred, no quarters in which 1 loss event occurred, but there was one quarter in
which 2 loss events occurred (2001:Q3) and two quarters in which 3 loss events occurred
(2000:Q1 and 2000:Q3). Continuing to count in this way, we can draw an empirical frequency
density. Secondly, returning to the example loss data, we now ignore the date of loss, consider
only the loss amounts and hence construct the empirical severity density.

Table III.C.3.2: Example of historical loss experience data

Date Loss (£000) Date Loss (£000) Date Loss (£000)


2000:Q1 4.45 2001:Q1 7.51 2002:Q1 1.12
2000:Q1 13.08 2001:Q1 1.17 2002:Q1 4.06
2000:Q1 29.38 2001:Q1 1.35 2002:Q1 34.55
2000:Q2 25.92 2001:Q1 105.45 2002:Q1 10.24
2000:Q2 39.10 2001:Q1 37.24 2002:Q1 24.17
2000:Q2 12.92 2001:Q1 16.55 2002:Q1 11.01
2000:Q2 1.24 2001:Q2 7.34 2002:Q1 3.89
2000:Q3 8.01 2001:Q2 1.35 2002:Q1 187.50
2000:Q3 12.17 2001:Q2 1.50 2002:Q1 13.21
2000:Q3 13.88 2001:Q2 1.19 2002:Q1 4.49
2000:Q4 53.37 2001:Q2 2.80 2002:Q2 2.10
2000:Q4 5.89 2001:Q3 3.00 2002:Q2 2.20
2000:Q4 1.32 2001:Q3 6.82 2002:Q2 2.31
2000:Q4 7.11 2001:Q4 1.73 2002:Q2 25.00
2001:Q4 231.65 2002:Q2 3.81
2001:Q4 5.00 2002:Q3 1.48
2001:Q4 3.10 2002:Q3 1.57
2001:Q4 26.45 2002:Q3 20.33
2001:Q4 12.62 2002:Q3 43.78
2001:Q4 2.32 2002:Q3 3.62
2001:Q4 71.12 2002:Q3 45.72
2001:Q4 1.73 2002:Q4 142.59
2002:Q4 20.73
2002:Q4 31.96
2002:Q4 55.60
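
As an illustration of how such a record is turned into the empirical densities discussed above, the sketch below tabulates loss events per quarter and collects the loss amounts. Only the first few records of Table III.C.3.2 are listed; the remainder would be loaded in the same way, and the variable names are of course arbitrary.

# Sketch of tabulating Table III.C.3.2 into an empirical frequency density (loss
# events per quarter) and a list of severities. Only the first records are shown.
from collections import Counter

losses = [  # (quarter, loss in £000) -- first records of Table III.C.3.2
    ("2000:Q1", 4.45), ("2000:Q1", 13.08), ("2000:Q1", 29.38),
    ("2000:Q2", 25.92), ("2000:Q2", 39.10), ("2000:Q2", 12.92), ("2000:Q2", 1.24),
    ("2001:Q3", 3.00), ("2001:Q3", 6.82),
    # ... remaining records loaded the same way
]

events_per_quarter = Counter(q for q, _ in losses)         # e.g. {"2000:Q1": 3, ...}
frequency_density = Counter(events_per_quarter.values())   # how many quarters had n events
severities = [amount for _, amount in losses]              # inputs to the severity density

print(events_per_quarter)
print(frequency_density)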

For some real loss experience data based on a record going back over 24 months of operational
loss events, Figure III.C.3.1 illustrates the empirical frequency density (the number of loss events
per month). It shows that, out of the 24 months, there was only one month in which less than
10 loss events occurred; there were five months for which between 10 and 19 loss events
occurred; three months for which between 20 and 29 loss events occurred, and so forth.

Figure III.C.3.1: Example of (monthly) frequency distribution

[Histogram: number of months (vertical axis) in which a given number of loss events occurred (horizontal axis, in bands of 10 loss events from 1 to 140).]


Figure III.C.3.2 illustrates the empirical severity density obtained from the same data. Only losses
in excess of €1000 are recorded, so the severity data are truncated at the lower end. Special
methods need to be applied to detruncate the severity data so that the full severity density can be
recovered.

Figure III.C.3.2: Example of loss severity distribution

[Density plot: probability (vertical axis) against loss given event (horizontal axis, from 0 to 120).]

In summary, for a given operational risk type we construct a discrete frequency density h(n) of the
number of loss events n per period, and a continuous density g(l ) representing the loss severity,
L. The density function for the total loss distribution f(x) is then given by compounding these two
densities, usually under the assumption that loss frequency and loss severity are independent.
More details of the method for compounding these densities will be given in Section III.C.3.5
below.

Figure III.C.3.3 gives a diagrammatic representation of the relationship between the total loss
distribution and its underlying frequency and severity distributions. The basic time period for the
frequency density here is one year, and in this case the total loss distribution is also called the
‘annual’ loss distribution. This is a convenient terminology because later we need to aggregate
several ‘total’ loss distributions over different risk types into a ‘total total’ loss distribution.
However, by referring to each component as an ‘annual’ loss distribution (or a ‘quarterly’ loss
distribution if the frequency is measured at quarterly intervals, or ‘monthly’ and so on) there is no
confusion about what is meant by the ‘total’ loss distribution.


Figure III.C.3.3: Representation of the loss model

[Diagram: the frequency distribution (number of loss events per period) and the severity distribution (loss given event) are compounded into the total loss distribution, on which the expected loss, the 99.9th percentile loss and the unexpected loss are marked.]

Marked on the density function for the total loss distribution are the
x expected loss, that is, the mean of this distribution; and the
x unexpected loss at the 99.9th percentile; in general, the unexpected loss at the αth percentile is the
difference between the upper αth percentile and the mean of the annual loss distribution.

ORC is held to cover all losses, other than highly exceptional losses, that are not already covered
by the normal cost of the business. The definition of what one means by ‘highly exceptional’
translates into the definition of a percentile of the loss distribution, such that only losses
exceeding this amount are ‘highly exceptional’. One should attempt to control these, using
scenario analysis. As already mentioned, often expected losses are already included in the normal
cost of business. For example, the expected loss from credit card fraud may be included in the
balance sheet under ‘operating costs’. In that case ORC should cover only the unexpected loss at
some predefined percentile of the loss distribution.

The ORC is thus defined using the VaR metric, for some risk horizon (defined below) and at
some percentile. The Basel Committee recommend 99.9% and a one-year horizon for the
calculation of ORC in banks; internally, for companies wishing to maintain a high credit rating, it
is common to use an even higher percentile that is consistent with their desired credit rating. For
instance, a firm that targets a AA rating will typically be measuring economic capital at the
99.97th percentile.
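
As a preview of the simulation approach described in Section III.C.3.5, the sketch below compounds an assumed Poisson frequency and lognormal severity by Monte Carlo simulation and reads off the expected loss and the 99.9th percentile. The distributional choices and parameter values are purely illustrative.

# Minimal Monte Carlo sketch of compounding a frequency and a severity distribution
# into an annual loss distribution, then reading off the expected loss and the
# 99.9th percentile unexpected loss. Poisson frequency and lognormal severity are
# assumed for illustration; all parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=42)
n_years = 100_000          # number of simulated years
lam = 25                   # expected number of loss events per year
mu, sigma = 9.0, 1.2       # lognormal severity parameters (log scale)

annual_losses = np.array([
    rng.lognormal(mu, sigma, size=n).sum() if n > 0 else 0.0
    for n in rng.poisson(lam, size=n_years)
])

expected_loss = annual_losses.mean()
percentile_999 = np.percentile(annual_losses, 99.9)
unexpected_loss = percentile_999 - expected_loss   # ORC if expected losses are provisioned
print(f"Expected loss: {expected_loss:,.0f}")
print(f"99.9th percentile: {percentile_999:,.0f}")
print(f"Unexpected loss (ORC): {unexpected_loss:,.0f}")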


III.C.3.2 The Frequency Distribution


The total operational loss refers to a fixed time period over which these events are to be
observed. This time period is usually called the risk horizon of the loss model to emphasise that it
is a forward-looking time interval starting from today. For regulatory purposes, both operational
and credit risk horizons are set at one year, but for internal purposes it is also common to use risk
horizons of less than or more than one year. For ease of exposition we shall henceforth only
refer to the one-year horizon – hence the AMA aims to model the annual loss distribution.

Having defined a risk horizon, the probability of a loss event, which has no time dimension, can
be translated into the loss frequency, that is, the number of loss events occurring during the risk
horizon. In particular, the expected loss frequency, denoted lambda (λ), is the product of the
expected total number of events, N, during the risk horizon, including events for which no
operational loss was made, and the expected loss probability, p:

$$\lambda = Np. \qquad \text{(III.C.3.1)}$$

Sometimes it is convenient to forecast λ directly – this is the case when we cannot quantify the
total number of events N and we only observe loss events – and in other cases it is best to
forecast N and p separately – for example, N could be the target number of transactions over the
next year, and in that case it is p, not λ, that we should attempt to forecast using loss experience
and/or risk self-assessment data.

Loss frequency (the number of loss events occurring during the risk horizon) is a discrete
random variable: it can only take the values 0, 1, 2, … , N. A fundamental density for such a
discrete random variable (which nevertheless is only appropriate under certain assumptions)3 is
the well-known binomial density (see Section II.E.4.1),

h(n) = \binom{N}{n} p^{n} (1-p)^{N-n},    n = 0, 1, … , N. (III.C.3.2)

The binomial frequency is only used when a value for N can be specified. For example,
• for Trading & Sales/Client, Products and Business Practice Risk, N could be the target
number of deals over the next year;
• for Retail Banking/External Fraud, N could be the total number of credit cards in
issuance during the risk horizon.4

3 We must assume that the probability of a loss event is the same for all the events in this risk type, and therefore equal
to p; and that operational events are independent of each other.
4 Note that here N would be so large that the Poisson distribution can be used instead of the binomial distribution.
However, firms may still wish to employ the binomial distribution when, for instance, their target for N over the next
year is quite different from the historical value of N.

It is not always possible to specify N. However, since p is normally small the binomial
distribution can often be well approximated by the Poisson distribution, which has the single
parameter λ, the expected frequency, as in (III.C.3.1) above. Incidentally, λ is also equal to the
variance of the Poisson distribution. The Poisson distribution has the density function (see
Section II.E.4.2)

h(n) = \frac{\lambda^{n} \exp(-\lambda)}{n!},    n = 0, 1, 2, … . (III.C.3.3)

If the empirical frequency density is not well modelled by a Poisson distribution – for example,
one could equate the mean frequency observed empirically with λ, but then find that the sample
variance is significantly different from λ – an alternative, more flexible functional form is the
negative binomial distribution, with density function

h(n) = \binom{\alpha + n - 1}{n} \left( \frac{1}{1+\beta} \right)^{\alpha} \left( \frac{\beta}{1+\beta} \right)^{n},    n = 0, 1, 2, … , (III.C.3.4)

which has mean αβ and variance αβ(1 + β).

There is absolutely no point in applying a statistical test to decide which of the frequency
distributions provides the closest fit to loss data: the binomial is only applicable when a ‘number
of events’ can be quantified (and is small) and the negative binomial has two parameters so it will
always fit better than the Poisson. However, this does not imply that one should always choose
the negative binomial as the frequency functional form. In contrast to market and credit risk, for
operational risk the precise fitting of data by choosing the best functional form is not a main
source of model risk. In fact the model risk arising from inappropriate and/or ad hoc methods
when handling the data is a much more important source of operational VaR model risk.5

The choice of functional form for the frequency distribution should depend on both the type of
data and the source(s) of the data – internal and/or external loss data and/or risk self-
assessments. It is very difficult to design psychologically meaningful questions in a risk self-
assessment that are compatible with a negative binomial frequency. On the other hand, the
question ‘what is the expected number of loss events next year?’ will directly invoke the
parameter ƫ for the Poisson distribution. Also, it is difficult to apply the binomial distribution to
external consortium data because the consortium will not normally be recording a value of N for
each bank. These restrictions imply that it is often the Poisson distribution that is used in
practice.

5 But of course, in common with market and credit risk, the main model risk is ‘aggregation risk’. This stems from
making inappropriate assumptions regarding dependencies when aggregating risks. This has by far the most influence
on the total VaR estimate. See, for instance, Chapter III.A.3.

Example III.C.3.1: Estimating a Poisson frequency from historical data


(i) High-frequency risk type. Suppose historical loss events give the following data on just the
number of loss events recorded each month, over the last 2 years:

Month 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
Number of Loss Events 20 13 24 26 25 21 17 13 21 30 16 24 31 20 19 21 14 14 15 18 16 21 22 19

The total number of loss events recorded was 480, so the average number of loss events per
month is 20. The monthly frequency distribution is therefore estimated as a Poisson distribution
with λ = 20. The density function is shown in red in Figure III.C.3.4.

(ii) Low(er)-frequency risk type. Suppose historical loss events give the following data on just the
number of loss events recorded each month, over the last 2 years:

Month 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
Number of Loss Events 0 12 3 0 8 4 10 1 0 9 2 10 3 5 3 5 1 7 4 7 10 5 7 4

The total number of loss events recorded is now only 120, so the average number of loss events
per month is 5. The monthly frequency distribution is therefore estimated as a Poisson
distribution with λ = 5. The density function is shown in blue in Figure III.C.3.4.

Figure III.C.3.4: Poisson frequency densities for high-frequency and low-frequency risks

[Chart: Poisson probability densities for λ = 20 and λ = 5; horizontal axis: number of loss events per month, vertical axis: probability.]


Notes:
1. The Poisson annual frequency density is obtained by multiplying the estimated lambda
from monthly data (20 and 5 in the above example) by 12.
2. Lower-frequency risks have more skewed and leptokurtic frequency densities than high-
frequency risks. This property influences the compound distribution, so that the annual
loss distribution will also be highly skewed and leptokurtic for the low-frequency, high-
severity risks.
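As a minimal illustration of Example III.C.3.1 and note 1 above, the following Python sketch (assuming numpy and scipy are available) estimates the monthly λ from the case (i) loss counts, scales it to an annual frequency, and evaluates the Poisson frequency density:

```python
import numpy as np
from scipy.stats import poisson

# monthly loss-event counts from Example III.C.3.1, case (i)
monthly_counts = np.array([20, 13, 24, 26, 25, 21, 17, 13, 21, 30, 16, 24,
                           31, 20, 19, 21, 14, 14, 15, 18, 16, 21, 22, 19])

lam_monthly = monthly_counts.mean()   # 480 / 24 = 20 loss events per month
lam_annual = 12 * lam_monthly         # Poisson parameter of the annual frequency (note 1)

print(lam_monthly, lam_annual)
print(poisson.pmf(np.arange(15, 26), lam_monthly))  # monthly frequency density around the mean
```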

III.C.3.3 The Severity Distribution


Now consider how to ‘fit’ a severity distribution, such as that shown in Figure III.C.3.2. Various
functional forms are available for continuous random variables like severity, as described in
Chapter II.E. The lognormal distribution for loss severity, L, has the density function:

g(l) = \frac{1}{\sqrt{2\pi}\,\sigma l} \exp\left( -\frac{1}{2} \left( \frac{\ln l - \mu}{\sigma} \right)^{2} \right),    l > 0. (III.C.3.5)

Here the log of the loss severity (or log severity for short, denoted ln L) is normally distributed with
mean µ and standard deviation σ. This is a very common distribution in financial mathematics, and more
details are given in Section II.E.4.5.
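As a minimal sketch (the loss figures below are hypothetical, and numpy/scipy are assumed to be available), the lognormal severity parameters µ and σ in (III.C.3.5) can be estimated from the mean and standard deviation of the log losses:

```python
import numpy as np
from scipy import stats

# hypothetical internal loss severities (EUR) for one risk type
losses = np.array([1200., 450., 8300., 2100., 950., 17600., 640., 3900.])

log_losses = np.log(losses)
mu_hat = log_losses.mean()            # MLE of mu, the mean of the log severity
sigma_hat = log_losses.std(ddof=0)    # MLE of sigma, the std. dev. of the log severity

# scipy parameterisation of the same lognormal: s = sigma, scale = exp(mu)
severity = stats.lognorm(s=sigma_hat, scale=np.exp(mu_hat))
print(mu_hat, sigma_hat, severity.mean(), severity.ppf(0.999))
```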

High-frequency risks can have severity distributions that are reasonably close to lognormal, but low-
frequency risks can have severity distributions that are too skewed and leptokurtic to be well
captured by the lognormal density function. Common choices, therefore, are the gamma density,

g(l) = \frac{l^{\alpha - 1} \exp(-l/\beta)}{\beta^{\alpha}\, \Gamma(\alpha)},    l > 0, (III.C.3.6)

where Γ(·) denotes the gamma function, and the two-parameter hyperbolic density,

g(l) = \frac{\exp\left( -\alpha \sqrt{\beta^{2} + l^{2}} \right)}{2\beta\, B(\alpha\beta)},    l > 0, (III.C.3.7)

where B(.) denotes the Bessel function of the first kind. On first sight these may look daunting,
but Excel does have in-built functions for these common distributions (with the obvious names).
Other functional forms for the severity that have been considered by some banks include the
generalised hyperbolic, lognormal mixtures, and general mixture distributions.


Again, there is absolutely no point in applying a statistical test to decide which of these severity
distributions provides the closest fit to loss data: the generalised four-parameter hyperbolic
distribution will always fit best. However, this does not imply that one should always choose the
generalised hyperbolic distribution as the severity functional form. Again, this choice
depends on both the type of data and the source(s) of the data – internal and/or external loss
data and/or risk self-assessments. For example, it is very difficult to design a risk self-assessment
that is compatible with any severity density having more than two parameters.

In some databases, for instance those constructed from public, ‘newsworthy’ events, only very
extreme losses are recorded. There is an implicit high threshold for the losses included in the
database and thus some analysts have attempted to model the severity of these losses using the
generalised Pareto and other distributions from the class of distributions in ‘extreme-value
theory’ (EVT). Whilst EVT may have found useful applications to high-frequency, tick-by-tick
financial market data, one should not forget that it was introduced (almost 50 years ago) to model
the distributions of extreme values in repetitive, independent and identically distributed
processes, such as those observed in the physical sciences; see, for instance, Gumbel (1958) and
Embrechts et al. (1991). To attempt to fit a generalised Pareto distribution, or any other extreme-
value distribution, to the sparse and fragmented data that are available for very large operational
losses is, in my view, a triumph of hope over reason. Nevertheless, some AMAs are currently
attempting to incorporate extreme-value densities, and readers should therefore have some
understanding of them.

The ‘peaks-over-threshold’ (POT) model applies when losses over a high and predefined
threshold u are recorded. The distribution function Gu of the excess losses, X – u, has a simple
relation to the distribution F(x) of the loss severity X. In fact
Gu(y) = prob(X – u < y | X > u) = [F(y + u) – F(u)] / [1 – F(u)]. (III.C.3.8)

For many choices of underlying distribution F(x) the distribution Gu(y) will belong to the class of
generalised Pareto distributions (GPDs) given by:

Gu(y) = 1 – exp(–y/β)              if ξ = 0,
                                                        (III.C.3.9)
Gu(y) = 1 – (1 + ξy/β)^(–1/ξ)      if ξ ≠ 0.

The parameters β and ξ will depend on the type of underlying loss distribution F(x) and on the
choice of threshold u. Some generalised Pareto densities for different values of β and ξ are shown
in Figures III.C.3.5 and III.C.3.6. Note that the primary effect of increasing β is to increase the
range of the density (Figure III.C.3.5), and that ξ is called the ‘tail index’ precisely because as ξ
increases so does the weight in the tails of the GPD (see Figure III.C.3.6).

Figure III.C.3.5: Generalised Pareto densities (ξ = 1)

[Chart: generalised Pareto densities with ξ = 1 for β = 1, 2 and 5, over losses from 0 to 20.]

Figure III.C.3.6: Generalised Pareto densities (β = 1)

[Chart: generalised Pareto densities with β = 1 for ξ = 0 and ξ = 0.9, over losses from 0 to 20.]
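For readers who wish to experiment with (III.C.3.9), the sketch below evaluates generalised Pareto distribution functions using scipy (whose genpareto shape parameter c plays the role of the tail index ξ and whose scale plays the role of β); the parameter values are simply chosen to be close to those in the figures:

```python
import numpy as np
from scipy.stats import genpareto, expon

y = np.linspace(0.0, 20.0, 5)          # excess losses over the threshold u
beta, xi = 1.0, 0.9

# xi != 0: scipy's genpareto matches 1 - (1 + xi*y/beta)^(-1/xi)
print(genpareto.cdf(y, c=xi, scale=beta))
print(1.0 - (1.0 + xi * y / beta) ** (-1.0 / xi))   # the same formula written out

# xi = 0: the GPD reduces to an exponential distribution with scale beta
print(expon.cdf(y, scale=beta))
```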


III.C.3.4 The Internal Measurement Approach


Under certain assumptions (which are rather strong, but nevertheless appear to be admissible by
regulators), there are simple analytic formulae for the expected loss and the unexpected loss in
the annual loss distribution. These formulae are based on what the Basel Committee has called
the ‘internal measurement approach’ (IMA). The basic formula for the IMA risk capital
calculation given in the proposed Basel 2 Accord is:
ORC = gamma × expected annual loss = γ × NpL, (III.C.3.10)

where N is a volume indicator (a proxy for the number of operational events), p is the expected
probability of a loss event, L is the loss given event, and γ, ‘gamma’, is a multiplier that depends
on the operational risk type.

Note that NpL only corresponds to the expected annual loss when the loss frequency is
binomially distributed and the loss severity is not regarded as a random variable. More generally,
with a Poisson frequency distribution having expected frequency λ (= Np):
ORC = γ × λ × L. (III.C.3.11)

Note that a very strong assumption of the IMA is that each time a loss is incurred, exactly the same
amount is lost (within a given risk type). Introduction of severity uncertainty, as in the ‘loss
distribution approach’ (LDA) described below, will always increase the ORC, often by a factor of
5 or more. Thus the IMA provides only a useful benchmark, a lower bound for the operational
risk capital calculated using the full simulation method that we shall describe presently.

I have shown elsewhere (Alexander, 2003, p. 151) how the value of γ in (III.C.3.11) can be
calculated and provide statistical tables for γ: it only depends on the percentile and the expected
loss frequency. Instead of following Consultative Paper 2.5 (Basel Committee, 2001) and writing
unexpected loss as a multiple (γ) of expected loss, I write unexpected loss as a multiple phi (φ) of
the loss standard deviation. That is,
ORC = φ × standard deviation of annual loss. (III.C.3.12)

Recall that ORC is measured by a VaR metric, and is either unexpected loss at the 99.9th
percentile or the 99.9th percentile itself, the latter being the case when expected losses are not
already provisioned. Thus (III.C.3.12) can be rewritten as:


φ = (99.9th percentile – mean)/standard deviation    if expected losses are provisioned,
                                                     (III.C.3.13)
φ = 99.9th percentile/standard deviation             otherwise.

This formulation shows that, once we have estimated λ (either from a risk self-assessment or
from loss data, as explained in Section III.C.3.2) it is easy to calibrate φ. Because loss severity is
assumed to be the same amount, L, every time a loss is made, all the randomness in the annual
loss distribution comes only from the frequency distribution. Hence all quantities on the right-
hand side of (III.C.3.13) – the 99.9th percentile, the mean and the standard deviation – are just L
times the respective quantities of the frequency density. Thus L cancels, and the right-hand side
of (III.C.3.13) really just refers to the 99.9th percentile, the mean and the standard deviation of
the frequency density.

Having obtained φ we can obtain γ as follows: compare (III.C.3.10) and (III.C.3.12). Since ORC
is the same in both, equating them gives:
γ × expected annual loss = φ × standard deviation of annual loss,

so γ is a multiple of φ: in fact
γ = φ × (standard deviation/mean).

The same argument as above (i.e. that L cancels) implies that γ and φ are related through a factor
(standard deviation/mean) of the frequency density. For instance, when the frequency is Poisson,
which has mean and variance λ,
γ = φ/√λ.      (III.C.3.14)

In this way, we construct statistical tables for the gamma factor in the IMA, a different table for
each choice of frequency distribution. Table III.C.3.3 gives the Poisson gamma tables (φ is also
shown).


Table III.C.3.3: The ‘gamma’ in the IMA (Poisson frequency)

λ →          100       50        40        30        20        10
99.9%-ile    131.805   72.751    60.452    47.812    34.714    20.662
φ            3.180     3.218     3.234     3.252     3.290     3.372
γ            0.318     0.455     0.511     0.594     0.736     1.066

λ →          8         6         5         4         3         2
99.9%-ile    17.630    14.449    12.771    10.956    9.127     7.113
φ            3.405     3.449     3.475     3.478     3.537     3.615
γ            1.204     1.408     1.554     1.739     2.042     2.556

λ →          1         0.9       0.8       0.7       0.6       0.5
99.9%-ile    4.868     4.551     4.234     3.914     3.584     3.255
φ            3.868     3.848     3.839     3.841     3.853     3.896
γ            3.868     4.056     4.292     4.591     4.974     5.510

λ →          0.4       0.3       0.2       0.1       0.05      0.01
99.9%-ile    2.908     2.490     2.072     1.421     1.065     0.904
φ            3.965     3.998     4.187     4.176     4.541     8.940
γ            6.269     7.300     9.362     13.205    20.306    89.401

Source: Alexander (2003). Table reproduced by kind permission of Pearson Education (Financial Times-Prentice
Hall).

For instance, when λ = 100 (i.e. a very high-frequency risk) the 99.9th percentile of the Poisson
distribution is 131.805. Thus, for a capital charge based only on unexpected loss, not on expected
loss, we have

φ = (131.805 – 100)/√100 = 31.805/10 = 3.1805,

γ = 3.1805/√100 = 0.31805.

Note that φ does not change much with λ, but that γ does. For low-frequency risks γ is very large
indeed, but for high-frequency risks it is very low. As λ increases above 100, for very high-
frequency risks, the frequency distribution approaches the normal distribution so φ tends to a
lower limit of 3.09 (this is the 99.9th percentile of the standard normal distribution); however, γ
tends to a lower limit of zero!
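The φ and γ entries of Table III.C.3.3 can be approximated with a few lines of Python (scipy assumed). Note that scipy's poisson.ppf returns an integer quantile, whereas the table's 99.9th percentiles appear to be interpolated, so the values below differ slightly from those printed above:

```python
import numpy as np
from scipy.stats import poisson

def ima_factors(lam: float, q: float = 0.999):
    """phi and gamma for a Poisson frequency at percentile q."""
    p999 = poisson.ppf(q, lam)            # (discrete) 99.9th percentile of the frequency
    phi = (p999 - lam) / np.sqrt(lam)     # unexpected loss in standard-deviation units
    gamma = phi / np.sqrt(lam)            # equation (III.C.3.14)
    return p999, phi, gamma

for lam in (100, 20, 5, 1, 0.1):
    print(lam, ima_factors(lam))
```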

Also note that the ORC should increase as the square root of the expected frequency. For example,
with the Poisson frequency, combining (III.C.3.11) with (III.C.3.14) gives ORC = φ × √λ × L, and
we know that φ is around 3 or 4, unless the expected number of loss events is less than 1 every
50 years. So in the AMA the ORC will not be linearly related to the size of the bank’s operations,
as it is under the basic indicator or standardised approach. Hence doubling the size of one’s
operations should only increase the capital charge by a factor of √2; this may be an incentive for
banks to use the AMA when they are permitted to do so at the end of 2007. The ORC is also
linearly related to loss severity – yet another reason why high-severity risks (which are by
definition also low-frequency) will attract higher capital charges than low-severity risks; the
following two examples illustrate this point.

Example III.C.3.2: Credit card operational risk


Question:
50,000 credit cards will be in issuance during the forthcoming year and the expected probability of a credit card
fraud on any card is 0.05. If every fraud gives rise to an operational loss of €1000, calculate (a) the expected loss,
(b) the ORC at the 99.9th percentile, with and without the assumption that the expected loss is already covered by
the normal cost of the business.
Answer:
(a) λ = 50,000 × 0.05 = 2500, so expected loss = 2500 × 1000 = €2.5 million.
(b) For such a high λ we have φ = 3.1 and ORC is estimated using (III.C.2.10).
(c) Standard deviation of loss is √λ × L = √2500 × 1000 = €50,000 and so
ORC = 3.1 × 50,000 = €155,000 (assuming expected loss is provisioned for),
ORC = 155,000 + 2.5 million = €2.655m (if expected loss must be included).

This example shows that for high-frequency, low-severity operational risks the expected loss is
much greater than the unexpected loss. The expected losses are normally already provisioned in
the normal cost of being in the credit card business. In that case the ORC is there to cover the
unexpected losses. The ORC for this type of risk will therefore be very small, and this risk type
will have little impact on the total ORC for a retail bank. Indeed, the next example shows that the
total ORC will be dominated by the low-frequency, high-severity operational risks:

Example III.C.3.3: Transactions errors versus internal fraud


Show that the expected loss is the same in the following two cases, but that the ORC is far greater for the internal
fraud:
Case A: 30,000 transactions are processed in the back office and the expected probability of a human error giving
rise to an operational loss is 0.01. Each time a loss occurs the operational loss is €1000.
Case B: 60 deals are made in Corporate Finance and the probability of internal fraud resulting in any one of these
making an operational loss is 0.0005. However, if such a loss is made, it will amount to €10m.

The ORC calculations based on (III.C.2.9) in Table III.C.3.4 below assume that expected loss is
provisioned for; clearly, the same basic observation holds whether or not the additional sum of
€300,000 is added to the ORC figures. That is, even if the ORC estimates were €0.35369m and
€7.22474m respectively, the low-frequency, high-severity risk type is the one that totally
dominates the total ORC.

Table III.C.3.4: Results for Example III.C.3.3


                                       Case A       Case B
N                                      30,000       60
p                                      0.01         0.0005
L                                      1,000        10,000,000
Expected Loss (NpL)                    300,000      300,000
Standard Deviation (= √(Np) × L)       17,320.51    1,732,051
λ (= Np)                               300          0.03
φ                                      3.1          3.998
γ (= φ/√λ)                             0.178979     23.08246
ORC (€m)                               0.05369      6.92474
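A minimal Python sketch reproducing the arithmetic of Table III.C.3.4 (taking φ as given from Table III.C.3.3, as the example does):

```python
import math

def ima_orc(N: float, p: float, L: float, phi: float):
    """IMA quantities for a risk type with non-random severity L (expected loss provisioned separately)."""
    lam = N * p                    # expected loss frequency
    expected_loss = lam * L
    std_dev = math.sqrt(lam) * L   # Poisson frequency => std. dev. of the annual loss
    gamma = phi / math.sqrt(lam)
    orc = phi * std_dev            # equivalently gamma * expected_loss
    return expected_loss, std_dev, gamma, orc

print(ima_orc(30_000, 0.01, 1_000, 3.1))        # Case A
print(ima_orc(60, 0.0005, 10_000_000, 3.998))   # Case B
```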

III.C.3.5 The Loss Distribution Approach


The standard assumption that frequency and severity are independent may not be very realistic, but
it makes the construction of the compound distribution very simple. Monte Carlo simulation (see
Section II.E.4.3) is used to generate the total loss distribution as the compound of the frequency
and severity distribution. The simulation algorithm is as follows:
1. Take a random draw from the frequency distribution: suppose this simulates n loss
events per period.
2. Take n random draws from the severity distribution: denote these simulated losses by L1,
L2, …Ln.
3. Sum the n simulated losses to obtain a total loss X = L1 + L2 + …+ Ln.
4. Return to step 1, and repeat several thousand times: thus obtain X1, … , XM where the
number of simulations M is very large.
5. Form the histogram of X1, … , XM: this represents the simulated total loss distribution.
6. The ORC for this risk type is then the difference between the 99.9th percentile and the
mean of the simulated annual loss distribution, or just the 99.9th percentile, if expected
losses are not provisioned for elsewhere.


Figure III.C.3.7 illustrates the first two steps in the simulation algorithm.6

Figure III.C.3.7: Simulating the annual loss distribution

[Figure: a random draw from the frequency CDF gives the number of losses n; n random draws from the severity CDF give the losses L1, …, Ln; the total annual loss from one simulation is TL = Σ Li. Repeating to obtain TL1, …, TL10000, the ORC estimate = Percentile(TL1, …, TL10000, 0.999) – Average(TL1, …, TL10000).]
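The simulation algorithm above is straightforward to implement. The sketch below assumes, purely for illustration, a Poisson frequency and a lognormal severity with hypothetical parameters; any other fitted frequency and severity distributions could be substituted:

```python
import numpy as np

rng = np.random.default_rng(1)

def lda_orc(lam, mu, sigma, n_sims=100_000, q=0.999, provisioned=True):
    """Compound (Poisson frequency, lognormal severity) annual loss simulation."""
    n_events = rng.poisson(lam, size=n_sims)               # step 1: frequency draws
    totals = np.array([rng.lognormal(mu, sigma, n).sum()   # steps 2-3: severity draws, summed
                       for n in n_events])
    percentile = np.quantile(totals, q)                    # step 6
    return percentile - totals.mean() if provisioned else percentile

# illustrative parameters: 5 expected events per year, median severity exp(mu) = 20,000
print(lda_orc(lam=5, mu=np.log(20_000), sigma=1.2))
```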

Elsewhere (Alexander, 2003, Ch. 7) I have compared the LDA estimate of ORC with the result
of applying the IMA formula of Section III.C.3.4. I considered two cases:
• Without an assumption of loss severity uncertainty in the LDA, the result will be identical
to the IMA estimate, provided enough simulations are used in the LDA calculation.
• The IMA estimate can be modified to include the assumption of loss severity uncertainty,
in which case it is multiplied by the factor √(1 + r²), where
r = (severity standard deviation/severity mean). (III.C.3.15)
The resulting ‘modified’ IMA estimate is similar to the LDA estimate.

This shows that operational risk capital will increase more or less in proportion with the severity standard
deviation. So when loss severity becomes more uncertain, this has a direct, almost linear impact on
the capital charge. A very strong conclusion can be drawn. That is, for any given risk type: the
LDA ORC is always far greater than the ORC calculated from an analytic formula based on the
assumption that severity is non-random (and this includes the IMA formula). This difference will
be most pronounced for operational risks where the loss severity is highly uncertain (so that r in
(III.C.3.15) is large): for these risk types, the risk capital calculation based on the LDA can easily
give an ORC estimate that is 10 or 20 times larger than the IMA estimate.

6 The use of empirical frequency and severity distributions is not advised, even if sufficient data are available to
generate these distributions. There are two reasons for this. Firstly, the simulated annual loss distribution will not be an
accurate representation if the same frequencies and severities are repeatedly sampled. Secondly, there will be no ability
to carry out scenario analysis in the model unless one specifies and fits the parameters of a functional form for the
severity and frequency distributions.

III.C.3.6 Aggregating ORC


Having estimated the ORC for each ‘risk type’ for which the AMA is the designated approach,
we must now ‘add up’ these ORC estimates to obtain the total ORC for the firm. The aggregation
of risks – not just operational risks – is currently a hot topic of research. It is very difficult to
aggregate risks (a) when they are assessed using a VaR metric at an early stage;7 (b) because the
‘total’ we get for the risks is very much influenced by the assumptions we make about the
dependencies between the risks; and (c) because dependencies between risks are very difficult to
assess. Thus risk aggregation is very difficult even within market and credit risks – it is a very
thorny issue in operational risk!

Regarding dependency assumptions, the Basel Committee (2001) has stated: ‘The bank will be
permitted to recognize empirical correlations in operational risk losses across business lines and
event types, provided that it can demonstrate that its systems for measuring correlations are
sound and implemented with integrity’.

Dependencies between operational risks are common.8 Indeed, they occur whenever two
operational risk types share a common key risk driver (see Section III.C.2.3). For instance, risk
drivers associated with ‘human’ risks – such as pay, training, management, workload – affect
many types of operational risks, including employment practices, transactions processing, legal
risks, and fraud.

Often, when aggregating risks, particularly when the VaR metric is applied, banks will make just
two simple assumptions:
• Full dependency: This implies risks should simply be added to obtain the total risk; this is
an approximate upper bound for the total risk and is often used for regulatory risk
capital calculations (to err on the conservative side).
• No dependency: This is the assumption of ‘independence’. It implies that the total risk is
the square root of the sum of the squares of the component risks. For aggregating
different types of operational risks, this will give an approximate lower bound for the
total risk capital since it is unlikely that there will be many large negative dependencies
between different operational risks.

7 Unlike the ‘variance’ risk metric, percentiles obey no simple standard rules. It is easy to aggregate variances (and to
allow for correlation in the process) but it is not so simple to aggregate VaR.
8 I prefer not to use the term ‘correlation’ – instead I use the term ‘dependency’. Operational risks are likely to be
dependent, in the sense that if one changes then so will the other, but they are not ‘correlated’. Correlation is a metric
that refers to ‘jointly stationary’ random variables with elliptical distributions like the multivariate normal (see Section
II.E.4.7) but there is no evidence that operational losses behave in this way.

The next example shows that small changes in ‘correlation’ between operational risk types will
have a huge effect on the total risk estimate: for example, the total risk capital based on the
aggregation of only two operational risk types can easily be doubled – or halved – depending on
the assumption made about their dependency.

Example III.C.3.4: Effect of ‘correlation’ on risk aggregation


Consider the two annual loss distributions with density functions shown in Figure III.C.3.8.
Figure III.C.3.9 shows the total loss distribution under correlations of ρ = 0.5, 0 and –0.5. Then
Table III.C.3.5 shows that the expected loss is hardly affected by the assumptions made about
co-dependencies of these two risks: it is approximately 22.4 in each case. However, the
unexpected loss at the 99.9th percentile (and at the 99th percentile) is very much affected by the
assumption one makes about dependency.

Table III.C.3.5: Risk capital estimates under different correlation assumptions

                       ρ = –0.5    ρ = 0       ρ = 0.5
Expected Loss          22.3909     22.3951     22.3977
99.9th Percentile      41.7658     48.7665     54.1660
Unexpected Loss        19.3749     26.3714     31.7683

Of course, the values of the correlation parameter were chosen arbitrarily in Example III.C.3.4.
But it has shown that small changes in correlation can produce estimates of total operational risk
capital that are doubled – or halved – even when aggregating only two annual loss distributions.
Obviously the effect of correlation assumptions on the aggregation of many annual loss
distributions to the total annual loss for the firm will be quite enormous.
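The qualitative effect in Example III.C.3.4 is easy to reproduce by simulation. The sketch below uses a Gaussian copula to impose the correlation parameter and two illustrative marginals (a gamma and a lognormal annual loss, not the handbook's exact densities); the expected loss is essentially unchanged while the 99.9th percentile, and hence the unexpected loss, moves substantially with ρ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
M = 500_000

def aggregate(rho):
    # correlated uniforms via a Gaussian copula
    z1 = rng.standard_normal(M)
    z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(M)
    u1, u2 = stats.norm.cdf(z1), stats.norm.cdf(z2)
    # illustrative annual loss marginals
    loss1 = stats.gamma.ppf(u1, a=7, scale=2)
    loss2 = stats.lognorm.ppf(u2, s=0.8, scale=8)
    return loss1 + loss2

for rho in (-0.5, 0.0, 0.5):
    total = aggregate(rho)
    el, q999 = total.mean(), np.quantile(total, 0.999)
    print(f"rho = {rho:+.1f}: EL = {el:.2f}, 99.9th pct = {q999:.2f}, UL = {q999 - el:.2f}")
```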


Figure III.C.3.8: Two annual loss densities9

[Chart: the two annual loss density functions described in footnote 9, over losses from 0 to 25.]

Figure III.C.3.9: The total loss distribution under different assumptions for correlation

[Chart: total loss densities for ρ = 0, ρ = 0.5 and ρ = –0.5, over total losses from 0 to 80.]

III.C.3.7 Concluding Remarks


Operational risks often show a positive dependency because they experience the same directional
influence from common key risk drivers. These drivers can be linked to ‘human’ risks, ‘systems’
risks and so forth. Thus the total risk estimate will be somewhere between a lower bound given
by the square root of the sum of the squared risk estimates and an upper bound given by the sum
of the risk estimates. These two bounds are usually far apart and it is very difficult to say exactly
where, within these bounds, the total risk will be. Hence ‘aggregation risk’ is by far the most
important source of model risk in an operational VaR model.

9The bimodal density has been fitted by a mixture of two normal densities: with probability 0.3 the normal
has mean 14 and standard deviation 2.5 and with probability 0.7 the normal has mean 6 and standard
deviation 2. The other annual loss is gamma distributed with α = 7 and β = 2.


The next most important source of model risk is the way that data are obtained and,
subsequently, how these data are handled. The most important risk types for a risk estimate (as
opposed to an expected loss estimate) are the low-frequency, high-severity risk types, such as
internal fraud, acts of God (such as earthquakes), massive legal suits and so forth. Internal
historical loss data on these risk types are simply not available as the frequency is just too low.
The only way to include these risk types in the risk estimate is to use subjective data: such as a
risk self-assessment, or external data from public sources, or from a data consortium. But risk
self-assessment questionnaires require careful design and even more careful control of the
responses, and external data need processing using proper statistical methods.

When data are subjective the proper statistical methods to use are Bayesian methods (see Chapter
II.E). However, at the moment we are witnessing attempts to apply some of the traditional
‘classical’ methods that have been developed in market risk to operational risk analysis, as if the
sparse and unreliable data on operational losses should be treated like the plentiful and reliable
data on market prices. Operational risks are not at all like market or credit risks – operational risk
analysis is about human behaviour, not market or firm behaviour! Thus we really should not be
talking about ‘correlations’, or ‘tail distributions’ with data such as these. And software
consultants who focus on fitting the ‘best’ functional form to operational risk data are missing
the big picture (and making a fast buck in the process!). Getting the right assumptions about
functional forms may be an important source of model risk in market and credit risk analysis but
this is really not an important issue in operational risk assessment. The important issues are how
to model dependencies and how to obtain reliable data. We may indeed apply a VaR metric to an
operational loss distribution, but that does not mean that operational risk analysis is a statistical
science with proper economic foundations, like market and credit risk. It is not. Operational risk
is a behavioural science.

References
Alexander, C (ed.) (2003) Operational Risk: Regulation, Analysis and Management. London: FT-
Prentice Hall (Pearson Education).

Basel Committee on Banking Supervision (2001) Working Paper on the Regulatory Treatment of
Operational Risk, Consultative Paper 2.5, September. Available from www.bis.org

Gumbel, E J (1958) Statistics of Extremes. New York: Columbia University Press.

Embrechts, P, Klüppelberg, C, and Mikosch, T (1991) Modeling Extremal Events. Berlin: Springer-
Verlag.


III.0 Capital Allocation and RAPM


Andrew Aziz and Dan Rosen1

1 Algorithmics Inc.

III.0.1 Introduction
We introduce in this chapter the definitions and key concepts regarding capital, focusing on the
important role that capital plays in financial institutions. Readers should already be familiar with
Chapter I.A.5, which has presented the basic principles behind the capital structure of the firm.
There it was argued that the actual capital – the physical capital that a firm holds – should be
distinguished from the optimal level of capital. The optimal capital depends on many things,
including capital targets that are associated with a desired level of ‘capital adequacy’ to cover the
potential for losses made by the firm. Capital adequacy is assessed using internal models (for
economic capital) and, for banks, a certain level of capital is imposed by external standards (regulatory
capital). Capital adequacy is a measure of a firm’s ability to remain a going concern. Thus,
targeted levels of capital are direct functions of the riskiness of the business activities or, from a
balance sheet perspective, the riskiness of the assets.

In this chapter you will learn:


• the role of capital in financial institutions and the different types of capital;
• the key concepts and objectives behind regulatory capital, as well as the main calculation
principles in the Basel I Accord and the current Basel II Accord;
• the definition and mechanics of economic capital as well as the methods to calculate it;
• the use of economic capital as a management tool for risk aggregation, risk-adjusted
performance measurement and optimal decision making through capital allocation.

This introductory section presents the definition of capital and its role in financial institutions.
We make the distinction between the various types of capital: book capital, economic capital and
regulatory capital. We discuss briefly the use of economic capital as a management tool for risk
aggregation, decision making and performance measurement.

III.0.1.1 Role of Capital in Financial Institutions


Banks generate revenue by taking on exposure to their customers and by earning appropriate
returns to compensate for the risk of this exposure. In general, if a bank takes on more risk, it
can expect to earn a greater return. The trade-off, however, is that the same bank will, in general,
increase the possibility of facing losses to the extent that it defaults on its debt obligations and is
forced out of business. Banks that are managed well will attempt to maximise their returns only
through risk taking that is prudent and well informed. The primary role of the risk management
function in a bank is to ensure that the total risk taken across the enterprise is no greater than the
bank’s ability to absorb worst-case losses within some specified confidence interval.

In its pure form, capital represents the difference between the market value of a bank’s assets and
the market value of its liabilities. Because capital can be viewed as a buffer against insolvency,
capital adequacy is a measure of a bank’s ability to remain a going concern under adverse conditions.

In contrast to a typical corporation, the key role of capital in a financial institution such as a bank
is not primarily one of providing a source of funding for the organisation. Banks usually have
ready access to funding through their deposit-taking activities, which can be increased fairly
fluidly. Instead, the primary role of capital in a bank, apart from the transfer of ownership, is to
act as a buffer to:
• absorb large unexpected losses;
• protect depositors and other claim holders;
• provide enough confidence to external investors and rating agencies on the financial
health and viability of the firm.

A firm’s credit rating can be seen as a measure of its capital adequacy and is generally linked to a
specific probability that the firm will enter into default over some period of time. If we make the
assumption that liabilities are riskless, then the credit rating of a firm becomes a function of the
overall riskiness of its assets and the amount of capital that the bank holds. Firms which hold
more capital are able to take on riskier assets than firms of similar credit rating which hold less
capital.

Typically, the sources of risk within the assets of a firm are classified as follows:
• credit risk – losses associated with the default (or credit downgrade) of an obligor (a
counterparty, borrower or debt issuer);
• market risk – losses associated with changes in market values;
• operational risk – losses associated with operating failures.

Capital represents an ideal metric for aggregating risks across both different asset classes and
across different risk types.


III.0.1.2 Types of Capital


We can broadly classify capital into three types:2
• Economic capital (EC) – an estimate of the level of capital that a firm requires to operate its
business with a desired target solvency level. Sometimes this is also referred to as risk
capital.3
• Regulatory capital (RC) – the capital that a bank is required to hold by regulators in order
to operate; this is an accounting measure defined by the regulatory authorities to act as a
proxy for economic capital.
• Book capital (BC) – the actual physical capital held. While in its strictest definition this
should be simply equity capital, more generally this might also include other assets like
liquid debt or hybrid instruments.

In practice, many firms hold book capital in excess of the required economic and even regulatory
capital. This reflects both historical and practical business reasons (for example, given their size,
they might be too slow to invest it effectively), as well as their more conservative view on the
applicability of the models. The combined forces of deregulation and the increased market
volatility in the late 1970s motivated many banks to aggressively grow market share and to
acquire increasingly riskier assets on their balance sheets. This emphasis on growth precipitated a
decline of capital levels throughout the 1980s that led to fears of increasing instability in the
international banking system. These concerns motivated the push for the creation of international
capital adequacy standards such as those ultimately established by the Basel Committee on
Banking Supervision (BCBS). The imposition of the Basel I Accord in 1988 proved to be
successful in its objective of increasing worldwide capital levels to desired levels by 1993 and,
ultimately, to reduce the overall riskiness of the international banking system.

In general, EC is meant to reflect the true ‘fair market’ value differential between assets and
liabilities, and thus it is limited by the ability to mark to market a balance sheet in a manner that is
indisputable for all key constituencies – the financial institution, the regulators and the investors
themselves. As such, the determination of EC has traditionally been highly institution-specific.

The foremost objective of regulations, however, is to define an unarguable standard for capital
comparison that creates a level playing field across all financial institutions. Thus, regulatory
capital has traditionally been defined with respect to accounting book value measures rather than
to market value measures (notwithstanding the fact that, in some cases, accounting practice
allows balance sheet items to be reported on a market value basis).

2 This is a general classification, and there are various alternative definitions of capital and terminology used to describe
them.
3 Some authors have used alternative definitions: for example, Matten (2000, pp. 222–223) defines economic capital as
risk capital plus goodwill; Perold (2001) defines risk capital in terms of insurance (explained in Section III.0.2.6).

To capture the discrepancy between book values and market values, regulatory capital measures
incorporate ad hoc approaches to normalise asset book values to reflect differences in risk. As
such, it is recognised that regulatory capital calculations tend to contain a number of
inconsistencies, which have led regulators to set prescribed levels on a conservative basis. In
some cases, these inconsistencies have led to the notion of regulatory arbitrage, whereby investment
is determined not on the basis of risk–reward optimisation but on the basis of regulatory capital–
reward optimisation.

III.0.1.3 Capital as a Management Tool


Capital can be used as a powerful business management tool, since it provides a consistent metric
to determine:
x risk aggregation;
x performance measurement;
x asset and business allocation.

The objective of risk-adjusted performance measurement (RAPM) is to define a consistent metric


that spans all asset and risk classes, thereby providing an ‘apples to apples’ benchmark for
evaluating the performance of alternative business opportunities. RAPM thus becomes an ideal
tool for capital allocation purposes. By allocating the appropriate amount of EC to each asset, net
expected payoffs can then be expressed as returns on capital. Each asset can, therefore, be
assessed on a consistent basis, with returns adjusted appropriately in the context of the amount
of risk taken on. (This is further discussed in Sections III.0.4 and III.0.5).

Risk aggregation generally refers to the development of quantitative risk measures that
incorporate multiple sources of risk. The most common approach is to estimate the EC that is
necessary to absorb potential losses associated with each of the risks. EC can be seen as a
common measure that can be used to summarise and compare the different risks incurred by a
firm, across
• different businesses and activities;
• different types of risk – market risk, credit risk and operational risk.

According to a recent study (BCBS, 2003b), the application of risk aggregation and EC methods
is still in the early stages of its evolution. While some firms remain sceptical of the value of
reducing all risks to a single number, many now believe that there is a need for a common metric


that allows risk–return comparisons to be made systematically across business activities whose
mix of risks may be quite different (e.g., insurance versus trading). However, there remains a wide
variation in the manner in which aggregated risk measures such as EC are used for risk
management decision making in practice today.

III.0.2 Economic Capital


Economic capital acts as a buffer that provides protection against all the credit, market,
operational and business risks faced by an institution. EC is set at a confidence level that is less
than 100% (e.g., 99.9%), since it would be too costly to operate at the 100% level. The
confidence interval is chosen as a trade-off between providing high returns on capital for
shareholders and providing protection to the debt holders (and achieving a desired rating) as well
as confidence to other claim holders, such as depositors.

In so far as EC reflects the amount of capital required to maintain a firm’s target capital rating,
the confidence interval can be defined at a very high quantile of the loss distribution. For
example, to achieve a target S&P credit rating of BB, the probability of default over the next year
for the firm cannot be greater than 3.0%, so the quantile should be set at least at 97%. In
contrast, for the same firm to achieve a target S&P credit rating of BBB, it must lower its
probability of default to be at most 0.5%, corresponding to the 99.5% quantile of the loss
distribution. Given the desire to achieve a BBB rating and to remain solvent 99.5% of the time, a
firm must have enough capital to sustain a ‘0.5% worst-case loss’ over a one-year time horizon;
that is, 99.5% of the time the future value of the non-defaulted assets must be at least equal to
the future value of the liabilities. Example III.0.1 below gives a simple outline of how this could
be achieved.

III.0.2.1 Understanding Economic Capital


Denote by At and Dt the market values (at time t) of the assets and liabilities, respectively. The
available capital Ct for the current time, t = 0, and at the end of one year, t = 1, can be expressed
as
C0 = A0 – D0, (III.0.1)
C1 = A1 – D1.

If the nominal returns on the assets and liabilities are equal to rA and rD, respectively, then a
worst-case loss from all sources, l (i.e., when C1 = 0 for a given confidence interval), would result
in the value of assets at t = 1 just being sufficient to cover the value of debt in t = 1. Then
C1 = 0 = A0(1 + rA)(1 – l) – D0(1 + rD).


Thus the maximum amount of debt allowable to sustain solvency under the worst-case scenario
cannot exceed
D0 = A0(1 + rA)(1 – l)/(1 + rD). (III.0.2)

Since EC0 is the minimum amount of capital required to sustain such a loss, it is given by:
EC0 = A0(1 – [(1 + rA)(1 – l)/(1 + rD)]). (III.0.3)

Hence the minimum amount of EC a financial institution must hold in order to avoid
insolvency increases as the level of the worst-case loss l increases.

The expected return on EC over the period from t = 0 to t = 1 is given by


[E(EC1)/EC0] – 1.

The expected return on EC reflects the impact of leverage on risk and reward. An increase in
expected returns (to compensate for increased risk) is reflected in the numerator, while the
increase in risk is reflected in the denominator (the current EC).

Equation (III.0.3) is often expressed in terms of value-at-risk notation in the following manner:
EC0 = A0 – VaR/(1+rD), (III.0.4)

where VaR represents the A1 value associated with the worst-case loss, l, corresponding to the
appropriate (x%) confidence interval.

For ease of presentation, consider the case where credit risk is the sole source of business risk to
which the firm is exposed. Returning to equation (III.0.3), under the simplifying assumption that
the spread between the nominal return on the assets and the return on the liabilities is roughly
equal to the expected default loss, u, then,
EC0 = A0 {1 – (1 + rD)(1 + u)(1 – l )/(1 + rD)} (III.0.5)
= A0 {1 – (1 + u)(1 – l )}.

By then ignoring second-order effects, equation (III.0.5) simplifies to the following more familiar
expression for economic capital:
EC0 ≈ A0 (l – u). (III.0.6)

This relationship is illustrated with respect to a default loss distribution in Figure III.0.1.


[Figure: a credit loss distribution showing the expected loss, the unexpected loss and VaR(α%); probability on the vertical axis, loss on the horizontal axis.]
Figure III.0.1: Credit loss distribution: expected and unexpected losses

Expressions (III.0.4) and (III.0.6) highlight the link between VaR measures and EC. The
simplifying assumption leading to equation (III.0.6) and illustrated in Figure III.0.1 is the
approach commonly taken by practitioners and generally leads to conservative estimates (for a
detailed discussion, see Kupiec, 2002). Thus, in its most common definition, EC is defined to
absorb only unexpected losses (UL) up to a certain confidence level (i.e., A0(l – u)). Credit reserves are
traditionally set aside to absorb expected losses (EL) over the period (i.e., A0u). More precisely,
equation (III.0.4) shows that the VaR measure appropriate for EC should in fact measure losses
relative to the assets’ initial mark-to-market (MtM) value and not relative to the EL in its end-of-
period distribution. Also, the VaR measure should explicitly account for the interest payments on
the funding debt. While the UL approximation has very little effect on market risk, where the
horizon is short (and EL is small), it may have a higher impact in credit risk.

Example III.0.1
Consider a BBB-rated firm (or a firm that has targeted a BBB rating). Suppose the firm has
liabilities consisting of D0 = $92 million in deposits, with a cost of debt of rD = 5%, which have
been invested in A0 = $100 million of assets (40% at a nominal return of 6.75% and 60% at a
nominal return of 7%). The weighted average nominal return across the $100 million in total
assets is rA = 6.9%, representing a compounded spread of 1.81%. If the nominal values of the
assets and liabilities are equal to the market values, then the current capital for this firm is
calculated as C0 = $8 million (the difference between the market value of the assets and the
market value of the liabilities).


Assume that, in a ‘0.5% worst-case scenario’, the firm has a potential for a loss of 15% in the
value of total assets. Under this scenario, the firm will become insolvent as the value of the assets
will be A1 = $100 million × 1.069 × 0.85 = $90.9 million, while the value of the liabilities will be
D1 = $92 million × 1.05 = $96.6 million, giving a capital shortfall of $5.7 million.

From equation (III.0.3), the minimum amount of capital the firm must hold to avoid insolvency
in the worst-case scenario is
EC0 = A0(1 – [(1 + rA)(1 – l)/(1 + rD)]) = 100(1 – [1.069(1 – 0.15)/1.05]) = 13.46.

Therefore, for the firm to improve its capital adequacy to the desired level, it must increase its
capital from $8 million to $13.46 million. The shareholders should be 99.5% sure that such an
increase in capital will ensure solvency from t = 0 to t = 1.
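A minimal Python sketch of equation (III.0.3), using the inputs of Example III.0.1:

```python
def economic_capital(A0: float, rA: float, rD: float, worst_loss: float) -> float:
    """Minimum capital so that assets still cover liabilities after the worst-case loss, eq. (III.0.3)."""
    return A0 * (1.0 - (1.0 + rA) * (1.0 - worst_loss) / (1.0 + rD))

# Example III.0.1: A0 = $100m, rA = 6.9%, rD = 5%, 0.5% worst-case loss of 15%
print(economic_capital(100.0, 0.069, 0.05, 0.15))   # approx. 13.46 ($ million)
```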

III.0.2.2 The Top-Down Approach to Calculating Economic Capital


EC can be seen as a common measure that can be used to summarise and compare the different
risks incurred by a firm, across different businesses and activities, and across different types of
risk: market, credit and operational risk. At the enterprise level, EC can be estimated based on
aggregate information of the firm’s performance. Such ‘top-down approaches’ generally use one
of two types of information: earnings or stock prices.

III.0.2.2.1 Top-Down Earnings Volatility Approach


A top-down approach based on a firm’s earnings makes the simplifying assumption that the
market value of capital is equal to the value of a perpetual stream of expected earnings. In other
words, by assuming that all expected future earnings of the firm are equal to the next period’s
expected earnings, the value of capital can be expressed as
C0 = Expected earnings / k,

where k represents the required return associated with the riskiness of the earnings. As the
determination of EC is based on the ability to sustain a worst-case loss associated with a given
confidence interval,
EC0 = EaR / k,

where EaR represents the difference between expected earnings and the earnings under the
worst-case scenario, for a given confidence interval (see Saita, 2003). Often this approach relies
on the additional assumption that earnings are normally distributed, and thus the confidence
interval can be determined as a multiple of the standard deviation.
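A sketch of the earnings-volatility calculation under that normality assumption (the inputs below are hypothetical, and scipy is assumed to be available):

```python
from scipy.stats import norm

def ec_from_earnings(expected_earnings: float, earnings_vol: float,
                     k: float, confidence: float = 0.995) -> float:
    """Top-down EaR approach: EC0 = EaR / k, with normally distributed earnings."""
    worst_case = expected_earnings - norm.ppf(confidence) * earnings_vol
    ear = expected_earnings - worst_case    # earnings-at-risk at the chosen confidence level
    return ear / k

# hypothetical: expected earnings $120m, earnings volatility $40m, required return k = 12%
print(ec_from_earnings(120e6, 40e6, 0.12))
```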


Limitations of the earnings volatility approach include the following:


• It requires historical performance data for reliable estimates of the mean and standard
deviation of earnings; few companies have enough data to yield reliable estimates.
• It does not link EC directly to the sources of risk.
• In general, it does not naturally allow capital to be separated out into its market, credit
and operational risk components, nor across different business lines or activities.

III.0.2.2.2 Top-Down Option-Theoretic Approach


A top-down approach based on the Black–Scholes–Merton (BSM) framework (see Chapter
III.B.5) assumes that the market value of capital can be modelled as a call option on the value of
the firm’s assets where the strike price is the notional value of the debt. If the value of assets at
the end of the period (t = 1) is greater than the value of the debt, then the value of capital is equal
to the difference between the value of the assets and the debt; otherwise (in the case of
insolvency), it is equal to zero. Using this approach assumes we have the following information
available:
• the current market value and volatility of the company’s net assets;
• the time horizon (e.g., the average duration of the firm’s assets);
• the risk-free interest rate (maturity corresponding to the time horizon);
• the default threshold (the asset level at which the debt holders demand repayment and
bankruptcy can occur).

The BSM model allows us to estimate the implied probability of insolvency for the firm over the
period from t = 0 to t = 1. The EC can then be determined on the basis of reapplying the BSM
model for a level of debt that ensures, even under the worst-case scenario (at a given confidence
interval), that the firm remains solvent.
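As a rough sketch of the first step only (the implied insolvency probability in a simple Merton/BSM setting, with hypothetical inputs; the full approach described above also re-solves for the capital level that achieves the target confidence):

```python
import numpy as np
from scipy.stats import norm

def merton_default_prob(asset_value, asset_vol, default_threshold, horizon, risk_free):
    """Risk-neutral probability that assets fall below the default threshold at the horizon."""
    d2 = (np.log(asset_value / default_threshold)
          + (risk_free - 0.5 * asset_vol ** 2) * horizon) / (asset_vol * np.sqrt(horizon))
    return norm.cdf(-d2)

# hypothetical inputs: assets 100, asset volatility 12%, debt threshold 92, one-year horizon, 5% rate
print(merton_default_prob(100.0, 0.12, 92.0, 1.0, 0.05))
```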

An advantage of this approach over the one based on EaR is the availability of stock market data.
However, several simplifications regarding the capital structure and model assumptions must be
made to apply this tool in practice. Similar to the EaR approach, a key limitation is that it does
not allow the separation of capital into different risks such as market, credit and operational risks,
nor does it suggest how to allocate it across different business lines or activities.

III.0.2.3 The Bottom-Up Approach to Calculating Economic Capital


In this approach, EC is estimated by modelling individual transactions and businesses and then
aggregating the risks using advanced statistical portfolio models and stress testing. The bottom-
up approach has now become best practice and, in contrast to the top-down approaches,
provides greater transparency with regard to isolating credit risk, market risk and operational risk


capital. Furthermore, it naturally accommodates various methodologies to allocate capital to


individual businesses, activities and transactions.

In a bottom-up approach, the estimation of enterprise EC requires consolidation of risks at two levels:
• First, market risk, credit risk and operational risk are computed at the enterprise level. To achieve this, a firm might use an internal VaR model for market risk, a credit VaR methodology for credit risk and a loss-distribution approach for operational risk.
• Then, at the second level, the firm must consolidate the capital across these risks.

To estimate total capital, it is currently common practice to add up the credit risk capital, market risk capital and operational risk capital. This produces a conservative capital measure (basically assuming that the risks are perfectly positively correlated). However, many firms are now devoting considerable effort to measuring the correlations between these risks (and hence the levels of diversification), as well as to developing frameworks to measure these risks in a more integrated way.

III.0.2.4 Stress Testing of Portfolio Losses and Economic Capital


In addition to the statistical approaches inherent in credit portfolio models, practitioners usually
use stress testing as an important part of their EC methodology (see Chapter III.A.4).
Commonly, the stress-testing methodology involves the development of one or several specific
adverse scenarios, which are judged to be extreme (falling beyond the desired confidence level).
Current portfolio losses are then assessed against these specific scenarios. Stress scenarios may be
based on historical experience or management judgment.

The translation of specific stress scenario losses, and the combination of stress testing and statistical measures, into EC measures is today more an art than a science, largely based on management’s objectives and judgement. In essence, firms make their own decision on the relative weights of the statistical and stress-test results in estimating the amount of EC required to support a portfolio. For example, an institution might set the EC for market risk equal to 50% of the 99% VaR plus 50% of the loss outcome from selected stress scenarios (a figure that is normally higher than the 99% VaR alone).
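A minimal sketch of such a blended rule follows; the 50/50 weights mirror the example above, while the use of the worst loss across the stress scenarios and the input figures are illustrative assumptions.

```python
def blended_market_risk_ec(var_99, stress_losses, w_var=0.5, w_stress=0.5):
    """Illustrative blend of statistical and stress-test results:
    EC = w_var * 99% VaR + w_stress * (worst loss across the stress scenarios)."""
    return w_var * var_99 + w_stress * max(stress_losses)

# e.g. a 99% VaR of $10m and stress scenario losses of $14m, $18m and $25m
print(blended_market_risk_ec(10.0, [14.0, 18.0, 25.0]))   # 17.5
```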

III.0.2.5 Enterprise Capital Practices – Aggregation


A large firm, such as a bank, acquires different types of financial risk through various businesses
and activities. Capital is indeed a powerful tool for understanding, comparing and aggregating different types of risk to determine the overall health of the firm and to support better business decisions.

Such an institution is likely to have separate methodologies to measure market risk, credit risk
and operational risk. In addition, it is likely that the institution has different methodologies to
measure the credit risk of its larger commercial loans or its retail credits. More generally, a firm
may have any number of methodologies for various risks and segments. If each type of risk is
modelled separately, then the amounts of EC estimated for each need to be combined to obtain
an enterprise capital amount. In making the combination, the firm needs to incorporate, either
implicitly or explicitly, various correlation assumptions. The methods of aggregation are as
follows:
• Sum of stand-alone capital for each business unit and type of risk. This methodology essentially assumes perfect correlation across business lines and risk types and does not allow for any diversification benefit across them.
• Ad hoc or top-down estimates of cross-business and cross-risk correlation. In order to allow for some cross-business and cross-risk diversification, a firm might aggregate the individual stand-alone capital estimates using analytical models and simple cross-business (asset) correlation estimates.
The enterprise aggregation of capital is still in its infancy and is a topic of much research today.
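The sketch below contrasts the two aggregation approaches just listed: a simple sum of stand-alone capital figures, and a variance–covariance style aggregation using a cross-risk correlation matrix. The capital amounts and correlations are purely illustrative.

```python
from math import sqrt

def aggregate_capital(stand_alone, correlations=None):
    """Aggregate stand-alone capital across risk types (or businesses).
    With no correlation matrix the figures are simply added (perfect correlation);
    with a matrix, a variance-covariance style formula
    sqrt(sum_i sum_j c_i c_j rho_ij) gives a diversified total."""
    c = list(stand_alone.values())
    if correlations is None:
        return sum(c)
    n = len(c)
    return sqrt(sum(c[i] * c[j] * correlations[i][j] for i in range(n) for j in range(n)))

# Illustrative stand-alone capital ($m) and cross-risk correlations
ec = {"market": 40.0, "credit": 120.0, "operational": 60.0}
rho = [[1.0, 0.5, 0.2],
       [0.5, 1.0, 0.3],
       [0.2, 0.3, 1.0]]
print(aggregate_capital(ec))        # simple sum: 220.0
print(aggregate_capital(ec, rho))   # diversified total, below the simple sum (~172)
```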

III.0.2.6 Economic Capital as Insurance for the Value of the Firm


Standard practice is to define EC as a buffer to cover unexpected losses. Thus it is defined in
terms of the tail of the loss distribution, using measures such as VaR. Alternatively, some
economists have used the term ‘risk capital’ to define capital in economic terms (Merton and
Perold, 1993; Perold, 2001): risk capital is the smallest amount that can be invested to insure the
value of the firm’s net assets4 against loss in value relative to a risk-free investment.

For example, under this definition, the risk capital of a long US Treasury bond position is the
value of a put option with strike equal to the forward price of the bond. As pointed out by Perold
(2001), in general, the put option accounts for the full distribution of losses, whereas VaR ignores
the magnitude of outcomes conditional on being in the extreme tail of the distribution. When
returns are normally distributed, the value of such a put option is approximately proportional to
the standard deviation of the return on the bond, and thus is approximately proportional to VaR.
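Under the normality assumption, this put value can be computed directly with the Black–Scholes formula, as in the sketch below; the bond position size, volatility and rates are illustrative.

```python
from math import log, sqrt, exp
from statistics import NormalDist

def black_scholes_put(spot, strike, vol, risk_free, horizon):
    """Standard Black-Scholes price of a European put option."""
    n = NormalDist()
    d1 = (log(spot / strike) + (risk_free + 0.5 * vol ** 2) * horizon) / (vol * sqrt(horizon))
    d2 = d1 - vol * sqrt(horizon)
    return strike * exp(-risk_free * horizon) * n.cdf(-d2) - spot * n.cdf(-d1)

# Risk capital of a $100m bond position over one year, with 8% return volatility:
# a put struck at the forward price insures against underperforming the risk-free rate.
spot, vol, r, t = 100.0, 0.08, 0.05, 1.0
forward = spot * exp(r * t)
print(round(black_scholes_put(spot, forward, vol, r, t), 2))   # roughly proportional to vol
```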

4 ‘Net assets’ refers to ‘gross assets minus customer liabilities (swaps, insurance contracts, etc.), valued as if these liabilities are default-free’.


III.0.3 Regulatory Capital


This section considers the key concepts and objectives behind regulatory capital, as well as the
main principles used in regulatory capital calculations in the Basel I Accord and the latest
proposals of the current Basel II Accord.

III.0.3.1 Regulatory Capital Principles


In this subsection, we focus mainly on the capital regulation in the banking industry. As defined
in Section III.0.1, regulatory capital refers to the capital that an institution is required to hold by
regulators in order to operate. It is largely an accounting measure defined by the regulatory
authorities to act as a proxy for economic capital. Capital adequacy is generally the single most
important financial measure used by banking supervisors when examining the financial
soundness of an institution.

As also mentioned in Section III.0.1, from an internal bank perspective, capital is designed as a
buffer to absorb large unexpected losses, protect depositors and other claim holders, and provide
enough confidence to external investors and rating agencies on the financial health and viability
of the firm. In contrast, from the external perspective of the regulator, capital adequacy
requirements fulfil two objectives:
• Reducing systemic risk: to safeguard the security of the banking system and ensure its ongoing viability. In a sense, national governments act as guarantors. They have an interest in ensuring that banks remain capable of meeting their obligations and in minimising potential systemic effects on the economy. Regulatory capital helps to ensure that banks bear their share of the burden, otherwise borne by national governments.
• Creating a level playing field: to ensure a more even playing field for internationally active banks, by submitting all banks to (roughly) the same rules.

As we review the key concepts in regulatory capital, it is important to highlight two points:
• Regulatory requirements are continuously changing, and it is vital for practitioners to be familiar with both the latest regulations and the specific requirements in each jurisdiction.
• While the general intention is to make regulatory capital more risk-sensitive and align it more closely to economic capital, it is important to understand the limitations of using it directly for managing risk, measuring performance and pricing credits.

The overall objective should be to set up an enterprise risk management framework, which
measures economic capital and reconciles it with regulatory capital.


III.0.3.2 The Basel Committee on Banking Supervision and the Basel Accord
A key cornerstone of international banking capital regulation is the Basel Committee on Banking
Supervision, which first introduced the framework for international capital adequacy standards.
This framework has been adopted as the underlying structure of all bank capital adequacy
regulations throughout the G10, as well as many other countries around the world. Today, over
100 countries are expected to implement the latest guidelines set by the BCBS, also referred to as the Basel II Accord.

Summary Chronology of Banking Regulatory Capital


• 1988 – The BCBS introduces the framework for international capital adequacy standards. It is adopted throughout the G10, as well as in over 100 other countries (BCBS, 1988). Commonly referred to as the Basel I Accord or the BIS I Accord, it was the first step in establishing a level playing field across member countries for internationally active banks. The 1988 accord focused mainly on credit risk.

• 1995 – An amendment to the initial accord further allows banks to reduce ‘credit-equivalent exposures’ when netting agreements are in place (BCBS, 1995).

• 1996 – The 1996 amendment5 extends the capital requirements to include risk-based capital for the market risk in the trading book (BCBS, 1996).

• 1999 – The BCBS issues a proposal for a new capital adequacy framework to replace the 1988 Basel I Accord. This is commonly referred to as the Basel II Accord or BIS II Accord. The new accord attempts to improve the capital adequacy framework by substantially increasing the risk sensitivity of the minimum capital requirements, and also by encompassing supervisory review and market discipline principles. Under the proposal, banks are required specifically to allocate capital against operational risks for the first time.

• 2000–2003 – The BCBS releases various consultation documents and conducts major data collection exercises called quantitative impact studies (QIS), intended to gather information to assess whether it has met its goals.

• April 2003 – The BCBS releases the third consultative paper (CP3) on the new Basel Accord (BCBS, 2003a).

5 Sometimes referred to as BIS 98, after its date of implementation.

• June 2004 – The final version of the Basel II Accord is published (BCBS, 2004).

• 2006–2007 – Currently scheduled implementation of Basel II.

All the papers from the BCBS can be downloaded from www.bis.org.

III.0.3.3 Basel I Regulation


The 1988 accord focused mainly on credit risk, establishing minimum capital standards that
linked capital requirements to the credit exposures of banks. Prior to its implementation in 1992,
bank capital was regulated through simple, ad hoc capital standards. While generally prescriptive,
Basel I left various choices to be made by local regulators, thus resulting in several variations of
the implementation across jurisdictions. The 1996 amendment further extended the capital
requirements to include risk-based capital for the market risk in the trading book. Basel I does
not cover capital charges for operational risk.

III.0.3.3.1 Minimum Capital Requirements under Basel I


Capital requirements under Basel I are the sum of:
• credit risk capital charge, which applies to all positions in the trading and banking books (including OTC derivatives and balance sheet commitments);
• market risk capital charge for the trading book portfolio and off-balance sheet items.

For market risk capital, the accord allows, in addition to a standardised method, the use of
internal VaR models covering both general market risk (or systematic risk) and specific risk.
Specific VaR applies to both equities and bonds. For bonds it covers the risk of defaults,
migration and changes in spreads. The reader is referred to Chapter III.A.2 for the basics of
market risk VaR.

The regulatory charge for banks using internal market risk models is given by
Market Risk Capital = [MMR × VaR + MSR × Specific VaR] × (Trigger / 8), (III.0.7)

where VaR and Specific VaR denote, respectively, the 99% market VaR and specific VaR over a
10-day horizon, and MMR and MSR are multipliers designed to adjust the capital to cover for
modelling errors and reward the quality of the models. The first one ranges between 3 and 4, and
the second one between 4 and 5. Finally, the Trigger is related to quality of controls in the bank.
Currently it is set to 8 in North America and between 8 and 25 in the UK.
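A direct transcription of equation (III.0.7) might look as follows; the VaR figures are illustrative and the multipliers are set to their minimum values.

```python
def basel_market_risk_capital(var_99, specific_var, m_mr=3.0, m_sr=4.0, trigger=8.0):
    """Regulatory charge for internal-model banks, equation (III.0.7):
    capital = (M_MR * VaR + M_SR * Specific VaR) * Trigger / 8."""
    return (m_mr * var_99 + m_sr * specific_var) * trigger / 8.0

# e.g. 10-day 99% general VaR of $5m and specific VaR of $1m, with minimum multipliers
print(basel_market_risk_capital(5.0, 1.0))   # 19.0
```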


The methodology for credit capital is simple. Minimum capital requirements are obtained by multiplying the sum of all the risk-weighted assets by the capital adequacy ratio of 8% (also referred to as the Cooke ratio):

Capital = (Σk RWAk) × 8%. (III.0.8)

Thus the calculation of credit regulatory requirements has three steps: converting exposures to
credit equivalent assets; computing loan equivalents for off-balance sheet and OTC portfolios;
and applying the capital adequacy ratio. This is described in greater detail in Chapter III.B.6.
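The final step, applying the Cooke ratio to the risk-weighted assets as in equation (III.0.8), is illustrated below with a hypothetical two-asset book; the 100% and 20% weights correspond to the Basel I treatment of loans to private companies and to OECD banks noted later in this chapter.

```python
def basel_credit_capital(exposures_and_weights, cooke_ratio=0.08):
    """Minimum credit capital under Basel I, equation (III.0.8):
    8% of the sum of risk-weighted assets."""
    rwa = sum(exposure * weight for exposure, weight in exposures_and_weights)
    return cooke_ratio * rwa

# Hypothetical book: $200m of loans to private companies (100% weight)
# and $100m of exposure to OECD banks (20% weight)
print(basel_credit_capital([(200.0, 1.00), (100.0, 0.20)]))   # 17.6
```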

A great strength of Basel I is the simplicity of the framework. This has allowed it to be
implemented in countries with different banking and accounting practices. Thus, it has been quite
successful in achieving its two general objectives (to safeguard the stability of the banking system
and to ensure a level playing field internationally).

Its simplicity has also been its major weakness, as the accord does not align regulatory capital requirements closely with an institution’s actual risk. For example, criticisms of the credit risk capital charge include the lack of proper differentiation by credit quality and maturity, insufficient incentives for credit risk mitigation techniques, and the lack of recognition of portfolio effects (these are discussed briefly in Chapter III.B.6).

III.0.3.3.2 Regulatory Arbitrage under Basel I


The lack of differentiation in the accord, together with the financial engineering advances in
credit risk over the last decade, has led to the development of a regulatory capital arbitrage
industry. This refers to the process by which regulatory capital is reduced through instruments
such as credit derivatives or securitisation, without an equivalent reduction of the actual risk
being taken. Through regulatory arbitrage instruments, for example, banks typically transfer low-
risk exposures from their banking book to their trading book, or simply place them outside the
regulated banking system.

III.0.3.3.3 Meeting Capital Adequacy Requirements


Available regulatory credit capital is divided into two categories:
• Tier 1 capital: essentially shareholder funds – equity – and retained earnings.
• Tier 2 capital: long-term subordinated debt, other qualifying hybrid instruments and reserves (such as loan loss reserves).


From a regulatory perspective, Tier 1 capital must cover at least 50% of the total capital; that is,
Tier 2 cannot exceed Tier 1 capital. In addition, the subordinated debt included in Tier 2 cannot
exceed 50% of the Tier 1 capital.6

Capital adequacy is generally expressed as a ratio. For example, an 8% capital ratio means that the
total Tier 1 and Tier 2 capital is 8% of the risk-weighted assets (RWA). A 6% Tier 1 capital ratio
refers to Tier 1 capital being 6% of the RWA.
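The sketch below applies these eligibility limits and expresses the result as Tier 1 and total capital ratios; the balance-sheet figures are hypothetical.

```python
def capital_ratios(tier1, other_tier2, subordinated_debt, rwa):
    """Apply the eligibility limits described above: subordinated debt counts only
    up to 50% of Tier 1, and total Tier 2 counts only up to the amount of Tier 1."""
    eligible_sub_debt = min(subordinated_debt, 0.5 * tier1)
    eligible_tier2 = min(other_tier2 + eligible_sub_debt, tier1)
    return {"tier1_ratio": tier1 / rwa,
            "total_ratio": (tier1 + eligible_tier2) / rwa}

# Hypothetical bank: $6bn Tier 1, $1bn other Tier 2, $4bn subordinated debt, $100bn RWA
print(capital_ratios(6.0, 1.0, 4.0, 100.0))   # {'tier1_ratio': 0.06, 'total_ratio': 0.1}
```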

III.0.3.4 Basel II Accord – Latest Proposals


The final version of the Basel II Accord was published in June 2004 (BCBS, 2004).7 Its
implementation will take effect between the end of 2006 and the end of 2007.

In this subsection we present a brief summary of the basic principles of Basel II; the key
formulae for minimum capital requirements for credit risk are given in Chapter III.B.6. For
greater detail, the reader is referred to the BCBS papers.

Basel II attempts to improve the capital adequacy framework along two important dimensions:
• First, the development of a capital regulation that encompasses not only minimum capital requirements, but also supervisory review and market discipline.
• Second, a substantial increase in the risk sensitivity of the minimum capital requirements.

The new accord intends to foster a strong emphasis on risk management and to encourage
ongoing improvements in banks’ risk assessment capabilities. This is to be accomplished by
closely aligning banks’ capital requirements with prevailing modern risk management practices,
and by ensuring that this emphasis on risk makes its way into supervisory practices and into
market discipline through enhanced risk- and capital-related disclosures.

The Basel II Accord consists of three pillars: minimum capital requirements, supervisory review, and
market discipline. We briefly summarise these below and then present the key principles behind the
computation of minimum capital requirements.

III.0.3.4.1 Pillar 1 - Minimum Capital Requirements


Minimum capital requirements consist of three components:
1. definition of capital (no major changes from the 1988 accord);

6 A Tier 3 capital was introduced with the market risk requirements. Short-term subordinated debt can be used to meet
market risk requirements as well, but not credit risk.
7 A small number of open issues are still to be resolved during 2004.


2. definition of RWA;
3. minimum ratio of capital/RWA (remains 8%).

Basel II proposes to modify the definition of risk-weighted assets in two areas:


• substantive changes to the treatment of credit risk relative to the Basel I Accord;
• the introduction of an explicit treatment of operational risk that will result in a measure of operational risk being included in the denominator of a bank’s capital ratio.

Basel II moves away from a one-size-fits-all approach to the measurement of risk, through the
introduction of three distinct options for the calculation of credit risk and three others for
operational risk. These approaches present increasing complexity and risk-sensitivity. Banks and
supervisors can thus select the approaches that are most appropriate to the stage of development
of banks’ operations and of the financial market infrastructure. Chapter III.B.6 briefly reviews the
three credit risk approaches. The operational risk approaches can be found in Chapter III.C.3.

III.0.3.4.2 Pillar 2 - Supervisory Review


The second pillar is based on a series of guiding principles, which point to the need for banks to
assess their capital adequacy positions relative to their overall risks, and for supervisors to review
and take appropriate actions in response to those assessments. Banks under internal ratings-based
credit models will be required to demonstrate that they use the outputs of those models not only
for minimum capital requirements but also to manage their business. The inclusion of
supervisory review provides benefits through its emphasis on strong risk assessment capabilities
by banks and supervisors alike. Important new components of Pillar II also include the treatment
of stress testing, concentration risk and the residual risks arising from the use of collateral,
guarantees and credit derivatives as well as specific securitisation exposures (these are discussed
further in Chapter III.B.6).

III.0.3.4.3 Pillar 3 - Market Discipline


Also referred to as public disclosure, the third pillar aims to encourage safe and sound banking
practices through effective market disclosures of capital levels and risk exposures. This will help
market participants assess better a bank’s ability to remain solvent.

III.0.3.5 A Simple Derivation of Regulatory Capital


Recognising that the simple difference between the value of assets and the value of liabilities in
accounting value terms is not a good indicator of the true difference in market value terms has
led regulators to make appropriate adjustments in the calculation of regulatory capital. In this
section, we follow a similar approach to Section III.0.2.1 to understand regulatory capital.


For a typical bank balance sheet, the difference between assets and liabilities includes general
provisions GP, and reserves R0, as well as the book value of equity E0{BV}:
A0{B/S} – L0{BV} = GP0 + E0{BV} + R0. (III.0.9)

The amount of available capital for regulatory purposes, however, can be defined loosely as the
difference between total assets (balance sheet as well as non-balance sheet) and only that
component of total liabilities where non-payment of returns defines insolvency. The amount of
available capital can thus be represented as follows:
RC0 = A0{B/S} + A0{non-B/S} – D0{BV}. (III.0.10)

Balance sheet assets are the net of the book value of assets and any special provisions to account
for defaulted or nearly-defaulted positions, while non-balance sheet assets consist of any
revaluations, RV0, to market value as well as any undisclosed profits, UP0. That is,
A0{B/S} = A0{BV} – SP0 (III.0.11)
A0{non-B/S} = RV0 + UP0.

Those liabilities whose non-payment constitutes insolvency represent the difference between
total liabilities and the quasi-debt, QD0, that combines both debt and equity features:8
D0{BV} = L0{BV} – QD0{BV}. (III.0.12)

Combining these relationships provides a very straightforward definition of regulatory capital in


book value terms that is designed to be as good a proxy as possible for true market valuation:
RC0 = E0{BV} + R0 + QD0{BV} + GP0 . (III.0.13)

The first two terms of this definition of regulatory capital are referred to as Tier 1 capital, while
the second two terms are referred to as Tier 2 capital. Capital adequacy standards are based on
minimum requirements for each of the two tiers of capital.
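Equation (III.0.13) translates directly into a small calculation of available regulatory capital and its split into the two tiers; the input figures below are purely illustrative.

```python
def regulatory_capital(equity_bv, reserves, quasi_debt_bv, general_provisions):
    """Book-value decomposition of available regulatory capital, equation (III.0.13):
    Tier 1 = book equity + reserves; Tier 2 = qualifying quasi-debt + general provisions."""
    tier1 = equity_bv + reserves
    tier2 = quasi_debt_bv + general_provisions
    return {"tier1": tier1, "tier2": tier2, "total": tier1 + tier2}

# Hypothetical figures in $m
print(regulatory_capital(equity_bv=800, reserves=150, quasi_debt_bv=300, general_provisions=50))
```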

Note that the above adjustments to book value measures still do not adequately capture the true
market values and, hence, the true riskiness of the assets. Therefore, standards on regulatory
capital are typically expressed as minimum percentages of risk-weighted assets rather than total
assets. An RWA is expressed as a percentage of the nominal value of a balance sheet asset. For
example, under the first Basel Accord loans to private companies are assigned a risk weight of
100%, while loans to banks in OECD countries are assigned a risk weight of 20%, reflecting the
perceived differences in risk between the two categories of borrowers.

8 Non-payment of quasi-debt does not imply insolvency, at least for a period of time.


III.0.4 Capital Allocation and Risk Contributions


We discuss the importance of allocating economic capital to different business units in a firm –
or to the constituents of a ‘portfolio’ – and the key methodologies to compute contributions to
EC.

III.0.4.1 Capital Allocation


In addition to computing the total EC for a firm or portfolio, it is important to develop general
methodologies to
• attribute this capital a posteriori to various ‘sub-portfolios’, such as the firm’s activities, business units and even individual transactions, and
• allocate it a priori in an optimal fashion, to maximise risk-adjusted returns.

EC allocation down to the portfolio is required for:


• management decision support and business planning;
• performance measurement and risk-based compensation;
• pricing, profitability assessment and limits;
• building optimal risk–return portfolios and strategies.

From a strategic management perspective, the allocation of EC to business units, activities and
transactions is an issue that is receiving significant attention in the industry today. There are two
views prevalent among firms:
• Diversification benefits should not be passed down to the business units. Rather, each unit is expected to operate on a stand-alone basis.
• An ‘optimal’ level of group risk taking can be achieved only when diversification benefits are allocated to at least the major business units (and perhaps even to the transaction level). Thus, it is preferable for each business unit to be assigned an EC allocation closer to its ‘marginal contribution’ to the total EC.

In the general case, the sum of the stand-alone EC for each asset or business does not equal the
total portfolio EC. Indeed, it is higher, since there are diversification benefits. Thus, it is
important to devise a general methodology to assign capital to individual business units, activities
and assets, which explicitly allocates the diversification benefits of the portfolio.

III.0.4.2 Risk Contribution Methodologies for EC Allocation


There is no unique method to allocate EC down a portfolio, and thus it is important to
understand how risk contribution tools can be applied to EC allocation decisions. Whether it is EC contributions to business units, arbitrary sub-portfolios or assets, we can classify the allocation methodologies which are currently used in practice into three categories: stand-alone EC contributions; incremental EC contributions; and marginal EC contributions.9 Every methodology has its advantages and disadvantages, and might be more appropriate for a particular managerial application.

III.0.4.2.1 Stand-alone EC Contributions


An individual business or sub-portfolio is assigned the amount of capital that it would consume
on a stand-alone basis (e.g., if it were an independent firm). As such, it does not reflect the
beneficial effect of diversification. The resulting sum of stand-alone capital for the individual
business units, activities or sub-portfolios is generally greater than the total EC for the firm.

III.0.4.2.2 Incremental EC Contributions


This method is also sometimes referred to as the discrete marginal EC allocation method. Under this
method the EC allocated to a business unit or sub-portfolio attempts to capture an appropriate
amount of risk capital that the unit contributes to the entire firm’s capital requirements. It is
calculated by taking the EC computed for the entire firm (including the business unit or sub-
portfolio) and subtracting from it EC for the firm without the business unit or sub-portfolio.
This methodology thus captures exactly the amount of capital that would be released if the
business unit were sold or added (everything else remaining the same).

Incremental EC is a natural measure for evaluating the risk of acquisitions or divestitures.10 But,
while very intuitive, a disadvantage of this methodology is that it is not additive. While it does
capture the benefits of diversification, the sum of incremental EC for all the firm’s business units
(activities or sub-portfolios) is smaller than (or equal to) total EC for the firm.

III.0.4.2.3 Marginal EC Contributions


It would be useful to obtain measures of risk contributions that are additive. Sometimes referred
to as diversified EC contributions, such measures are intended to capture the amount of the firm’s
total capital that should be allocated to a particular business or sub-portfolio when viewed as part
of a multi-business firm. They are specifically designed to allocate the diversification benefit
among the business units and activities, in the form of reduced EC. Thus, by construction, the

9 The reader is cautioned that there is currently no universal terminology for these methodologies in the literature. As
defined here, incremental capital (risk) contributions are sometimes also referred to as marginal capital (risk) contributions or
discrete marginal capital (risk) contributions. Marginal risk contributions, as termed here, are sometimes also referred to as
diversified capital (risk) contributions or, more precisely, continuous marginal capital (risk) contributions (Smithson, 2003).
10 See, for example, Perold (2001) – note that the author refers to this as ‘marginal’ EC.


sum of diversified EC for all the firm’s business units and activities is equal to total EC for the
firm.

There are various methodologies that produce additive risk contributions. The most widespread
and, perhaps, practical methodology is the one based on marginal risk contributions. While most
explanations of this risk decomposition methodology are based on the use of volatility (or
standard deviation) as a risk measure, the methodology is quite general and applicable to other
risk measures. Volatility-based contributions are common practice today (see Smithson, 2003).
However, such allocations can be ineffective for credit and operational risk, given the non-
normality of their loss distributions. Industry best practices are shifting towards allocations based
on VaR or expected shortfall (ES) – see Chapters III.A.2, III.A.3 and III.B.5.11

An additive decomposition of EC is of the form:


EC = Σi ECi , (III.0.14)

where ECi denotes the EC contribution of business unit or sub-portfolio i. We can then define the percentage risk contribution of the ith business unit or sub-portfolio as:

EC Contribi = (ECi / EC) × 100%. (III.0.15)

Denoting by xi the size of the ith business unit or sub-portfolio, one can show that for EC based
on volatility, VaR or ES:12
ECi = (∂EC(x) / ∂xi) × xi . (III.0.16)

That is, an EC marginal contribution is the product of the size of business unit i and the rate of
change of EC with respect to that position. This product essentially represents the rate of change
of EC with respect to a small (marginal) percentage change in the size of the unit.

Marginal EC contributions require the computation of the first derivative of the risk measure
with respect to the size of each unit. When the risk measure used is volatility, they can be
computed analytically and are simply given by the covariance of losses of that business unit with
the overall portfolio divided by the volatility of losses (see Praschnik et al., 2001; Smithson, 2003):

11 While VaR is defined as a loss level that will not be exceeded with probability x% (a quantile of the loss distribution), ES is commonly defined as the expected loss conditional on the loss exceeding the x% VaR (i.e., it is the average of the (1 – x)% largest losses). Sometimes ES is also referred to as ‘tail conditional expectation’ or ‘conditional VaR’. For some discussions on the use of ES and VaR for capital allocation, see Kalkbrener et al. (2004) and Mausser and Rosen (2004).
12 More formally, if the risk measure is homogeneous of degree 1 and differentiable, this follows from Euler’s
theorem. This is a requirement of coherent risk measures (Artzner et al., 1999).


ECi = Cov(Li , L) / σ(L).

The general theory behind the definition and computation of these derivatives in terms of
quantile measures (VaR, ES) has been also developed in the last few years (see Gouriéroux et al.,
2000; Tasche, 2000, 2002; see also Chapter III.B.5).
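The sketch below implements the volatility-based version of this allocation, computing Cov(Li, L)/σ(L) for each unit from an assumed loss covariance matrix; the contributions add up to the total loss volatility, and the corresponding EC contributions follow by scaling if EC is set as a multiple of volatility. The covariance figures are illustrative.

```python
def volatility_contributions(cov):
    """Additive risk contributions when volatility is the risk measure:
    contribution_i = Cov(L_i, L) / sigma(L), where L = sum_i L_i.
    By construction the contributions sum to sigma(L)."""
    n = len(cov)
    cov_with_total = [sum(cov[i][j] for j in range(n)) for i in range(n)]   # Cov(L_i, L)
    sigma_total = sum(cov_with_total) ** 0.5                                # sigma(L)
    return [c / sigma_total for c in cov_with_total]

# Illustrative loss covariance matrix (in $m^2) for three business units
cov = [[400.0,  60.0,  20.0],
       [ 60.0, 225.0,  30.0],
       [ 20.0,  30.0, 100.0]]
contribs = volatility_contributions(cov)
print([round(c, 2) for c in contribs], round(sum(contribs), 2))   # sums to total loss volatility
```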

It is important to stress that these contributions must be interpreted on a marginal basis.


Marginal EC contributions are very general and are best suited to understanding the amount of capital consumed by an instrument or sub-portfolio (which is typically small compared to the whole firm). They also naturally explain how to move EC from one business to another (on a marginal basis). Proponents of marginal EC approaches point out that incremental EC always under-allocates total firm EC and that, even if the incremental EC allocations were scaled up, the signals are potentially misleading. However, marginal EC is likely to be suboptimal for analysing the addition or removal of an entire business, which is not marginal to the firm.

Example III.0.2: Capital Allocation Methods


Table III.0.1 illustrates the different capital allocation methods for a simple firm consisting of
three business lines. Assume, for simplicity, that the total losses over one year for each business are normally distributed, and that they are uncorrelated. The stand-alone capital of each line is, respectively, $50 million, $30 million and $20 million. The total stand-alone capital is thus $100 million. The total economic capital of the firm is simply given by

EC = √(EC1² + EC2² + EC3²) = $61.64 million.

Table III.0.1: Capital allocation methods for a simple firm


                 Stand-alone    % Stand-alone    Incremental    % Incremental    Marginal       % Marginal
                 Capital (m)    Contributions    Capital (m)    Contributions    Capital (m)    Contributions
Business 1          50.00           50.0%           25.59           69.7%           40.56           65.8%
Business 2          30.00           30.0%            7.79           21.2%           14.60           23.7%
Business 3          20.00           20.0%            3.33            9.1%            6.49           10.5%
Total              100.00          100.0%           36.72          100.0%           61.64          100.0%
% EC               162.2%                            59.6%                          100.0%

The last line of this table gives the total capital under each method as a percentage of EC. Notice that the total stand-alone capital exceeds EC by 62%. Columns 4 and 5 of the table give the incremental capital for each business (in money terms and as percentage contributions); the sum of incremental capital is only 59.6% of EC. Finally, the last two columns give the marginal contributions (in money and percentage terms), which add up to the total EC. Note in particular that the stand-alone percentage contributions for each business differ meaningfully from the marginal contributions. While the largest business (business 1) contributes one half of the stand-alone capital, it represents almost two-thirds of the EC on a marginal basis. This reflects the fact that, as the biggest unit, increasing its share of the portfolio increases the overall risk the most at the margin. To diversify risk in an optimal way, one would rather increase the share of the smaller units.
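The numbers in Table III.0.1 can be reproduced with a few lines of code under the example’s assumptions (independent, normally distributed losses), where total EC is the square root of the sum of squared stand-alone figures and, for that rule, the marginal contribution of unit i reduces to ECi²/EC. The script below is a sketch of that calculation.

```python
from math import sqrt

stand_alone = [50.0, 30.0, 20.0]   # stand-alone EC per business line ($m)

def total_ec(ecs):
    """Total EC for independent, normally distributed business-line losses."""
    return sqrt(sum(e * e for e in ecs))

ec = total_ec(stand_alone)                                        # 61.64

# Incremental EC: total EC minus the EC of the firm without that business
incremental = [ec - total_ec(stand_alone[:i] + stand_alone[i + 1:])
               for i in range(len(stand_alone))]                  # 25.59, 7.79, 3.33

# Marginal EC: x_i * dEC/dx_i, which equals EC_i^2 / EC for this square-root rule
marginal = [e * e / ec for e in stand_alone]                      # 40.56, 14.60, 6.49

print(round(ec, 2), [round(v, 2) for v in incremental], [round(v, 2) for v in marginal])
```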

III.0.4.2.4 Alternative Methods for Additive Contributions13


Recently there has been discussion of several alternative methods arising from game theory,
which allocate the diversification benefits across the portfolio and yield additive risk
contributions (see Denault, 2001; Koyluoglu and Stoker, 2002). Game-theoretic tools are
commonly applied to problems involving the attribution of cost among a group and, hence, can
potentially offer a useful framework for identifying ‘fair’ EC attributions. An example of these
tools is the Shapley method, which describes how coalitions can be formed so that a group of
units benefits more as a group than if each works separately. In this approach, the EC assigned to
a unit becomes a cost, and each unit attempts to minimise its cost. A player, of course, leaves the
coalition if it is attributed a larger share of EC than its own stand-alone EC. This method is
computationally intensive, and may be impractical for problems with even a small number of
business units. A variant called the Aumann–Shapley method further allows for ‘fractional’ units
and requires less computation; thus, it is potentially more practical. Under most (but not all)
conditions, both these methods may yield similar results to marginal contributions. While these
methods are today receiving some academic attention, they are mostly not yet used in practice by
financial institutions.

III.0.5 RAROC and Risk-Adjusted Performance


We describe the objectives of risk-adjusted performance measurement, the role of capital
allocation, and the basic principles of risk-adjusted return on capital.

III.0.5.1 Objectives of RAPM


Banks traditionally measured their performance relative to their balance sheet assets, either simply
with respect to overall asset size, or by bringing in the notion of profitability, with respect to

13 This section is added for completeness and is not mandatory.


returns on assets (ROA). There are a number of issues that make these approaches far from ideal,
of which two are quite fundamental.

The first issue is that by focusing solely on assets, the performance impact of financial leverage is
ignored as it pertains to managing risk and return for shareholders. In addition to the leverage
effect, today many banks have off-balance sheet exposures that are ignored or, at least, not well
captured by the assets as represented on a typical balance sheet.

The second fundamental issue is that a simple ROA measure does not distinguish between
different classes of assets with varying levels of risk. Recall that balance sheet assets are typically
book value based and not market value based.

Early attempts to address these issues focused on shifting to performance measures that are
defined relative to capital rather than to assets. This approach addresses the first fundamental
issue as return on equity (ROE) captures the impact of financial leverage as well as, in theory, the
impact of non-balance sheet assets. Both EC models and regulatory capital models attempt to
address the second issue by focusing on market valuation directly, as in the case of economic
models, or by adjusting book-value measures, as in the case of regulatory models.

The objective of a risk-adjusted performance measure is to define a consistent metric that spans
all asset and risk classes, thereby providing an ‘apples to apples’ benchmark for evaluating the
performance of alternative business opportunities. RAPMs thus become an ideal tool for capital
allocation purposes.

RAPMs come in many different forms, but they can all be loosely defined as a return on capital
whereby the measurement of asset riskiness is a key component of the derivation of the formula.
In most of the more sophisticated applications of RAPM, asset riskiness is modelled explicitly
with respect to a distribution of default-adjusted returns directly, or with respect to a distribution
of default losses that is then netted against nominal returns on assets. From these distributions,
expected, worst-case, and x% confidence interval default losses (or default-adjusted returns) can
be defined and applied to either the return measures or the underlying capital measure.

Two broad classes of RAPM measures include risk-adjusted return on capital (RAROC) and return on
risk-adjusted capital (RORAC). The former applies the risk adjustment to the numerator while the
latter applies the risk adjustment to the denominator. Often the distinction between these
approaches becomes blurred, with risk adjustments occurring in both the numerator and the
denominator, prompting the increasing usage of the term risk-adjusted return on risk-adjusted capital (RARORAC). Nonetheless, common to all these approaches is the principle of incorporating the joint default likelihood of a bank’s obligors explicitly into a bank’s RAPM.

III.0.5.2 Mechanics of RAROC


All RAROC models follow the simplified general formula
RAROC = (revenues – costs – expected losses) / capital. (III.0.17)

Revenues include all the nominal returns on assets, while costs include all returns to the liabilities
holders of the bank. Note that, in practice, all other sources of revenue, including service fees,
and all other sources of costs, including general overhead, would normally be incorporated into
the RAROC measure. Expected losses would be determined by a risk assessment of the asset
base, capturing losses arising from all sources, including credit risk, market risk and operational
risk.

The example in the introduction considered a bank whose only activities were the taking in of
deposits and the extending of credit. In that case the above equation can be rewritten as,
RAROC = (A0 rA – D0 rD – expected losses) / EC0. (III.0.18)

While a RAPM measure like RAROC is certainly a better indicator of a firm’s overall
performance relative to other firms than a more traditional ROA or ROE approach, the true
benefit for an individual firm is that it provides a consistent metric to evaluate the performance
of the firm’s portfolio of assets, regardless of their unique levels of risk. Taken one step further, it
also provides a benchmark for making allocation decisions, while appropriating scarce capital
amongst possible new investment opportunities.

Example III.0.3: A Simple Model for RAROC


We return to Example III.0.1, with a firm with liabilities of D0 =$92 million in deposits at a cost
of debt of rD = 5%, which have been invested in A0 = $100 million of assets (40% at a nominal
return of 6.75% and 60% at a nominal return of 7%). The weighted average nominal return
across the $100 million in total assets is rA = 6.9% (a compounded spread of 1.81%). The
current capital for this firm was calculated as E0 = $8 million.

The firm’s balance sheet can be conceptually decomposed into two balance sheets, one associated
with each asset class. The amount of debt each asset class can support is determined by the
amount of EC that must be allocated to that asset class. Assuming the total worst-case loss in
value associated with asset class 1 is 18%, then the economic capital, EC1,0, associated with asset
class 1, A1,0, can be determined in a similar manner to the EC for the entire firm:


EC1,0 = A1,0{1 – (1 + r1,A)(1 – l)/(1 + rD)}
= 60{1 – 1.07(1 – 0.18)/1.05} = 9.86.

Likewise, assuming the total worst-case loss in value associated with asset class 2 is 10.5%, the EC, EC2,0, associated with asset class 2, A2,0, can be determined from equation (III.0.3) as
EC2,0 = A2,0{1 – (1 + r2,A)(1 – l)/(1 + rD)}
= 40{1 – 1.0675(1 – 0.105)/1.05} = 3.60.

For illustrative purposes, the current balance sheet can therefore be decomposed on the basis of
asset class as shown in Table III.0.2.

Table III.0.2: Current balance sheet


Asset Class 1 Asset Class 2 Total
Assets 60 40 100
Debt 50.14 36.40 86.54
EC 9.86 3.60 13.46

Table III.0.3 illustrates the expected balance sheet for each asset class at year end; thus, the RAROC, or return on EC, for each asset can be measured by the change in the equity position over the year (RAROC is calculated as EC1/EC0 – 1).

Table III.0.3: Expected balance sheet


Asset Class 1 Asset Class 2 Total
Assets 63.2 42.16 105.36
Debt 52.64 38.22 90.86
EC 10.56 3.94 14.5
RAROC 7.01% 9.5% 7.68%

Of course, the change in equity must reconcile with the more familiar relationship,
EC1 – EC0 = (A1 – D1) – (A0 – D0 )
= A0 rA – D0 rD – expected losses.

In this example, while asset 1 has the higher nominal return, its RAROC is in fact slightly lower
than that of asset 2 as proportionately more EC must be allocated to it to compensate for its
higher risk. In other words, the excess nominal return of asset 1 over asset 2 is not quite enough to compensate for its increased risk and therefore, on a risk-adjusted basis, asset 1 is a less desirable investment.

Note that in this simple example the sum of the EC of each asset class equals the EC of the firm
as a whole. This implies that no diversification exists between the two asset classes because the
risk of each asset class on a stand-alone basis is equal to each asset class’s contribution to the
overall risk of the firm.
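The EC figures and RAROC calculations of this example can be reproduced as in the sketch below; the year-end EC values are taken from Table III.0.3, and small differences from the rounded percentages in the table are to be expected.

```python
def asset_class_ec(assets, nominal_return, worst_case_loss, cost_of_debt):
    """EC for one asset class, following the worst-case-value logic of the example:
    EC = A * (1 - (1 + r_A)(1 - l) / (1 + r_D))."""
    return assets * (1 - (1 + nominal_return) * (1 - worst_case_loss) / (1 + cost_of_debt))

rd = 0.05
ec1_0 = asset_class_ec(60.0, 0.07, 0.18, rd)      # ~9.86
ec2_0 = asset_class_ec(40.0, 0.0675, 0.105, rd)   # ~3.60

def raroc(ec_start, ec_end):
    """RAROC measured as the growth in allocated EC over the year: EC1/EC0 - 1."""
    return ec_end / ec_start - 1

# Year-end EC figures taken from the expected balance sheet in Table III.0.3
print(round(ec1_0, 2), round(ec2_0, 2))
print(round(raroc(ec1_0, 10.56), 4), round(raroc(ec2_0, 3.94), 4))
```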

III.0.5.3 RAROC and Capital Allocation Methodologies


When RAROC is used to measure the performance of an asset class or business, or to allocate capital, it is important to highlight that the denominator measures the capital contribution of the asset or business to the overall portfolio. Hence, it is directly linked to the capital allocation methodology chosen by the institution (see previous section). Thus, for example, if a firm uses
stand-alone risk contributions, the measure of performance will not account for the
diversification opportunities that a given asset or business brings to the overall portfolio. In
general, it is beneficial to use allocation methods that account for diversification, such as the
marginal risk contributions.

Example III.0.4:
In the simple example above, the capital contribution of each asset was measured as the capital
each asset consumes in the scenario that produces the ‘extreme’ 1% loss. Both asset classes incur
a 12% loss in this scenario. This is actually consistent with the marginal risk allocation
methodology.14 Furthermore, in this simple example, the marginal contributions coincide with
the stand-alone contributions given the discrete nature of the problem and the high correlation of
the asset classes implied by the scenarios.

III.0.6 Summary and Conclusions


The primary role of capital in a firm, apart from the transfer of ownership, is to act as a buffer to
absorb large unexpected losses, protect depositors and other claim holders, and give external
investors and rating agencies enough confidence in the financial health and viability of the firm.

We distinguish between three different types of capital: the actual, physical capital that a firm
holds (book capital); the economic capital associated with a targeted level of solvency and

14 One can show that for quantile-based measures such as VaR, the derivative in equation (III.0.16), which leads to the
marginal capital allocated to a given asset class (or sub-portfolio), is given by the expected losses of that asset
conditional on the total portfolio losses being equal to VaR – that is, the expected losses corresponding to all scenarios
which lead to the given VaR (see Gouriéroux et al., 2000).


assessed through the use of internal models; and, for banks, the minimum capital imposed by
regulatory authorities (regulatory capital). In practice, many firms hold book capital in excess of
the required economic and regulatory capital. This reflects some historical and practical business
considerations and a more conservative view on the applicability of the models.

Economic capital is a powerful business management tool, since it provides a consistent metric
for risk aggregation, performance measurement, and asset and business allocation. The objective
of a risk-adjusted performance measure is to define a consistent metric that spans all asset and
risk classes, thereby providing an ‘apples to apples’ benchmark for evaluating the performance of
alternative business opportunities. Economic capital management generally requires a bottom-up approach to estimating EC, in order to support a practical, risk-sensitive capital allocation methodology.

Regulatory capital and economic capital have differed substantially in the past, particularly for credit risk (moreover, regulatory capital under Basel I did not cover operational risk at all).
Basel II Accord for banking regulation has introduced a closer alignment of regulatory capital
with economic capital and current best-practice risk management by introducing operational risk
capital and allowing the use of internal models for both credit risk and operational risk. This
results in minimum capital requirements that are more risk-sensitive. Finally, with its three-pillar
foundation, Basel II focuses not only on the computation of regulatory capital, but also on a
holistic approach to managing risk at the enterprise level.

References
Artzner, P, Delbaen, F, Eber, J-M, and Heath, D (1999) Coherent measures of risk, Mathematical
Finance, 9(3), pp. 203–228.

Basel Committee on Banking Supervision (1988) International convergence of capital measurement and capital standards. Available at http://www.bis.org

Basel Committee on Banking Supervision (1995) Basel capital accord: treatment of potential
exposure for off-balance-sheet items. Available at http://www.bis.org

Basel Committee on Banking Supervision (1996) Overview of the amendment to the capital
accord to incorporate market risk. Available at http://www.bis.org

Basel Committee on Banking Supervision (2003a) The new Basel capital accord: Consultative
document. Available at http://www.bis.org

Basel Committee on Banking Supervision (2003b) Trends in risk integration and aggregation.
Working paper, available at http://www.bis.org

Basel Committee on Banking Supervision (2004) International convergence of capital measurement and capital standards: A revised framework. Available at http://www.bis.org


Denault, M (2001) Coherent allocation of risk capital, Journal of Risk, 4(1), pp. 1–34.

Gouriéroux, C, Laurent, J-P, and Scaillet, O (2000) Sensitivity analysis of values at risk, Journal of
Empirical Finance, 7(3–4), pp. 225–245.

Kalkbrener, M, Lotter, H, and Overbeck, L (2004) Sensible and efficient capital allocation for
credit portfolios, Risk, pp. S19–S24.

Koyluoglu, H, and Stoker, J (2002) Honour your contribution, Risk, April, pp. 90–94.

Kupiec, P (2002) Calibrating your intuition: Capital allocation for market and credit risk.
Working paper 99/02, IMF, available at http://www.gloriamundi.org/picsresources/pkcyi.pdf

Matten, C (2000) Managing Bank Capital. Chichester: Wiley.

Mausser, H, and Rosen, D (2004) Scenario-based risk management tools. In S W Wallace and W
T Ziemba (eds), Applications of Stochastic Programming. Philadelphia: SIAM.

Merton, R C, and Perold, A F (1993) Theory of risk capital in financial firms, Journal of Applied
Corporate Finance, 6(Fall), pp. 16–32.

Perold, A F (2001) Capital allocation in financial firms. Harvard Business School Working Paper
98-072, available at http://papers.ssrn.com/paper.taf?abstract_id=267282

Praschnik, J, Hayt, G, and Principato, A (2001) Calculating the contribution, Risk, 14(10), pp.
S25–S27.

Saita, F (2003) Measuring risk-adjusted performances for credit risk. Working Paper 89/03,
March, available at http://www.sdabocconi.it/it/ricerca/pubblicazioni/dir2003.html

Smithson, C (2003) Economic capital - how much do you really need?, Risk, November, pp. 60–
63.

Tasche, D (2000) Conditional expectation as quantile derivative. Working paper, Technische Universität München.

Tasche, D (2002) Expected shortfall and beyond. Working paper, Technische Universität
München.
