(1)

CRM validation

Jérôme BRUN, Head of model validation (market risk)

June 2011

(2)

 3 amendments to the trading book, to be implemented in 2011

Incremental Risk Charge (IRC) & Comprehensive Risks Measure (CRM): capture P&L impact of rating migration and default on credit excluding securitisation

Standardised approach on securitisation perimeter (inspired by banking book)

Stressed VaR: value-at-risk on stressed scenarios

 VaR, SVaR, IRC and CRM are all fractiles of P&L distributions. As such, they require:

 A perimeter of deals

 Scenarios for the risk factors

 Repricing of the deals in the perimeter in each scenario

 Computation of the fractile of the P&L distribution just built
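The four-step pipeline above reduces, once the repricing is done, to a quantile of the scenario P&Ls. A minimal sketch (the normal P&L sample is purely illustrative):

```python
import numpy as np

def fractile_measure(pnl_by_scenario, confidence):
    """Loss fractile of a P&L distribution built from scenario repricings.

    pnl_by_scenario: one P&L per scenario, summed over the deal perimeter.
    Returns the loss at the given confidence level, as a positive number.
    """
    return -np.quantile(pnl_by_scenario, 1.0 - confidence)

# Illustrative: 100,000 standard-normal portfolio P&Ls.
rng = np.random.default_rng(0)
pnl = rng.normal(0.0, 1.0, 100_000)
var_99 = fractile_measure(pnl, 0.99)   # close to 2.33 for a standard normal
```

The same function serves VaR (99%), SVaR (99%) and IRC/CRM (99.9%); only the scenario generator and the perimeter differ.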

 Agenda

 IRC vs. CRM

 Focus on the standardized approach used to compute the CRM floor

 Validation of CRM

(3)

Today:

 Trades not generating specific risk  VaR x [3+]

 Credit and equity trades  VaR x [3+] (including "general risk" and "specific risk") + "specific risk surcharge" = VaR x [1+]

Target (Q4 2011):

 Trades not generating specific risk  VaR x [3+] + Stressed VaR x [3+]

 Credit and equity trades  VaR x [3+] (including "general risk" and "specific risk") + Stressed VaR x [3+], plus:

• Correlation trading portfolio (CDO + CDS)  CRM

• Other credit (bonds and CDSs on corporate and sovereign issuers)  IRC

• "Securitisation" (= all tranches except correl)  Standardised approach (RBA, SFA, deduction)

(4)

IRC vs. CRM – general principles

P&L impact of rating migration and default

 1-year capital horizon

 99.9% confidence level

 3-month minimum liquidity horizon

• Positions can be rebalanced at each liquidity horizon, under the "constant level of risk" assumption

Perimeter

 CRM  Correlation trading portfolio:

• CDOs on liquid corporate portfolios – not CDO² and LSS

• CDSs used to hedge the CDOs

 IRC  Other credit (excluding securitisation)

Risks covered

 IRC & CRM: migration and default

 CRM: specific price risks (spread, recovery and correlation volatility)

 CRM impact subject to a floor: 8% of the standardised approach

 IRC inconsistent with VaR: different horizon (1 year vs. 10 days), fractile (99.9% vs. 99%) and multiplier

(5)

IRC vs. CRM – implementation

 Start with simulations of correlated migrations on all issuers in book

 Large number of scenarios required (99.9% = 1 scenario in 1,000)  need up to 1 million simulations!

 In each scenario, get migration for each issuer

• Scenario #1: Peugeot: AAA, Renault: AA, Ford: B

• Scenario #2: Peugeot: AA, Renault: AAA, Ford: BB

• Etc.

 For IRC, in each scenario:

 All issuers are subject to either of:

• Downgrade / Upgrade  can be translated into a spread move (∆S), assuming a fixed spread per rating class (e.g. AAA = 20 bps, AA = 40 bps, A = 90 bps, BBB = 140 bps, etc.)

• Default

 Compute P&L impact of resulting market moves on the IRC perimeter

• Need to reprice the full perimeter 1 million times

 CRM specificities (for CDOs):

 All market moves (incl. volatility of rating-class spreads, of recovery, and of credit correlation) are added to the spread moves resulting from migrations

 These market moves must account for correlation between “credit” and market
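As a toy illustration of the correlated-migration step described above, here is a one-factor Gaussian copula sketch. The ratings, spreads, transition probabilities, correlation ρ, duration and recovery below are all made-up assumptions, not the presenter's model:

```python
import numpy as np
from statistics import NormalDist

ND = NormalDist()
RATINGS = ["AAA", "AA", "A", "BBB", "D"]          # "D" = default
SPREADS = {"AAA": 0.0020, "AA": 0.0040, "A": 0.0090, "BBB": 0.0140}

# Illustrative 1-year transition probabilities (each row sums to 1).
TRANS = {
    "AAA": [0.90, 0.08, 0.015, 0.004, 0.001],
    "A":   [0.01, 0.05, 0.880, 0.050, 0.010],
}

def thresholds(rating):
    # Buckets on the latent N(0,1) variable, from cumulative probabilities.
    cum = np.cumsum(TRANS[rating])[:-1]
    return np.array([ND.inv_cdf(c) for c in cum])

def simulate_migrations(issuers, n_scenarios, rho=0.3, seed=0):
    """One-factor Gaussian copula: X_i = sqrt(rho) Z + sqrt(1-rho) eps_i."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_scenarios)          # systemic factor
    out = {}
    for name, rating in issuers.items():
        eps = rng.standard_normal(n_scenarios)    # idiosyncratic noise
        x = np.sqrt(rho) * z + np.sqrt(1.0 - rho) * eps
        out[name] = [RATINGS[i] for i in np.searchsorted(thresholds(rating), x)]
    return out

def migration_pnl(old, new, notional, duration=4.0, recovery=0.4):
    """Crude P&L: default loss, else spread move x duration x notional."""
    if new == "D":
        return -(1.0 - recovery) * notional
    return -(SPREADS[new] - SPREADS[old]) * duration * notional

# E.g. an AAA issuer over 100,000 scenarios; marginal default rate ~0.1%.
scen = simulate_migrations({"Peugeot": "AAA"}, 100_000)
```

Repricing the full perimeter per scenario and taking the 99.9% fractile of the summed P&Ls then yields the IRC; for CRM, the additional market moves would be layered on top of the migration-implied spread moves.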

(6)

IRC & CRM – implementation

 IRC/CRM market scenarios

(7)

Standardised approach for the CRM floor

 For all (re-)securitisations, capital charge = 100% of Standardised approach

 Transitory measure (till Q4 2013)  capital = Max(capital on longs; capital on shorts)

 CRM floor also based on standardised approach - but a fraction of it !

 capital = 8% * Max(capital on longs; capital on shorts)

 Capital charge = Exposure x Risk Weight

 Choice of the risk-weight method:

• Rated securitisation  RBA

• Unrated securitisation  SFA; without transparency  deduction

• Single-name CDS  standard approach

(8)

Standardised approach – Focus on the components

 Exposure: not always equal to the maximum MtM loss on the asset

 Cf. guidelines QIS 4.1 (differ from those of QIS4)

 MtM for a note/bond

 For a swap/CDS format, “MTM of associated bond”

 Ntl + CDS MTM for CDS long risk

 Ntl - CDS MTM for CDS short risk

 Exposure ≠ maximum loss for a swap short risk!

 Risk-weight of a tranche

 Essentially driven by the external rating, when available (RBA = Ratings-Based Approach)

• Around 1% when AAA, but rises to 100% when < BB-

 Otherwise, use the Supervisory Formula Approach (SFA), based on ratings within the underlying portfolio

• Usually SFA uses only internal ratings – a more liberal use could be possible (blending internal/external ratings)

 Not much arbitrage between RBA and SFA (when data is available to compute both)

 For rated single-name exposures (contributing to CRM floor), risk-weight similar to tranches

 The Max loss is computed from the worst case scenario

 Worst case ?

• When long risk  all credits default with 0 recovery

• When short risk  all credits become risk-free

 As a consequence:

• For a bond or a CDS long risk, the Exposure is already equal to the Max loss  cap has no impact

• For a CDS short risk, Max loss = MtM + PV of risk-free spread payments (~ MtM when contract spread is small)  the cap may be useful for large risk-weights (in particular for deductions)
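The exposure and cap rules above fit in a small helper. The conventions follow the QIS 4.1 description on this slide; the function names and string labels are illustrative:

```python
def exposure(fmt, direction, notional, mtm):
    """Exposure per the conventions above: MtM for a note/bond,
    Ntl + CDS MtM for a CDS long risk, Ntl - CDS MtM for a CDS short risk."""
    if fmt == "bond":
        return mtm
    if direction == "long_risk":
        return notional + mtm
    return notional - mtm

def capital_charge(fmt, direction, notional, mtm, risk_weight, max_loss):
    """Capital charge = Exposure x Risk Weight, capped at the maximum loss."""
    return min(exposure(fmt, direction, notional, mtm) * risk_weight, max_loss)
```

For a bond or a CDS long risk the exposure already equals the max loss, so the cap is inactive; it bites for CDS short risk with large risk weights, as the slide notes.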

(9)

Standardised approach – Netting

 Limited netting : for a long/short, charge is driven by M = Max (Charge(long), Charge(short) )

1. Identical exposures  0% * M

2. format mismatch (bond vs. CDS)  20% * M

3. maturity and/or currency mismatch  100% * M

 In all other cases, the charge of the long/short is the sum: Charge(long) + Charge(short)

 Open questions

 coupon mismatch ?

 credit index = sum of its components ?

 Sum of tranches = portfolio ?

 Floor computation

1. Compute individual charges for all assets

2. Do the netting as above for all long/shorts in the portfolio; the resulting capital is called the netted capital.

3. Add up the capital for all (non-netted) long assets and all netted capital to get L = total long charge

4. Add up the capital for all (non-netted) short assets and all netted capital to get S = total short charge

5. Floor = 8% * Max(L; S)
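A minimal sketch of the netting and floor computation just described; the mismatch labels and the data layout are assumptions made for the example:

```python
# Netting factors applied to M = Max(Charge(long), Charge(short)).
NETTING = {"identical": 0.0,             # 1. identical exposures
           "format": 0.2,                # 2. bond vs. CDS format mismatch
           "maturity_or_currency": 1.0}  # 3. maturity and/or currency mismatch

def pair_charge(charge_long, charge_short, mismatch=None):
    """Charge of a long/short pair; plain sum when no netting applies."""
    if mismatch in NETTING:
        return NETTING[mismatch] * max(charge_long, charge_short)
    return charge_long + charge_short

def crm_floor(long_charges, short_charges, pairs, fraction=0.08):
    """pairs: one (charge_long, charge_short, mismatch) per long/short.

    Netted capital counts towards both the total long and total short
    charges, per steps 3 and 4 above; the floor is a fraction of the max."""
    netted = sum(pair_charge(cl, cs, mm) for cl, cs, mm in pairs)
    total_long = sum(long_charges) + netted    # L
    total_short = sum(short_charges) + netted  # S
    return fraction * max(total_long, total_short)
```

For instance, a single bond/CDS pair with charges 41.6 and 20 and a format mismatch gives netted capital 20% x 41.6 = 8.32, and a floor of 8% x 8.32.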

(10)

Standardised approach – Example of a NBT

 NBT (negative basis trade) of CDOs

 bond referencing some CDO tranche

 CDS buying protection on the same tranche

 Common notional = 100

 MtM assumptions: Bond = 80%, CDS = 20%

 Exposures vs. Max Loss

 Bond  Exposure = Max Loss = 80

 CDS  Exposure = 80, Max Loss = 20 (neglect PV of spread payments)

 Tranche rating = BB-  risk-weight = 52%

 Resulting charges:

 Bond charge = 80 * 52% (capped @ 80) = 41.6

 CDS charge = 80 * 52% (capped @ 20) = 20  cap is hit

 Max charge = bond charge = 41.6
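The arithmetic of this example can be checked mechanically (all numbers come from the slide):

```python
notional = 100.0
bond_mtm, cds_mtm = 80.0, 20.0       # MtM assumptions
risk_weight = 0.52                   # BB- tranche

bond_exposure = bond_mtm             # exposure = max loss for the bond
cds_exposure = notional - cds_mtm    # CDS short risk: Ntl - CDS MtM = 80
cds_max_loss = cds_mtm               # neglecting PV of spread payments

bond_charge = min(bond_exposure * risk_weight, bond_mtm)    # cap not hit
cds_charge = min(cds_exposure * risk_weight, cds_max_loss)  # cap is hit
nbt_charge = max(bond_charge, cds_charge)                   # charge of the NBT
```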

(11)

Summary of measures based on fractile

 VaR  trading book; market risks; historical or Monte-Carlo model (shocks over the past year); capital horizon: 10 days; fractile: 99%; multiplier: 5; pro-cyclical: yes

 SVaR  trading book; market risks; historical or Monte-Carlo model (shocks over the worst year); capital horizon: 10 days; fractile: 99%; multiplier: 5; pro-cyclical: no

 IRC  vanilla credit book; rating migration & default; correlated migrations; capital horizon: 1 year; fractile: 99.9%; multiplier: 1; pro-cyclical: no

(12)

Validation – an example of governance

 Risk department does modelling + implementation

 With FO support (trading, quants, IT…)

 Initial modelling choices are validated by expert committees

 quants

 experts assessing compliance with regulatory expectations

 Audits

 Internal audit

 Regulator audit

 Permanent governance

 Regular calibrations (Q/W/S/Y)

(13)

Validation – the challenge

 Rumour has it that CRM Internal Model < CRM floor for most large correlation dealers

 In this case the validation only allows charging 8% of the standardised approach rather than 100%

 … i.e. get a validation of the internal model only to have the right to override it by the std approach

 Is the floor computation part of the validation ?

 Standardised approach applied to CRM perimeter requires clarifications…

 Netting rules are designed for traditional securitization

 ABS hedged by CDS on that very ABS – no ambiguity

 Ford bond with maturity 5 Oct 2013 hedged by standard CDS with maturity 20 Sep 2013 ?

 Series of CDSs on Ford with different maturities, coupons, and currencies ?

 Delta hedge of CDO tranche with single-name CDSs ?

 Use of SFA methodology traditionally relies on internal ratings

 Makes sense on the banking book, as all credits are also clients and are therefore internally rated

 On the trading book the credits traded are not necessarily clients of the bank

 A strict application of SFA penalises names with only external ratings

 … but there might be a tolerance, among others to be consistent with computation of CRM IM

(14)

Validation – the perimeter

 Liquidity test

 Basel : All reference entities in the underlying portfolio should be liquid

 … but the US draft only requires “All or substantially all”

 Liquid = a 2-way CDS market exists and a trade can be executed within the day

 In practice, very few names are illiquid

 Strict application of Basel would exclude many trades  100% of the standardised approach for the excluded trade, while its hedge remains as an open position in CRM

 The test allows excluding CLOs, where most reference entities are illiquid

 Exclusion of LSS, options on tranches and CDO²

 How about their hedge ?

• The hedge of these excluded products is also excluded from the CRM perimeter

• In practice, these products are risk-managed along with “vanilla” CDOs (mismatch between the regulatory “correlation trading portfolio” and the real-life “correlation desk”)

• In these conditions, how to identify the hedge of the excluded products ?

• Keeping the hedge in the CRM perimeter creates an open position, and thus a CRM explosion!

 Options on tranches do not exist

• What about the callability features, e.g. callable CDO ?

 Should we exclude a transaction based on its booking or based on its confirmation, when these differ?

• Conservative bookings may be aggressive from a capital viewpoint

(15)

Validation – the scenarios

 Scenarios combine migrations + defaults (as in IRC) and market moves (as in VaR)

 Consistency with IRC migration/default engine: nice to have, or mandatory ?

 Spread moves result from the joint effect of migrations & market moves specific to CRM

 Cf. before: may use cohort spreads as tool to ensure consistent approach

 Given the fractile (99.9%) and horizon (1y), cannot use historical scenarios

 Would require thousands of years of history

 Therefore need to specify dynamics on the market parameters to generate scenarios

 Choice of dynamics for a given price risk

 Historical dynamics (as opposed to risk-neutral used when pricing)

 Balance between realism, robustness, stability and conservatism

• Examples: spread drift, spread vol

 When a parameter is not observable on the market

• Parameter calibration is done at expert level

(16)

Validation – the price risks to keep

 Credit – the usual suspects

 Credit spreads: observable, standard dynamics (lognormal)

 Credit correlations: daily historical series for the base correlation. Non-trivial dynamics, since correlation is bounded

 Recovery rates: distinguish between:

• Pricing recoveries : not observable. Noise around current “market” recovery ?

• Default recoveries : observed on the recent IG defaults. Split between soft and hard defaults  U-shaped distribution

 Credit – the different bases

 Bond vs. CDS : irrelevant

 Index vs. CDS : observable, but unusual dynamics (strong mean-reversion with jumps)

 Bespoke vs. index = to be accounted for when pricing a bespoke tranche

 Non-credit

 FX, rates: not explicitly required by the regulator, and very liquid, so the CRM horizon is irrelevant.

 Should hedge cost be included, since we claim to benefit fully from dynamic hedging ?

 Huge correlation matrix

 Not all correlations matter

 Credit/market correlation ?

• Actually this is the correlation between the factors driving the migrations/defaults and those driving the market moves

• Correlations among credits are often estimated from equity returns, which is fine
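To make the dynamics on this slide concrete, here is one way to simulate the three credit risk factors with the stated qualitative features (lognormal spreads, bounded base correlation, U-shaped recovery). All numerical parameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Credit spreads: lognormal dynamics (the standard choice), drift omitted.
s0, sigma = 0.0140, 0.6
spreads = s0 * np.exp(sigma * rng.standard_normal(n))

# Base correlation: shock in logit space so scenarios stay in (0, 1).
rho0 = 0.35
logit = np.log(rho0 / (1.0 - rho0)) + 0.5 * rng.standard_normal(n)
correlations = 1.0 / (1.0 + np.exp(-logit))

# Default recovery: Beta(0.5, 0.5) piles mass near 0 and 1, mimicking the
# soft/hard default split (U-shaped distribution).
recoveries = rng.beta(0.5, 0.5, n)
```

The logit transform is one simple way to keep correlation scenarios inside their bounds; the Beta parameters control how pronounced the U-shape is.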

(17)

Validation – the valuation

 Computational burden

 More than 100,000 simulations

 1000 tranches

 1s to 5s per tranche with usual pricers

 1000 CPUs

 More than a week of computation time…

 How to reduce computational burden ?

 Optimized pricers  focus on CDS curve bootstrapping and computation of bespoke correlation

 Taylor expansion  Cross gammas…

 Pre-computed prices  large multi-dimensional arrays + an interpolation methodology

 Challenge is to demonstrate that pricing is accurate enough

 Actually it is not so much the pricing in itself but more the difference of prices (P&L)

 We want a fast pricing… that remains accurate enough in stressed conditions (99.9% scenarios)

 What is a right level of accuracy ?

• Well-defined when pricing for accounting reasons (as close as possible to “fair value”)

• Here the impact of pricing inaccuracy should be compared to the uncertainty around the definition of a “99.9% scenario”

 What about CDSs ?
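The pre-computed-prices idea can be sketched in one dimension; the "slow" pricer below is a toy function standing in for a real tranche pricer, and the grid bounds are arbitrary:

```python
import numpy as np

def slow_price(spread):
    """Toy stand-in for an expensive tranche pricer (smooth, convex)."""
    return np.exp(-5.0 * spread) * (1.0 - 0.5 * spread)

# Pre-compute once on a spread grid (0 to 1,000 bp in 20 bp steps)...
grid = np.linspace(0.0, 0.10, 51)
grid_prices = slow_price(grid)

def fast_price(spread):
    """...then price each scenario by linear interpolation on the grid."""
    return np.interp(spread, grid, grid_prices)

# Accuracy check over random scenario spreads, including stressed levels.
rng = np.random.default_rng(2)
scenario_spreads = rng.uniform(0.0, 0.10, 10_000)
max_err = np.max(np.abs(fast_price(scenario_spreads)
                        - slow_price(scenario_spreads)))
```

In practice the arrays are multi-dimensional (spread, correlation, recovery, ...), and the accuracy target should be judged against the uncertainty around a "99.9% scenario" rather than against fair-value standards, as the slide argues.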

(18)

Validation – the 99.9% fractile

 Number of scenarios required

 1,000,000 scenarios require 10 times more CPU than 100,000 scenarios  why bother, if the number is overridden anyway by the floor?

 Only ensure that CRM IM + large confidence interval remains < CRM floor

 Computation of the fractile

 Contributions of different desks require good convergence  vital for IRC, but CRM involves essentially one desk

 Stability is increased by computing the contribution as an average over scenarios around the 99.9% scenario

 Example: 1,000,000 scenarios, ranked by increasing P&L. CRM is by definition the P&L on the 1000th scenario. Look for the unique X > 1000 such that the CRM is also the average of the P&Ls on the first X scenarios
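A sketch of that stabilised fractile: with the P&Ls sorted increasingly, the running mean of the worst X scenarios increases with X and crosses the 1000th-scenario P&L; the code takes the first X at the crossing (a discrete version of the "unique X" on the slide):

```python
import numpy as np

def crm_fractile(pnl, level=0.001):
    """Return (crm, X): crm is the P&L at the `level` fractile of the ranked
    scenarios; X is the smallest count of worst scenarios whose average
    reaches crm."""
    ranked = np.sort(pnl)                     # increasing P&L: worst first
    k = max(1, int(level * len(ranked)))      # e.g. the 1000th of 1,000,000
    crm = ranked[k - 1]
    running_mean = np.cumsum(ranked) / np.arange(1, len(ranked) + 1)
    X = int(np.searchsorted(running_mean, crm)) + 1
    return crm, X

# Illustrative run on 100,000 normal P&Ls: crm near the 0.1% quantile (-3.09).
rng = np.random.default_rng(3)
crm, X = crm_fractile(rng.standard_normal(100_000))
```

Averaging over the worst X scenarios (an expected-shortfall-like quantity) is less sensitive to the noise of a single order statistic, which is the stability argument made above.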

 Backtesting

 At less severe fractiles and/or shorter horizons ?

 Or just make sure the few observed realisations are < CRM ?

 Stress testing

 Of the CRM perimeter

• Pre-defined stress tests: apply all observed market moves over stressed periods defined by the regulator

• Ad hoc stress tests

(19)

Next steps

 Fundamental review of the trading book

 Joint work between regulators and industry

 Long-term

 In the interim, will have to cope with CRM floor

 make it your friend

 E.g. same ratings and LGD for CRM IM and SFA in CRM floor

 Benchmarking was an anti-floor weapon

 Floor was supposed to give extra comfort with a "simple" methodology

 Benchmarking of CRM on stylised portfolios was an alternative suggestion from the industry

 Even though the floor is here to stay, benchmarking can still help get validation

 As long as floor remains, no incentive to optimize CRM model – this would increase validation risk for a constant level of capital

• Constant level of risk

• Dynamic hedging
