Credit Models: Basel & Solvency II
Credit risk analysis is complex because data is always scarce and never fully representative of the future. Therefore, no single data set will suffice, and multiple sources of information (ranging from internal data to expert input) are needed.
A dedicated statistical framework is required to combine these multiple sources coherently, and an accomplished analyst will need to account for uncertainty in the model inputs.
One consistent framework for many applications
At OSIS, we have created a framework based on Bayesian statistics, which is in essence more conservative than traditional approaches. However, if many data sources of good quality are used, it does not necessarily lead to more conservative outcomes (see our use case in which we updated the Basel 2 model with the default frequencies of 14 banks).
In our modelling approach we distinguish several ways credit risk is measured in the financial industry, which seem very different but are in essence very similar:
- RegCap or Ecap approach: losses based on accounting principles (Basel 3) or spread movements (Solvency II), measured over a one-year horizon irrespective of the point in the cycle, at a confidence level of 99.5% or higher.
- IFRS 9 loan loss provision: a cumulative loss over the lifetime of a loan at a 50% confidence level.
- Stress testing: losses projected several years into the future, conditional on certain macro scenarios, at a 50% confidence level.
- Securitization: a cumulative loss over a multi-year horizon, conditional on a macro scenario (OSIS) or unconditional (CRA), at different confidence levels depending on the tranche level and tranche maturity.
Therefore, the statistical approaches behind these measurements should use the same building blocks in order to produce consistent and comparable outputs.
In credit risk, there is no single historical data set that will completely calibrate a credit model. Historical data are scarce and seldom fully representative of what the future holds. It is therefore important, on the one hand, that different data sets can be combined in a coherent framework, and on the other hand that the ultimate model results in an intuitive tool that can be understood (even leaving the maths aside) and in which the intuition of the user can be organised into an auditable process.
At OSIS, we have chosen the Bayesian statistical approach that mathematically describes the learning process. It starts off with a hypothesis (prior estimate), based on external data and/or expert opinion, which is then modified when presented with internal data.
There are essentially three steps involved: (1) determining a prior estimate of a model parameter, in the form of a probability distribution, based on external data and/or expert knowledge; (2) specifying an appropriate likelihood function for the observed (internal) data; and (3) calculating the posterior (the final parameter distribution for the model) by multiplying (1) and (2) and normalising so that the result is a true distribution (the area under the curve is 1).
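The three steps can be sketched numerically. Below is a minimal, hypothetical example of estimating a default probability (PD) on a simple grid; the Beta prior, the 3-defaults-out-of-100 sample and all other numbers are illustrative and are not OSIS calibrations.

```python
# Hypothetical numbers throughout -- not an OSIS calibration.
# Candidate PD values on a grid over (0, 1).
grid = [i / 1000 for i in range(1, 1000)]

# Step 1: the prior. A Beta(2, 48) shape (mean 4%) stands in for
# external data and/or expert opinion.
prior = [p ** (2 - 1) * (1 - p) ** (48 - 1) for p in grid]

# Step 2: the likelihood of the observed internal data,
# say 3 defaults out of 100 loans (binomial likelihood in p).
defaults, loans = 3, 100
likelihood = [p ** defaults * (1 - p) ** (loans - defaults) for p in grid]

# Step 3: posterior = prior x likelihood, normalised so it sums to 1.
unnormalised = [pr * li for pr, li in zip(prior, likelihood)]
total = sum(unnormalised)
posterior = [u / total for u in unnormalised]

# The posterior mean PD falls between the prior mean (4%) and the
# observed default rate (3%).
posterior_mean = sum(p * w for p, w in zip(grid, posterior))
print(f"posterior mean PD: {posterior_mean:.4f}")
```

Because the Beta prior is conjugate to the binomial likelihood, the exact posterior here is Beta(2 + 3, 48 + 97) with mean 5/150, roughly 3.3%, and the grid calculation reproduces this.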
The ability of the Bayesian statistical approach to add different data sets in a coherent fashion is not the only strength:
- It accounts for parameter uncertainty: less data, or conflicting prior versus internal data, automatically means more conservative outcomes. In other words, the validation outcome of the model is reflected in the model output;
- Machine learning, or automatic updating: once new information becomes available, the model is automatically recalibrated, keeping it up to date while reducing operational risk and costs.
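Both strengths can be illustrated with a small conjugate Beta-Binomial sketch (hypothetical numbers, standard library only): with little data the posterior is wide, so a high-confidence PD bound is conservative; each new reporting cycle simply adds that year's counts to the posterior, and the bound tightens automatically.

```python
import math

def beta_quantile(a, b, q, n=20000):
    """Approximate the q-quantile of a Beta(a, b) distribution on a grid."""
    xs = [(i + 0.5) / n for i in range(n)]
    log_pdf = [(a - 1) * math.log(x) + (b - 1) * math.log(1 - x) for x in xs]
    m = max(log_pdf)
    weights = [math.exp(l - m) for l in log_pdf]
    total = sum(weights)
    acc = 0.0
    for x, w in zip(xs, weights):
        acc += w / total
        if acc >= q:
            return x
    return xs[-1]

# Hypothetical prior: Beta(1, 49), mean 2%, standing in for expert opinion.
a, b = 1.0, 49.0
bounds = []
# Say the institution observes 2 defaults out of 100 loans each year.
for year, (defaults, loans) in enumerate([(2, 100)] * 3, 1):
    # Automatic recalibration: each reporting cycle just adds the counts.
    a += defaults
    b += loans - defaults
    hi = beta_quantile(a, b, 0.995)   # conservative 99.5% PD bound
    bounds.append(hi)
    print(f"year {year}: posterior mean {a / (a + b):.4f}, "
          f"99.5% bound {hi:.4f}")
```

The observed default rate matches the prior mean here, so the posterior mean stays at 2% while the 99.5% bound shrinks year after year: less uncertainty, less conservatism, with no manual recalibration step.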
LoanPilot™ has a modular set-up and can therefore be used for many applications: at single-name or portfolio level, over a one-year or multi-year horizon, and with or without macro factors.
This guarantees consistency amongst the different outputs, is more cost-efficient and leads to lower operational risk and model risk.
Selected Technical Features
- LoanPilot™ follows a Bayesian statistical approach and treats all model parameters as uncertain. During calibration, the user can either “let the data talk” with minimal subjective input or use informed ‘priors’, adding information based on external data sources and/or expert opinion.
- LoanPilot™ provides a point-in-time (PIT) risk assessment in which long-term through-the-cycle (TTC) parameters can be specified as input.
- The key outputs are credit risk performance measures (e.g. PD, LGD, EAD, IFRS 9 loan loss provisions and economic capital) over a one-year or multi-year horizon, conditional upon macroeconomic scenarios.
- The LoanPilot™ models are self-learning. This means that the models are automatically updated along with the reporting cycle of the institution. This keeps the model up to date (one year earlier than common practice), decreases operational risk and saves costs.
- LoanPilot™ is very user-friendly: inputs and outputs are visualised, making the model intuitive to use. Calculation time is kept to a minimum, and the model can be calibrated on the fly, i.e. during a model committee meeting, where results can be discussed immediately.
- The user has access to all model details and, importantly, all the data sets used for calibration and additional expert judgment (if applied).
- LoanPilot™ can be used together with market data for IFRS fair valuation and for spread risk under Solvency II, and in combination with vendor cash-flow models, e.g. for purposes such as the valuation of ABS notes.
- The model suite is available online and accessible through desktop, laptop or tablet computers.
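As a closing illustration of the PIT versus TTC distinction mentioned in the feature list above, the standard one-factor (Vasicek) conversion maps a long-run TTC PD into a PIT PD given the state of the economy. This is a generic textbook sketch, not LoanPilot™'s internal method, and the asset correlation value is an assumption chosen for illustration.

```python
import math
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def pit_pd(ttc_pd, z, rho=0.15):
    """Vasicek one-factor conversion of a through-the-cycle PD into a
    point-in-time PD, conditional on the systematic factor z
    (z < 0 means a downturn).  rho = 0.15 is an illustrative asset
    correlation, not a LoanPilot parameter."""
    num = N.inv_cdf(ttc_pd) - math.sqrt(rho) * z
    return N.cdf(num / math.sqrt(1 - rho))

# A 2% TTC PD maps to a much higher PIT PD in a severe downturn (z = -2)
# and a much lower one in a boom (z = +2).
print(f"downturn: {pit_pd(0.02, -2):.4f}, boom: {pit_pd(0.02, 2):.4f}")
```

Averaged over the economic cycle, the PIT PDs reproduce the TTC input, which is exactly the consistency between one-year conditional measures and long-run parameters that the framework requires.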