Credit risk analysis is complex because data is always scarce and never fully representative of the future. Therefore, no single data set suffices, and multiple sources of information (ranging from internal data to expert input) are needed.
A dedicated statistical framework is required to combine these sources coherently, and an accomplished analyst will need to account for the uncertainty of the model inputs.
At OSIS, we have created a framework based on Bayesian statistics, which is in essence more conservative than any traditional approach. However, if many data sources of good quality are used, it does not necessarily lead to more conservative outcomes. Furthermore, to make our results comparable, we have used the same theoretical distributions as vendor models like CreditMetrics and the Basel formula; even so, “the devil remains in the data.”
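The Bayesian logic of combining expert input with scarce internal data can be sketched with a simple conjugate update. This is a minimal illustration, not OSIS's actual model: the Beta prior parameters and the default counts below are hypothetical.

```python
from scipy.stats import beta

# Expert prior on the default rate: mean 1% with substantial uncertainty.
# Beta(2, 198) has mean 2/200 = 1%; the parameter choice is illustrative.
prior_a, prior_b = 2.0, 198.0

# Internal data (hypothetical): 12 defaults observed in 2,000 loan-years.
defaults, exposures = 12, 2_000

# Bayesian update: the Beta prior is conjugate to the binomial likelihood,
# so the posterior is Beta(prior_a + defaults, prior_b + non-defaults).
post_a = prior_a + defaults
post_b = prior_b + exposures - defaults

mean_pd = post_a / (post_a + post_b)
# A conservative estimate: the 95th percentile of the posterior.
pd_95 = beta.ppf(0.95, post_a, post_b)
print(f"posterior mean PD: {mean_pd:.4f}, 95% upper bound: {pd_95:.4f}")
```

With little data the posterior stays close to the (conservative) prior; as more good-quality observations arrive, the data dominate and the estimate need not be conservative at all, which mirrors the point made above.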
We distinguish several ways credit risk is measured in the financial industry; they seem very different but are in essence very similar:
- RegCap or ECap approach: a loss based on accounting principles (Basel III) or spread movements (Solvency II), measured over a one-year horizon, irrespective of the point in the cycle, at a high confidence level of 99.5% or more.
- IFRS 9 loan loss provision: a cumulative loss over the lifetime of a loan at a 50% confidence level.
- Stress testing: losses projected several years into the future, conditional on certain macro scenarios, at a 50% confidence level.
- Securitization: a cumulative loss over a multi-year horizon, conditional on a macro scenario (OSIS) or unconditional (CRA), at confidence levels that depend on the tranche level and tranche maturity.
Therefore, the statistical approaches behind these measurements could (and should) use the same building blocks, leading to consistent and comparable outputs.
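The common-building-blocks idea can be sketched in code: each measure above is a quantile of the same simulated loss distribution, read at a different horizon and confidence level. The lognormal annual-loss generator below is a hypothetical stand-in for a real credit loss engine.

```python
import numpy as np

rng = np.random.default_rng(0)

# One engine: simulate annual portfolio losses over a 5-year horizon.
# The lognormal generator is illustrative, not a calibrated credit model.
n_scenarios, horizon = 100_000, 5
annual = rng.lognormal(mean=-6.0, sigma=1.0, size=(n_scenarios, horizon))
cumulative = annual.cumsum(axis=1)

# Different measurements = different quantiles of the same distribution:
capital_1y_995 = np.quantile(annual[:, 0], 0.995)      # capital-style: 1 year, 99.5%
provision_life = np.quantile(cumulative[:, -1], 0.50)  # IFRS 9-style: lifetime, median
print(f"1y 99.5% loss: {capital_1y_995:.4f}, lifetime median loss: {provision_life:.4f}")
```

A stress-test figure would come from the same `cumulative` array after conditioning the scenario set on a macro path; only the conditioning and the quantile change, not the engine.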
Our model framework is built around a single model engine, with a dedicated module for each measurement. Furthermore, the engine is automatically updated once new loan-level reports become available. The framework can therefore be used to compare the various approaches of supervisors and check their consistency.
As a use case, we calibrated a residential mortgage stress-test model on loan-level data from Saecure, a label of RMBS transactions from Aegon in the Netherlands. To the model we added long time series based on NHG guarantee payouts and on house price and unemployment developments in the Netherlands.
We compared the results of our calibrated model with a stress-test model from DNB. Using DNB's stress scenarios, our standard model produced very similar results to the DNB model, although the starting level of losses in the Saecure portfolio was substantially lower.
We then included unemployment and house prices as endogenous variables in the model and ran over 2 million scenarios; the results were again very similar. This can be seen as a validation of the OSIS model and suggests that using the same model for different measurements should lead to results acceptable to that same supervisor. We then re-used the OSIS model and computed losses at a one-year horizon at a 99.9% confidence level in order to compare the results with regulatory capital calculations.
At the 99.9% confidence level the model measured losses of 0.37%. For the same risk, Aegon Bank N.V. has to set aside 4.90% of capital (a 35% risk weight times 14% tier 1 capital). This is much higher than the model's outcome and is not consistent with DNB's stress-test exercise. Once standardized data become available in large volumes, it will be easier to perform such exercises, supporting a better understanding of credit risk and, consequently, a better-functioning financial market.
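The gap between the model outcome and the regulatory figure follows directly from the numbers quoted above; a few lines make the arithmetic explicit.

```python
# Figures from the text: 35% risk weight, 14% tier 1 capital ratio,
# and a model loss of 0.37% at the 99.9% confidence level.
risk_weight, tier1_ratio = 0.35, 0.14
regulatory_capital = risk_weight * tier1_ratio  # fraction of exposure
model_loss_999 = 0.0037

print(f"regulatory capital: {regulatory_capital:.2%}")  # 4.90%
print(f"multiple of model loss: {regulatory_capital / model_loss_999:.1f}x")
```

The regulatory charge is thus more than an order of magnitude above the model's 99.9% loss, which is the inconsistency the text points to.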
Please contact us for the full, unedited, graphical version of this report.