Residential Mortgage Modeling

By | Research

Credit risk analysis is complex because data is always scarce and never fully representative of the future. No single data set will therefore suffice; multiple sources of information, ranging from internal data to expert input, are needed.

A dedicated statistical framework is required to combine these sources coherently, and an accomplished analyst will need to account for the uncertainty of the model inputs.

At OSIS, we have created a framework based on Bayesian statistics, which is in essence more conservative than traditional approaches; if many data sources of good quality are used, however, it does not necessarily lead to more conservative outcomes. Furthermore, we use the same theoretical distributions as vendor models such as CreditMetrics and the Basel formula to make our results comparable, although “the devil remains in the data.”

We distinguish several ways credit risk is measured in the financial industry, which appear very different but are in essence very similar:

  • RegCap or ECap approach: a loss based on accounting principles (Basel 3) or spread movements (Solvency II), measured over a one-year horizon, irrespective of the point in the cycle, at a high confidence level of 99.5% or more.
  • IFRS9 loan loss provision: a cumulative loss over the lifetime of a loan at a 50% confidence level.
  • Stress testing: losses projected several years into the future, conditional on certain macro scenarios, at a 50% confidence level.
  • Securitization: a cumulative loss at a multi-year horizon, conditional on a macro scenario (OSIS) or unconditional (CRA), at different confidence levels depending on the tranche level and tranche maturity.
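The point that these measures are readings of the same underlying loss distribution can be sketched numerically. The snippet below simulates a multi-year portfolio loss distribution from a simple one-factor (Vasicek-type) model and reads three of the measures above off the same simulation; all parameters (PD, LGD, correlation, horizon) are hypothetical, not taken from any OSIS calibration.

```python
import numpy as np
from scipy.stats import norm

# Illustrative sketch only: hypothetical pool parameters, not calibrated values.
rng = np.random.default_rng(0)
pd_, lgd, rho, horizon = 0.02, 0.25, 0.15, 5   # annual PD, LGD, asset corr., years
n_sims = 100_000

z = rng.standard_normal((n_sims, horizon))      # systematic factor per year
# Conditional default rate given the factor (large-pool approximation)
cond_pd = norm.cdf((norm.ppf(pd_) - np.sqrt(rho) * z) / np.sqrt(1 - rho))
annual_loss = lgd * cond_pd
lifetime_loss = annual_loss.sum(axis=1)

# Three of the measures above, read off the same distribution:
reg_cap = np.quantile(annual_loss[:, 0], 0.995)            # 1y loss, 99.5% (RegCap/ECap)
ifrs9_style = np.median(lifetime_loss)                     # lifetime loss, 50% (IFRS9)
stressed = annual_loss[z[:, 0] < norm.ppf(0.1), 0].mean()  # 1y loss given adverse factor
```

The securitization measure would follow the same way, applying a tranche payoff to `lifetime_loss` before taking quantiles.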


Therefore, the statistical approaches behind these measurements could (and should) use the same building blocks, leading to consistent and comparable outputs.

Our model framework is based on a single model engine with different modules for each measurement. The engine is automatically updated once new loan level reports become available, so the framework can be used to compare the various approaches of supervisors and check their consistency.

As a use case, we calibrated a residential mortgage stress test model on loan level data from Saecure, a label of RMBS transactions from Aegon in the Netherlands. We added long time series based on NHG guarantee payouts and on house price and unemployment evolutions in the Netherlands.

We compared the results of our calibrated model with a stress test model from DNB. Under DNB's stress scenarios, our standard model showed very similar results to the DNB model, although the starting level of losses in the Saecure portfolio was substantially lower.

We then included unemployment and house prices as endogenous variables in the model and ran over 2 million scenarios; the results were again very similar. This can be seen as a validation of the OSIS model and suggests that using the same model for different measurements should lead to results acceptable to the same supervisor. We then re-used the OSIS model to calculate losses at a one-year horizon at a 99.9% confidence level, in order to compare the results with regulatory capital calculations.

At the 99.9% confidence level, the model measured losses of 0.37%. For the same risk, Aegon Bank N.V. has to set aside 4.90% of capital (a 35% risk weight times 14% Tier 1 capital). This is much higher than the model indicates and is not consistent with the DNB stress test exercise. Once more standardized data become available in large volumes, it will be easier to perform these exercises, supporting a better understanding of credit risk and consequently a better functioning financial market.

Please contact us for the full, unedited, graphical version of this report.


Dynamic analysis of SME securitization tranches

By | Research


We investigate the credit risk of tranches linked to the performance of large pools of loans to small and medium-sized companies, aiming to identify the key risk drivers and to separate first order from second order effects. Long, consistent histories of loan level data are not available in the public domain, and individual banks' histories often barely cover more than one or two economic cycles. Hence, most SME portfolio models have required expert input.

In the absence of representative data, model parameters are highly uncertain, an aspect that may not always be appreciated by practitioners. Including model and parameter uncertainty generally increases risk. On the other hand, available SME performance data often imply model parameters, such as the asset correlation or the expected loss given default, significantly below industry assumptions as expressed, for instance, in the Basel bank capital framework. It is therefore not a priori clear that our data driven approach featuring parameter uncertainty will necessarily be more conservative than common industry benchmarks, and there have been arguments that SME loans are unduly penalized by Basel or credit rating agencies.

We pursue a hybrid approach in which we aim to extract as much information from available loan level data as possible while supplementing missing information through longer term aggregated data and expert guidance. Expert assumptions are subjected to a sensitivity analysis, and multi-year ABS tranche loss distributions capturing forecast and parameter uncertainty are calculated and conditioned on macroeconomic variables as required for stress testing.
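The claim that parameter uncertainty generally increases risk can be sketched numerically. Below, a 99.9% portfolio loss quantile under a fixed asset correlation is compared with the quantile obtained when the correlation is drawn from a prior with the same mean; all numbers are hypothetical, purely to illustrate the mechanism.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
pd_, lgd, n_sims = 0.03, 0.40, 200_000  # hypothetical SME pool parameters

def sim_losses(rho):
    # Large-pool (Vasicek) loss fraction conditional on the systematic factor.
    z = rng.standard_normal(n_sims)
    cond_pd = norm.cdf((norm.ppf(pd_) - np.sqrt(rho) * z) / np.sqrt(1 - rho))
    return lgd * cond_pd

losses_fixed = sim_losses(np.full(n_sims, 0.10))            # rho known exactly
losses_uncertain = sim_losses(rng.beta(2.0, 18.0, n_sims))  # rho ~ Beta prior, mean 0.10

q_fixed = np.quantile(losses_fixed, 0.999)
q_uncertain = np.quantile(losses_uncertain, 0.999)
```

Because the tail quantile is convex in the correlation, mixing over an uncertain correlation with the same mean fattens the loss tail relative to the fixed-parameter case.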

To get the full unabridged version, please contact us at

Top down stress testing of US residential mortgages

By | Research


We design and analyze a model for aggregate performance measures of US residential mortgages based on publicly available information. The objective is to implement a top down stress testing framework at the macro level that can be compared with bottom up stress testing approaches that model loan performance at the loan level. We consider a class of simple auto-regressive time series models similar to those used by US banking supervisors under the Capital and Loss Assessment under Stress Scenarios (CLASS, Hirtle 2014).


First, we identify suitable aggregate performance measures of mortgage credit risk together with a set of macroeconomic variables. All data are publicly available for automatic download. Second, we implement a set of functions that explore and visualize the data including tests for stationarity and cointegration. Third, we address the difficult problem of creating parsimonious and robust models for which practitioners can develop their own intuition. While the selection of a small number (1-3) of macro factors is often based on expert judgment, here we aim to implement data driven selection methods that can help experts in variable selection and model design. Fourth, we backtest the suite of models by calibrating against pre-crisis data and explore the stability of model selection and model parameters. Fifth, we highlight the suitability of Bayesian statistical methods to better capture model and forecast uncertainty.

Literature Overview

The use of time series models for top down stress testing of bank performance data has received much attention from central banks and supervisors investigating macroprudential stability. Recent contributions in the US include Hirtle (2014) and Kapinos (2014) and the further references therein. The footnote of Kapinos (2014) cites a number of country studies conducted by central bank staff across Europe. Some central banks have published explicit models for residential mortgages or secured household loans, each, however, selecting a different set of macroeconomic variables (e.g. the RAMSI model of the Bank of England; Burrows et al. 2012). For US data and the modeling of net charge-offs of residential mortgages, Hirtle (2014) considers the annual change in house prices as the sole explanatory variable, uses commercial property price changes for commercial real estate loans, and models all other loan asset classes with the annual change in unemployment. Other researchers have used more than one macro variable. For instance, Bermingham (2011) models Irish mortgage performance using household net worth, unemployment and the mortgage debt service payment, whereas the RAMSI model of the Bank of England uses income gearing, undrawn equity and the unemployment rate to model the probability of default for secured loans to households (Burrows et al. 2012). Other macro variables used to model aggregate mortgage default are the loan-to-value ratio of first-time buyers (Whitley 2005, in the UK), short term interest rates, GDP and the mortgage interest rate spread over a long term government bond yield (Alves 2012, in Portugal), and the growth of mortgage lending (Blanco 2012, in Spain). A pre-crisis analysis of aggregate US mortgage delinquencies by the IMF (GFSR 2008-2) used a residential property price index and a measure of lending conditions. We conclude from this brief overview that there is little consensus on how to design a top down stress testing model for residential mortgages.

Different stress tests focus on different economic scenarios, warranting different model specifications, and the availability of suitable macro variables differs by country. For example, the loan-to-value ratio for first-time buyers is a common measure of credit availability in the UK but is not available in many other European countries. From a statistical perspective, it is unlikely that the different model specifications suggested in the stress testing literature would perform equally well. In the following, we aim to establish a framework for model selection and backtesting that quantifies the statistical performance of different models, helps with model selection and validates models based on out-of-sample predictive performance over a multi-year horizon.

November 2014

To get the full version of this document, please contact us at

Top Down Robustness Check of the 2014 EBA Stress Test

By | Research

Complementary Online Stress Test Analytics

OSIS provides credit data validation, visualization and modeling tools and services. An interactive analysis of the 2014 EU-wide stress test results from the EBA (EBA ST) can be found on our website. This article supplements the freely available online tool for analyzing the EBA stress test data. A number of interactive visualizations are available to compare the stress test results across banks, countries and exposure classes.


In this paper we motivate and explain our online analysis of the 2014 EBA ST and show how banks and investors can use the published data to better understand bank balance sheets, with a particular focus on credit risk. The results of the EBA ST provide unprecedented insight into the sensitivity of banks' credit risk to macroeconomic changes. With up to 12,000 data points per institution, it is the most detailed stress test disclosure to date, offering unique insight into bank balance sheets, which are generally considered too opaque for a detailed credit risk analysis from the outside.

The EBA ST was a bottom up exercise allowing the 123 participating banks to use their own models to calculate projected losses and risk weighted assets (RWA). We document the significant variations between banks with regard to the risk parameters and sensitivities used to arrive at the published development of their capital ratios under stress. Such variability has been known for banks' own determination of RWA under the Basel internal ratings based approach, and our analysis shows that the large variability in RWA persists. While the quality assurance documentation from the ECB and EBA emphasizes consistency between institutions, our analysis suggests that large differences remain and that investors are well advised to conduct their own top down analysis with consistent assumptions across institutions.

We estimate the point-in-time probability of default (PD PIT) and regress it against the given macro scenarios, resulting in a library of tens of thousands of credit risk models by bank, country and exposure class. We draw an analogy between the asset correlation in the Basel framework for RWA and the macro sensitivities used in the stress test. In Basel, the asset correlation is given by supervisors, and the same correlations are applied across all countries worldwide.
The prescribed correlation values are generally considered conservative, to capture some of the variability between countries and model uncertainty more generally. In the EBA ST, the role of the asset correlation is played by the macro sensitivities, which are equally hard to determine accurately from historical data, requiring long time series that are often lacking. The macroeconomic scenarios of the EBA ST vary country by country, and we find that the country-averaged sensitivities used by the banks vary by more than one order of magnitude.

Finally, we use the estimated models to recalculate the Core Tier 1 ratios based on more homogeneous assumptions, while still accounting for each bank's individual starting point. Our results suggest that some large institutions would have failed the stress test had they not used more aggressive assumptions than the average of their peers by country and exposure class. If one believes that banks were on average accurate in their impairment projections, then the total capital shortfall in the banking system would only differ moderately under more homogeneous top down assumptions. If, however, the macro sensitivities are set at a required level of conservatism (e.g. using parameters at the 75% or 90% credibility level), then materially larger capital shortfalls would have been revealed.
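The PD PIT estimation step described above can be sketched in a few lines. All figures below are made up for illustration: implied point-in-time PDs are backed out of projected impairment flows under an assumed LGD, and their logit is regressed against the scenario's macro path, yielding a macro sensitivity per bank, country and exposure class.

```python
import numpy as np

# Hypothetical adverse-scenario projections for one bank/country/exposure class.
impairment_flow = np.array([0.30, 0.55, 0.80])  # bn EUR per scenario year (made up)
exposure, lgd = 50.0, 0.40                      # bn EUR exposure; assumed LGD

pd_pit = impairment_flow / (exposure * lgd)     # implied point-in-time default rate
gdp_growth = np.array([-0.5, -1.5, -2.0])       # scenario GDP growth path (%)

# Regress the logit of PD PIT on the macro path; the slope is the sensitivity.
logit = np.log(pd_pit / (1 - pd_pit))
slope, intercept = np.polyfit(gdp_growth, logit, 1)
```

A negative slope means worse GDP growth implies a higher PD, and comparing `slope` across institutions is what reveals the order-of-magnitude dispersion discussed above.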

Introduction: The 2014 EU-wide stress test

The joint 2014 stress test by EBA and the ECB forms part of the Comprehensive Assessment (CA) which includes the Asset Quality Review (AQR) and provides an unprecedented level of disclosure on the quality and vulnerability of the balance sheets of the largest banks in Europe. A major objective of the stress testing exercises is the fostering of market discipline, i.e. allowing market participants to conduct their own analysis of the banks’ balance sheets as we attempt in this work. Our contribution aims to help banks and investors to conduct a like-for-like comparison of the published results. Our approach is based on public data, but is otherwise similar to quality assurance procedures conducted by the supervisory authorities which, however, have remained unpublished to date. We conduct a partial robustness test with regards to credit risk and provide tools for a coherent top down analysis of loan impairments, which for a commercial bank is by far the largest individual source of capital erosion under stress.

November 2014

To get the full unabridged version, please contact us at

The Learning Basel Formula: decreasing uncertainty by consistently using bank performance data

By | Research

In the debate on regulatory capital, many people talk about capital increases; few talk about facts. In this article we look at a large set of historical default data and show how regulators and market participants can learn from it.

We update the assumptions in the Basel formula, and the PD estimates of banks, with historical data from PECDC in a Bayesian framework.
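The updating step can be sketched as a conjugate Beta-Binomial calculation (all numbers below are hypothetical, and the actual framework is richer than this): a bank's own PD estimate enters as a Beta prior and is updated with pooled historical default counts, such as those available from a consortium data set like PECDC.

```python
from scipy.stats import beta

# Encode the bank's PD estimate as a Beta prior; prior_strength sets how many
# pseudo-observations the estimate is worth (both numbers are illustrative).
prior_pd, prior_strength = 0.010, 200
a0 = prior_pd * prior_strength          # prior "defaults"
b0 = (1 - prior_pd) * prior_strength    # prior "survivals"

# Pooled historical data (made up): 35 defaults among 5,000 obligor-years.
defaults, obligors = 35, 5000
a1, b1 = a0 + defaults, b0 + obligors - defaults  # conjugate Beta posterior

posterior = beta(a1, b1)
post_mean = posterior.mean()
lo, hi = posterior.ppf(0.05), posterior.ppf(0.95)
```

The posterior mean blends the bank estimate and the data in proportion to their weights, and the credible interval makes the remaining parameter uncertainty explicit, which is exactly what a "learning" Basel formula would feed on.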

The results are surprising and will hopefully shift the debate towards a better understanding of bank credit risk: a more sustainable way of preventing the next banking crisis than simply increasing capital levels.


5 steps to get to a good understanding of a bank's credit risk

By | General, News, Research

There are large quality differences between banks' Pillar 3 reports, and many publishing banks ignore important CRD rules. To help bank analysts interpret Pillar 3 reports and ask for additional need-to-know information, we have defined 5 steps to analyse credit risk in this white paper. We briefly describe the concept of risk, economic capital, the requirements of Basel and the CRD, and what is, in our view, the most important information to get your hands on.