Hello, and welcome to my blog series! I have always wanted to share my thoughts and insights on credit risk in the banking industry. As a junior author and data scientist/quant, I may not have the same level of experience as some of my peers, but I'm eager to learn and to share my perspective with others. I have a deep passion for statistics (and machine learning), quantitative analytics, the regulatory side of banking, and their application to credit modelling.

In this series, I'll explore various topics related to credit risk and statistics, including the different models and techniques used by banks, current trends and challenges in the industry, and the impact of regulations and standards such as Basel II, III, IV and IFRS 9 (definitions to follow). I should mention that I'll focus mostly on the banking perspectives of Europe (European Banking Authority, EBA / European Central Bank, ECB) and Africa (South African Reserve Bank, SARB), as that is where I have gained my experience. Nevertheless, the techniques and concepts discussed reflect insights and challenges found throughout the world. Join me as I dive into the complex and interesting world of credit risk management in banking.

Recently, Machine Learning (ML), which is essentially statistics on steroids, has become a hot and debated topic in the credit risk space. European regulators have been actively exploring the use of ML in banking and finance. In November 2021, the EBA published a discussion paper titled "Machine Learning for IRB Models", which provided an overview of (and a questionnaire on) the different applications of ML in Internal Ratings-Based (IRB) models (more on this later). The Bank for International Settlements (BIS) has also been exploring various topics in ML, along with the ECB and the SARB.

These reports highlighted the potential benefits of using ML techniques, such as improved forecasting accuracy, early-warning signals for financial stability risks, and more efficient fraud detection. However, they also noted some of the challenges of using ML in central banking, such as data quality issues, model interpretability, and potential biases in the data. These statistical outputs (and the data feeding into the models) are now more important than ever.

Overall, the ECB recognizes the potential benefits of using ML in banking and finance, but also emphasizes the need to carefully consider the risks and challenges associated with these techniques. I'll be exploring general statistical and ML ideas for credit modelling and validation, with code and explanation, covering both the advantages and the pitfalls I see in the industry.

What I aim to achieve in these posts is to share knowledge with new data scientists/credit quants entering the workplace, and perhaps with people in other industries who are interested in these applications. Credit risk is a fascinating topic with a mix of quantitative and qualitative ideas. It is an area where statistics, expert opinion, and business objectives converge, and it receives relatively little attention at universities (unlike market risk, with its repetitive evaluation of options and derivations of the Black-Scholes formula).

On the quantitative side, credit risk management (on the banking book) involves statistical modelling, risk assessment, and financial analysis. Financial institutions use various tools to measure and manage credit risk, such as credit ratings, credit scoring, and Probability of Default (PD) models. These models provide a quantitative basis for assessing the likelihood of default and the potential losses associated with credit risk. They are used specifically in acceptance, pricing, Expected Loss (EL), stress testing, and Regulatory Capital (RC) calculations. We will get to each of these in the coming posts (as well as the model lifecycle of each: development, validation and maintenance).

On the qualitative side, credit risk management involves factors such as industry trends, management quality, and macroeconomic conditions. Credit quants consider these qualitative factors when assessing the creditworthiness of a borrower or a portfolio of loans. They also monitor market trends, regulatory changes, and emerging risks to anticipate potential credit risks.

The combination of quantitative and qualitative analysis makes credit risk management a challenging and dynamic field. Banks and financial institutions must constantly update their credit risk models and adapt to changing market conditions to manage their credit risk effectively.

Overall, credit risk is a fascinating topic with both quantitative and qualitative aspects that requires continuous evaluation and adaptation to changing market conditions. In-depth knowledge of statistics and risk management (typically at a Master's level) is needed as a starting point. Moreover, it has become much more important to be able to code the large calculations mentioned earlier effectively. Familiarity with two or more programming languages (such as Python, R, SAS, SQL, or C) can be a great asset for a young quant entering the job market from university.

In this series, I'll point out the route I took and which university and online courses I'd recommend for young quants to get started with these programming languages and to acquire the statistical basics needed for credit modelling with both traditional and ML models.

The BASEL Accords are a set of international regulations for the banking industry, developed by the Basel Committee on Banking Supervision (BCBS), which is made up of central banks and regulatory authorities from around the world. Their importance lies in their ability to promote financial stability by ensuring that banks hold sufficient capital to withstand economic downturns and financial crises. Major banks across the globe follow these accords, which serve as the fundamental basis for most credit models in banks.

There are four main BASEL accords:

1. BASEL I:

This accord was issued in 1988. The BCBS established a set of regulations to ensure that banks maintain sufficient capital to meet their financial commitments and withstand financial and economic pressures. (The three-pillar structure outlined below was only formalized later, under BASEL II, but it is convenient to introduce the pillars here.)

Pillar 1 (Minimum Capital Requirements): Banks must maintain a minimum amount of capital, based on a percentage of their Risk-Weighted Assets (RWAs), to absorb Unexpected Losses (UL). The higher the risk of a bank's assets, the more capital it must hold. This was the main objective of BASEL I. Banks were required to maintain a minimum Capital Adequacy Ratio (CAR) of 8%, calculated as a percentage of their RWAs. The CAR is the ratio of a bank's capital to its RWAs, where RWAs are calculated by assigning different risk weights to different categories of assets based on their perceived riskiness. The risk weights under BASEL I were divided into five categories, ranging from 0% for government securities to 100% for certain types of loans and other risky assets. The minimum capital requirement of 8% meant that for every €100 of RWAs, a bank had to hold at least €8 of capital.

Regulatory Capital ≥ 8% × Σ (Risk Weight × Asset Value)

The risk-weight categories included:

- 0% risk weight for cash, gold, and OECD (Organization for Economic Co-operation and Development) government bonds.
- 10% risk weight for claims on certain domestic public-sector entities.
- 20% risk weight for claims on OECD banks and certain multilateral development banks.
- 50% risk weight for residential mortgage loans fully secured by the property.
- 100% risk weight for most corporate loans, claims on non-OECD banks with longer maturities, and other private-sector claims.

BASEL I also introduced the concepts of Tier 1 and Tier 2 capital. Tier 1 capital consists of the highest-quality capital a bank can hold, namely equity capital (ordinary shares) and disclosed reserves; Tier 2 capital consists of subordinated debt (debt that ranks below other debts in the event of bankruptcy or liquidation), loan-loss reserves, and certain types of hybrid capital. Banks are now monitored on their CAR (which represents a bank's ability to meet all of its financial obligations, not only to absorb losses) as well as on their Common Equity Tier 1 (CET1) ratio (a measure of a bank's core capital relative to its RWAs, which represents its ability to absorb losses and continue operating without external funding).

CAR = (Tier 1 + Tier 2) / Risk-Weighted Assets

CET1 = Tier 1 / Risk-Weighted Assets
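To make the CAR and CET1 formulas concrete, here is a minimal Python sketch; all balance-sheet figures and capital amounts are invented for illustration:

```python
# Hypothetical mini balance sheet: (asset value in EUR, BASEL I risk weight)
assets = [
    (500_000, 0.0),  # OECD government bonds
    (200_000, 0.2),  # claims on OECD banks
    (300_000, 0.5),  # fully secured residential mortgages
    (400_000, 1.0),  # corporate loans
]

# RWA = sum of (risk weight x asset value)
rwa = sum(value * weight for value, weight in assets)

tier1 = 40_000  # equity capital + disclosed reserves (assumed)
tier2 = 20_000  # subordinated debt, loan-loss reserves (assumed)

car = (tier1 + tier2) / rwa   # Capital Adequacy Ratio
cet1 = tier1 / rwa            # CET1 ratio

print(f"RWA  = {rwa:,.0f} EUR")   # 590,000 EUR
print(f"CAR  = {car:.1%}")        # 10.2%
print(f"CET1 = {cet1:.1%}")       # 6.8%
print("Meets the 8% minimum:", car >= 0.08)
```

Note how the government bonds contribute nothing to RWA while the corporate loans count in full, which is exactly the risk-sensitivity (coarse as it is) that BASEL I introduced.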

Pillar 2 (Supervisory Review): Banks must have a process for assessing their overall risk profile and determining whether their capital levels are adequate. Regulators (such as the ECB or SARB) are responsible for supervising this process and for ensuring that banks hold sufficient capital to cover their risks. All quantitative models built under Pillar 1 must be reviewed by the overseeing supervisors.

Pillar 3 (Market Discipline): Banks must disclose details about their risk profile, capital adequacy, and risk management practices to the public. The idea is that market participants will use this information to make more informed investment decisions and put pressure on banks to manage their risks effectively.

Overall, the BASEL I framework laid the foundation for modern RC requirements (not to be confused with *economic capital*, the best estimate of required capital that financial institutions use internally to manage their own risk and to allocate the cost of maintaining RC among different units within the organization) and helped improve the soundness of the global banking system. However, it was criticized for its simplicity and for not taking into account the varying degrees of risk within different asset classes (and collateral). Operational risk and market risk were also not taken into account. As a result, the Basel Committee continued to refine the framework, leading to the introduction of BASEL II and BASEL III.

2. BASEL II:

This accord was issued in 2004 and introduced a more risk-sensitive approach to capital requirements. BASEL II used a set of formulas to determine the amount of capital banks needed to hold based on the risks they were taking on. The same 8% minimum capital requirement held, but a minimum of 4% of Tier 1 capital was now imposed. It also introduced a new framework for assessing credit risk, operational risk, and market risk. Under BASEL II, banks may use internal models to calculate their capital requirements for credit risk, subject to supervisory approval. Smaller banks were provided with simpler formulas to compute the minimum amount of capital they must hold to safeguard against risk, known as the "Standardized Approach". Larger banks, meanwhile, were offered the option of using the "Advanced Internal Ratings-Based (AIRB)" approach (with the Vasicek distribution as the fundamental description of credit losses and the measurement of credit risk), which enabled them to build their own models to determine their risk capital. This is where PD, Loss Given Default (LGD) and Exposure at Default (EAD) calculations/models became mandatory. Banks use these models to calculate how much they stand to lose when borrowers default (EL = PD × LGD × EAD). An additional approach, the "Foundation IRB (FIRB)" approach, was also introduced, in which the bank uses its own internal ratings to determine risk weights for different types of assets; some of the model parameters are still set by the regulators, however, and it is considered less sophisticated than the AIRB approach. Once PD, LGD, and EAD are known, the risk-weight functions provided in the BASEL accord can be used to calculate the RC. We will discuss these approaches in depth later.
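A minimal sketch of the EL calculation with the three IRB parameters; the figures are invented for illustration, not taken from any real portfolio:

```python
# Expected Loss: EL = PD x LGD x EAD (illustrative parameter values)
pd_ = 0.02       # one-year probability of default (2%)
lgd = 0.45       # loss given default (45% of the exposure is lost)
ead = 1_000_000  # exposure at default, EUR

el = pd_ * lgd * ead
print(f"EL = {el:,.0f} EUR")  # EL = 9,000 EUR
```

This EL is the amount a bank would provision for this exposure; the capital charge, as discussed below, covers losses beyond it.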

Overall, BASEL II was designed to be more flexible and risk-sensitive than its predecessor, BASEL I. However, it has also been criticized for being too complex and for allowing banks to use internal models to manipulate their capital ratios. As a result, the BCBS has continued to refine and update the BASEL framework over time.

3. BASEL III:

This accord was issued in 2010 in response to the global financial crisis of 2008. BASEL III focused on strengthening bank capital requirements and introduced new liquidity and leverage ratios to reduce the risk of bank failures. It also placed a greater emphasis on stress testing.

Some of the key differences between the BASEL accords include the risk weights assigned to different types of assets, the methods used to calculate capital requirements (the minimum Tier 1 capital ratio increased from 4% to 6%), and the emphasis on liquidity and leverage ratios. Moreover, each subsequent accord has built on the previous one to provide a more comprehensive framework for regulating the banking industry.

4. BASEL IV:

This accord further strengthens the regulation, supervision, and risk management practices of the banking industry. The reforms aim to address issues and weaknesses identified in the previous BASEL frameworks, particularly around the calculation of RWAs and the use of internal models.

A number of the key features of BASEL IV include:

- Output floor: the introduction of an output floor for RWAs, which means that banks using internal models are required to hold a minimum level of capital regardless of the risk weightings produced by their models. Banks' capital requirements are floored at a percentage of the standardized requirement, phased in from 50% to 72.5%.
- Credit risk: changes to the calculation of credit risk RWAs, including the removal of certain IRB approaches and the introduction of more granular risk weightings for exposures to small and medium-sized enterprises (SMEs).

The implementation of BASEL IV is ongoing, with jurisdictions expected to adopt the reforms at different times. The reforms are expected to significantly impact the banking industry and to require banks to hold higher levels of capital. They should also continue to improve the comparability of capital levels across banks.

The credit loss distribution is a statistical representation of the potential losses that a financial institution may incur due to credit risk. It is used to estimate the EL and UL of a credit portfolio.

The credit loss distribution is based on the probability distribution of credit losses, which can be modelled using various techniques, such as the normal, Poisson or binomial distribution, depending on the nature of the credit risk.

The EL is the average loss (incurred when obligors fail to pay, or default) expected over a certain time period, based on historical data and probability distributions. It is calculated as the product of PD, LGD and EAD. This is the average/mean loss that must be provisioned for, and banks and corporations set aside provisions for such losses. However, these values interrelate and combine in intricate ways, leading to unexpected losses. Note also that provisions for EL do not protect a bank against bank runs, which occur when a large number of depositors withdraw their money due to concerns about the bank's solvency or liquidity. Bank runs can lead to liquidity shortages and can potentially cause a bank to fail, which is why a bank may need to hold additional capital and liquidity buffers to mitigate this risk.

The UL, on the other hand, is the potential loss that can occur beyond the expected loss. It is based on extreme scenarios and is calculated as the difference between the worst-case loss in the tail of the distribution and the EL; in other words, the difference between the Credit Value at Risk (CVaR) and the EL, i.e. the dispersion of losses around the EL. This is the capital a bank should hold to cover losses that exceed provisions (and it can be regarded as part of economic capital). EL and UL change constantly due to macroeconomic factors and portfolio sizes. This is where the BASEL equations play a crucial role in capital management for banks. According to BASEL II, the credit risk capital charge should aim to cover ULs and account for rare events, specifically at the 99.9th percentile. The RC is used to cover the UL: the sum of the provisions and the RC should equal the 99.9% loss. Banks that do not protect themselves beyond this percentile remain vulnerable to losses exceeding it.
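To make EL and UL concrete, here is a small Monte Carlo sketch of a loss distribution. The portfolio parameters are invented, and defaults are drawn independently here (i.e. without the single systematic factor that the Vasicek model adds later), so the tail is thinner than it would be in a correlated portfolio:

```python
import random
import statistics

random.seed(42)  # reproducible illustration

# Homogeneous toy portfolio: 1,000 loans, PD = 2%, LGD = 45%, EAD = 10,000 EUR
N_LOANS, PD, LGD, EAD = 1_000, 0.02, 0.45, 10_000

def portfolio_loss() -> float:
    """One Monte Carlo draw of the total portfolio loss (independent defaults)."""
    defaults = sum(1 for _ in range(N_LOANS) if random.random() < PD)
    return defaults * LGD * EAD

losses = sorted(portfolio_loss() for _ in range(5_000))

el = statistics.mean(losses)                # expected loss -> provisions
var_999 = losses[int(0.999 * len(losses))]  # 99.9th percentile loss
ul = var_999 - el                           # unexpected loss -> capital

print(f"EL       = {el:,.0f}")
print(f"99.9%ile = {var_999:,.0f}")
print(f"UL       = {ul:,.0f}")
```

The mean of the simulated distribution funds the provisions, while the gap between the 99.9th percentile and that mean is what the capital charge must cover.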

By analyzing the credit loss distribution, banks can determine the capital required to cover potential losses from credit risk, as well as the overall risk of the credit portfolio. It is an important tool for risk management and regulatory compliance, particularly under BASEL II, III and IFRS 9.

## Exposure at Default (EAD)

This refers to a bank's potential exposure to a counterparty in the event of the counterparty's default. It is measured in currency and is estimated for a period of one year or until maturity, whichever comes first. The EAD for loan commitments (the expected outstanding balance if the facility defaults, which equals the expected utilization plus a percentage of the unused commitments, including unpaid fees and accrued interest) is based on the BASEL guidelines and measures the portion of the facility that is likely to be drawn in the event of a default. BASEL II requires banks to produce an estimate of the exposure amount for each transaction in their internal systems. The aim of these estimates is to fully capture the risks associated with an underlying exposure.

Example: a bank extends a €10,000 credit line to a customer. The customer uses €5,000 of the credit line and pays off €2,000, leaving a current outstanding balance of €3,000. The bank estimates that if the customer were to default at this point, the exposure would be the entire outstanding balance of €3,000 (ignoring, for simplicity, any further drawdowns on the unused part of the limit). Therefore, the EAD in this example would be €3,000: the amount the bank would be exposed to if the customer were to default.

Typically, EAD follows a linear formula and is calculated per loan product. There are other ways to estimate EAD (for example through credit conversion factors), and we will discuss these later.
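One common estimation route is to apply a Credit Conversion Factor (CCF) to the undrawn part of the limit; here is a minimal sketch, where the 75% CCF is an assumed figure for illustration, not a prescribed one:

```python
def ead_with_ccf(drawn: float, limit: float, ccf: float) -> float:
    """Linear EAD formula: EAD = drawn balance + CCF x undrawn commitment."""
    undrawn = max(limit - drawn, 0.0)
    return drawn + ccf * undrawn

# Illustrative: EUR 10,000 limit, EUR 3,000 drawn, assumed CCF of 75%
print(ead_with_ccf(drawn=3_000, limit=10_000, ccf=0.75))  # 8250.0
```

Unlike the simple example above (which set EAD equal to the current balance), the CCF version also reserves for the part of the limit the client is likely to draw on the way to default.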

## Loss Given Default (LGD)

The EBA defines LGD as the proportion of an exposure that is not expected to be recovered in the event of default. It is calculated as the difference between the EAD and the amount recovered through collateral or other means, expressed as a percentage of the EAD (loss is defined as the difference between the observed exposure at default and the sum of all the discounted cash flows, where the loss rate is the observed loss as a percentage of the observed EAD).

Example: suppose a borrower takes out a €10,000 loan from a bank. If the borrower defaults on the loan, the bank will attempt to recover the amount owed by selling any collateral (e.g., property) put up against the loan. Suppose the collateral is sold for €8,000. Then the LGD would be:

LGD = (Total loan amount − Amount recovered) / Total loan amount
LGD = (€10,000 − €8,000) / €10,000
LGD = 0.2, or 20%

This means that in the event of default, the bank would lose 20% of the total loan amount, or €2,000 in this case.
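The worked example as a tiny Python helper (in practice the recovered cash flows would be discounted, as noted in the definition above; this sketch skips discounting):

```python
def lgd(ead: float, recovered: float) -> float:
    """Loss rate: the share of the exposure at default that is not recovered."""
    return max(ead - recovered, 0.0) / ead

# The worked example from the text: EUR 10,000 loan, EUR 8,000 recovered
print(lgd(10_000, 8_000))  # 0.2
```

The `max(..., 0.0)` floors the loss at zero for the case where recoveries exceed the exposure.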

Usually, after default occurs, a client can either cure (pay off all outstanding debt), be restructured (the loan's characteristics and terms are modified), or go through liquidation/foreclosure (the bank repossesses the collateral).

Typically, LGD is calculated per loan product and involves statistical estimation using either regression (linear, logistic, survival models, etc.) or historical averages. As per the BASEL II guidelines, it is recommended that banks and other financial institutions calculate a downturn LGD (the loss given default during a downturn in the business cycle) for regulatory purposes. This helps reflect the potential losses that may occur during an economic downturn. We will look into different definitions of LGD, loss rates (implied historical loss rates and workout loss rates) and statistical estimation later.

## PD & The Definition of Default

Lending money to both retail and non-retail (e.g. corporate) customers is a primary function of banks. To ensure responsible lending practices, banks must have effective systems in place to determine who is eligible for credit. Credit scoring is a critical risk assessment technique used to analyze and quantify the credit risk of a potential borrower. The objective of credit scoring is to measure the probability that a borrower will repay their debt (a binary outcome of default or non-default; cure models, which estimate the probability that a client will cure or pay off the loan, can also be built separately and are likewise binary). The result of this process is a score that reflects the creditworthiness of the borrower (logit models are very popular in scorecard development). To ensure consistency across banks on what is regarded as a default, regulation has set tight measures on this.

The EBA defines default as the occurrence of one or more of the following events:

- When the bank considers that the obligor is unlikely to pay its credit obligations in full, without recourse by the bank to actions such as realizing security (the "unlikely to pay" criterion).
- When the obligor is more than 90 days past due on any material credit obligation to the bank (the "90 days past due" criterion).
- When the bank has reason to believe that the obligor has entered bankruptcy or a similar financial reorganization (the "bankruptcy" criterion).

The EBA requires banks to use the above criteria to identify and report defaulted exposures in their portfolios, and to ensure that they have adequate policies, procedures, and systems in place to identify and report such exposures accurately. We obtain (through external ratings provided by rating agencies such as S&P, Fitch or Moody's for companies or governments, although some regulators prohibit their usage), estimate or interpolate (using scorecards, logit or probit models) the PD of a particular rating/grade/pool as the long-run average of the one-year default rates.
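A minimal sketch of that long-run average calibration; the yearly default rates below are invented for illustration:

```python
# Hypothetical observed one-year default rates for a single rating grade
default_rates = {
    2016: 0.012, 2017: 0.015, 2018: 0.011,
    2019: 0.014, 2020: 0.031, 2021: 0.018,
}

# The grade's PD is taken as the long-run average of the one-year default rates
pd_grade = sum(default_rates.values()) / len(default_rates)
print(f"Long-run average PD: {pd_grade:.2%}")  # 1.68%
```

Averaging over several years (including a stressed year like the invented 2020 figure) is what gives the estimate its through-the-cycle character.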

There are many interesting discussions around PDs, such as stressed and unstressed PDs; Through-the-Cycle (TTC) and Point-in-Time (PIT) PDs; and estimation (either through traditional statistics or ML). We will look at these topics in depth later when we discuss the model lifecycle.
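To illustrate the logit scorecard idea mentioned above, here is a minimal scoring sketch. The coefficients, intercept, and borrower characteristics are all invented for illustration; a real scorecard is fitted on historical default data:

```python
import math

def logit_pd(intercept: float, coefs: dict, borrower: dict) -> float:
    """Logit model: PD = 1 / (1 + exp(-(b0 + sum_i b_i * x_i)))."""
    z = intercept + sum(coefs[name] * x for name, x in borrower.items())
    return 1.0 / (1.0 + math.exp(-z))

# Invented coefficients: arrears and loan-to-value raise the PD, income lowers it
coefs = {"log_income": -0.8, "months_in_arrears": 0.6, "loan_to_value": 1.2}
borrower = {"log_income": 10.5, "months_in_arrears": 1.0, "loan_to_value": 0.9}

print(f"PD = {logit_pd(3.0, coefs, borrower):.2%}")  # PD = 2.37%
```

In a real scorecard the continuous inputs are usually first binned and transformed (e.g. weight-of-evidence coding) before the logit is fitted, which we will get to in the modelling posts.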

The Asymptotic Single Risk Factor (ASRF) framework is a credit risk model used to calculate RC requirements for credit risk under the BASEL II framework. It assumes that the portfolio's risk can be represented by a single systematic factor and uses this factor to estimate the portfolio's risk. When describing default events, it is typically assumed that a single factor, the state of the world's economy, is tied to all loan values and obligors' default probabilities in a straightforward manner (through a correlation parameter). This allows the RC charge for credit risk to be calculated using a single-factor model, which is much simpler than attempting to model each individual exposure in a portfolio. The ASRF approach is particularly useful for banks with large and diverse portfolios, as it allows them to estimate RC requirements with reasonable accuracy while minimizing the computational burden.

The ASRF formulas involve calculating the EL and UL of a portfolio of exposures, and then applying a capital charge based on the UL. The formulas take into account the correlation between obligor defaults and the systematic factor (usually proxied by GDP), as well as the variability of the factor over time. The ASRF model, a deliberate simplification of richer multi-factor portfolio models, underpins the risk-weight functions of the AIRB approach.

This approach is based on the asset value factor model and has its origin in Merton's influential structural approach (1974). Asset value models propose that the probability of a firm's default or survival is contingent on the value of its assets at a specific risk measurement horizon, typically the end of that horizon: if the value of the firm's assets falls below a critical threshold, its default point, the firm defaults; otherwise it survives. Vasicek (2002) adapted Merton's single-asset model so that it could be used to model a credit portfolio, by creating a function that transforms unconditional default probabilities into default probabilities conditional on a single systematic risk factor. The AIRB approach employs an analytical approximation of CVaR using the Vasicek distribution.

The derivation logic is as follows:

Banks assess each outstanding loan on an individual basis to determine the associated risk, including the likelihood of default by the obligor. For retail loans such as mortgages, factors like income, age, residence and the nature of the loan, as well as macroeconomic indicators like house prices and unemployment rates, all play a role in determining the risk of a loan. However, since loan defaults are correlated with each other, the BASEL capital requirements mandate that capital must be calculated for a bucket of loans with similar characteristics. The input parameters for capital requirements (PD, LGD, and EAD) must therefore represent the entire bucket; the PD for the bucket is the average of all individual PDs. The single-obligor methodology described above can be extended to determine the default fraction distribution of a credit portfolio (a bucket of loans). However, this requires an important assumption: that the credit portfolio is infinitely granular, meaning that it contains an infinite number of loans and no individual loan represents a significant slice of the total portfolio exposure. In such a portfolio there is no idiosyncratic risk, since all idiosyncratic risk is diversified away.

The final equation, p(y) = Φ((Φ⁻¹(PD) − √ρ · y) / √(1 − ρ)), can be interpreted as the default fraction in the infinitely granular portfolio, conditional on the systematic factor y. The BASEL framework (and the conversion of IFRS 9 TTC PDs to PIT PDs, which we will discuss later) builds upon these equations, where credit losses and their measurement are described using the Vasicek one-factor distribution. The model assumes the presence of a single risk factor, typically proxied by GDP, and all loans are linked to this single risk factor through a single correlation value. Longer-term loans are considered riskier than short-term loans, and the correlation value varies with the PD. The LGD, however, is assumed not to be correlated with the PD.
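A sketch of the Vasicek conditional PD in Python, using the sign convention where a negative systematic factor y represents a downturn; the TTC PD and the asset correlation are illustrative values, not regulatory ones:

```python
import math
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def vasicek_pd(pd_uncond: float, rho: float, y: float) -> float:
    """Default fraction conditional on the systematic factor y:
    p(y) = Phi((Phi^-1(PD) - sqrt(rho) * y) / sqrt(1 - rho))
    """
    num = N.inv_cdf(pd_uncond) - math.sqrt(rho) * y
    return N.cdf(num / math.sqrt(1.0 - rho))

# TTC PD of 2%, assumed asset correlation of 0.15, downturn at y = -2
print(f"Median economy: {vasicek_pd(0.02, 0.15, 0.0):.2%}")
print(f"Downturn:       {vasicek_pd(0.02, 0.15, -2.0):.2%}")
```

Feeding in a severely adverse factor (the BASEL capital formula uses the 99.9th percentile of y) is exactly how the framework turns an unconditional PD into a stressed, conditional one.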

For a defaulted (or late) exposure, i.e. where PD = 1 or 100% in the RWA formula above, the capital requirement (K) is equal to the greater of zero and the difference between the exposure's LGD and the bank's Best Estimate of Expected Loss (BEEL). The RWA formula remains the same.
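A one-line sketch of that rule, where the 45% LGD and 30% BEEL are assumed figures for illustration:

```python
def k_defaulted(lgd: float, beel: float) -> float:
    """Capital requirement for a defaulted exposure: K = max(0, LGD - BEEL)."""
    return max(0.0, lgd - beel)

# Assumed: regulatory LGD of 45%, best estimate of expected loss of 30%
print(round(k_defaulted(lgd=0.45, beel=0.30), 4))  # 0.15
```

The floor at zero means no capital is required when the provisioned best estimate already covers the full regulatory LGD.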

I have focused a lot on the theoretical aspects here, and I would like the reader to note that to build or validate credit models effectively, one cannot skip these basics, as the banking credit model frameworks are built around these definitions and formulas. This assumes, of course, that the bank follows the IRB approach; under the SA approach things are much more straightforward, in the sense that the regulators provide the formulas to use directly. In the case of AIRB, banks estimate PDs, EADs and LGDs by using various statistical (and ML) models.

This series will focus largely on different modelling and validation techniques from a quantitative coding perspective, but I will refer back to this post for the general basics. In the following posts we will look at rating philosophies, IFRS 9, use cases of AIRB models in a bank (stress testing, etc.), statistical modelling (logistic regression and ML models) and statistical validation techniques (discriminatory power, calibration, stability), where there will be a lot more coding involved.

Thanks for reading! I hope you found this article interesting and helpful (please feel free to share feedback with me). I'm always trying to improve my writing and knowledge sharing, so if you know of any tools or have suggestions, please let me know in the comments.

For now, I don't have a newsletter or active mailing list, so the best way to hear about new posts is to give me a follow on Medium directly.

If you enjoy reading articles like this and want unlimited access to my articles and all those provided by Medium, consider signing up using my referral link below.

https://medium.com/@willempretorius/subscribe

You can also buy me a coffee if you'd like :).

https://www.buymeacoffee.com/wlpretorius

*Disclaimer: The views expressed in this text are my very own and don’t represent a strict outlook or the view of any corporation.*
