Rivas Estrada, José [UCL]
Vrins, Frédéric [UCL]
Financial institutions have increasingly turned their attention to developing coherent and relatively robust measures for assessing the exposures arising from their lending operations. Quantitative frameworks for credit exposure assessment have been progressively proposed, both at an internal level and from a regulatory angle. The former are typically known as internal ratings-based (IRB) techniques, which are at the center of prudential regimes such as Basel III, whose aim is to establish a harmonized risk-management framework across the banking industry. Credit risk assessment ties together three variables, each factoring an exposure level in a self-contained manner (Crouhy, Galai, & Mark, 2000). First, the probability of default (PD) measures how likely the counterparty is to default once the loan contract has been issued. The higher the probability of default of a loan, the greater its potential to undermine the lender's solvency in case of delinquency. Next, the exposure at default (EAD) quantifies the actual amount of money at risk at a given point in time. All cumulative repayments and collateral are taken into account, as their sum decreases the lender's final net position; EAD is therefore expected to decrease as the contract evolves. Likewise, a high collateral translates into a lower EAD by serving as a guarantee should liquidation become necessary to absorb financial distress on the borrower's side. Finally, the loss given default (LGD) measures the expected financial loss borne by the lender once the debtor fails to repay its debt. LGD provides an estimate of the amount of money the lender expects to lose once default has been observed. By definition, the lender's potential loss is bounded by the total amount of the loan; hence LGD, expressed as a proportion of the credit, lies in the unit interval [0, 1].
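The three components above combine multiplicatively into the standard expected-loss formula, EL = PD × EAD × LGD. A minimal sketch of this decomposition, with entirely hypothetical figures not taken from the thesis:

```python
def expected_loss(pd: float, ead: float, lgd: float) -> float:
    """Expected loss on a single exposure as the product of the
    three credit-risk components: probability of default (PD),
    exposure at default (EAD), and loss given default (LGD)."""
    return pd * ead * lgd

# Hypothetical loan: 2% default probability, 100,000 outstanding,
# 45% expected loss severity on default.
el = expected_loss(pd=0.02, ead=100_000, lgd=0.45)
print(el)  # 900.0
```

Note how LGD enters as a rate in [0, 1] scaling the monetary exposure, which is why bounding and predicting this rate accurately matters for the final risk figure.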
Yet in practice it is common to observe negative cash flows arising from collection costs such as legal recovery expenses. The opposite can also occur: income from collateral liquidation may exceed the outstanding loan, producing a negative loss rate for the issuing bank. Such cash flows are factored in and carry significant weight when estimating the ultimate recovery from the debtor. The present work explores LGD estimation techniques through statistical learning methods; the goal is to produce LGD predictions using a regression approach. While the PD research literature is relatively abundant, LGD studies remain relatively scarce. The work builds on what is currently the standard approach to LGD, linear regression, and progressively develops a more flexible regression framework. Banks typically gather an enormous volume of data on their credit operations. Under these circumstances, and with limited computational capacity, dimensionality quickly becomes an issue when hundreds of variables are explicitly mapped to the target variable, LGD. The present work therefore proposes the use of kernel methods, which elegantly avoid the explicit feature-space mapping that would otherwise be computationally infeasible. The focus is thus on providing a tool for enhanced predictive performance.

Bibliographic reference
Rivas Estrada, José. *Loss given default prediction through Kernel methods.* Faculté des sciences, Université catholique de Louvain, 2023. Prom.: Vrins, Frédéric.

Permanent URL
http://hdl.handle.net/2078.1/thesis:38531