

What do we do.

As it turns out, this blog is read by about 2,000 people - the internet is an interesting place - and the most-read posts are the ones about my private life and the technical ones. So I'll focus more on those.

This time I want to dive a bit deeper into what we are doing and what has drawn interest from a few major financial institutions. Next time will be more about the trip, Christmas, and a few important dates in December.

So this time a bit more technical.

The credit system is broken: it is old, it still serves its purpose, but it needs additional datasets.

VioScore™  for Loans

 

Loan distribution is a risky business, but it is also one of the main revenue streams for the majority of banks.

 

Banks would rather not extend credit to clients who are unable to repay the borrowed funds. Nevertheless, even when banks tighten their lending policies - especially under newer regulatory constraints such as the GDPR - a certain proportion of credits will eventually turn into bad loans.

 

The share of bad loans, and the quality of the credit-approval process, can be efficiently assessed by analyzing data on non-performing loans (NPLs).

 

The loan-granting procedure must be closely monitored, and banks must develop a strong credit risk management strategy.

 

In the majority of banking organizations, the department that evaluates loan applications uses a centralized process.

 

Using the bank's credit scoring model, the lending department divides financing applications into risky and non-risky customers. The objective of this credit scoring procedure is to reduce the risk of loan losses and the default rate, because misclassification errors are expensive. It determines who should be awarded credit and how much credit should be granted.

The risk generated therefore increases in proportion to the size of the misclassification error. Despite the extensive procedure, and even if the bank's lending policy tightens, after some time a certain percentage of loans will default.

A percentage of the credits will eventually turn into unsuccessful loans. This raises the question of whether the credit scoring model was efficiently built, especially regarding the choice of relevant factors/variables for the model and the weights allocated to those variables.

The bank management must thus reassess the current credit rating algorithm that it uses to screen loan applications.

Is it necessary to keep using the current model?

A revision of the model's criteria may be necessary. Does the model's weighting of the various criteria make sense? Are there other, more efficient and straightforward methods that could be used? Are the models currently in use adapted well enough to life in the 21st century?

At VioScore™ we believe that the base of an all-round risk profile should always be credit, but we feel there is a need for a more rounded view of the consumer.

There are several ways to assess credit:

Discriminant Analysis

Discriminant analysis is a variation of regression analysis used for classification. The label is based on categorical data. The simplest variation is a label with two categories (for example, “default” versus “nondefault”). The original dichotomous linear discriminant analysis was developed by Sir Ronald Fisher in 1936 (Fisher 1936).

In default prediction, linear discriminant analysis was the first statistical method applied to systematically explain which firms entered bankruptcy, based on accounting ratios and other financial variables. Edward Altman’s 1968 model is still a leading model in practical applications (Altman 1968).

The original Altman Z-score model, developed using data of publicly held manufacturers, was as follows:

Z = 1.2X1 + 1.4X2 + 3.3X3 + 0.6X4 + 1.0X5

where

X1 = working capital / total assets

X2 = retained earnings / total assets

X3 = earnings before interest and taxes / total assets

X4 = market value of equity / book value of total liabilities

X5 = sales / total assets
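The Z-score above is just a weighted sum of five ratios, so it is straightforward to compute. Below is a minimal sketch; the sample figures for the manufacturer are hypothetical, purely for illustration.

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_equity, liabilities, sales, total_assets):
    """Original Altman (1968) Z-score for publicly held manufacturers."""
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = market_equity / liabilities   # market equity / book value of liabilities
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

# Hypothetical manufacturer (figures in millions)
z = altman_z(working_capital=45, retained_earnings=130, ebit=62,
             market_equity=420, liabilities=280, sales=760,
             total_assets=520)
print(round(z, 2))  # → 3.21
```

In Altman's original calibration, scores above roughly 2.99 fall in the "safe" zone and scores below roughly 1.81 in the "distress" zone, with a grey area in between.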

Probit Analysis and Logistic Regression

For the dichotomous label in credit scoring, there have been several efforts to adapt linear regression methods to domains where the output is a probability instead of an unbounded real number. Many of these efforts focused on mapping the binary range to an infinite scale and then applying linear regression to the transformed values.

The logit model is a popular model for estimating the probability of default, because it is easy to develop, validate, calibrate, and interpret. Rather than choosing parameters that minimize the sum of squared errors (as in ordinary regression), estimation in logistic regression chooses parameters that maximize the likelihood of observing the sample values (email us for a spec).
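To make the maximum-likelihood point concrete, here is a toy sketch of logit PD estimation on synthetic data. The single feature (a debt-to-income ratio) and the "true" coefficients are invented for illustration; the fit maximizes the log-likelihood with plain gradient ascent rather than minimizing squared errors.

```python
import numpy as np

rng = np.random.default_rng(0)
dti = rng.uniform(0.0, 1.0, 500)          # synthetic debt-to-income ratios
true_logit = -4.0 + 6.0 * dti             # hypothetical "true" relationship
default = rng.uniform(size=500) < 1 / (1 + np.exp(-true_logit))

X = np.column_stack([np.ones_like(dti), dti])   # intercept + feature
y = default.astype(float)

beta = np.zeros(2)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ beta))       # current predicted PDs
    beta += 0.05 * X.T @ (y - p) / len(y) # gradient ascent on log-likelihood

pd_at_half = 1 / (1 + np.exp(-(beta[0] + beta[1] * 0.5)))
print(beta, pd_at_half)   # estimated PD for a borrower with DTI = 0.5
```

The gradient `X.T @ (y - p)` is exactly the derivative of the log-likelihood, which is why the update pushes the predicted probabilities toward the observed defaults.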

Judgment-Based Models

Multiple methods may be employed to derive expert judgment-based models. One such method is the analytic hierarchy process (AHP), a structured process for organizing and analyzing complex decisions. The AHP model is based on the principle that when a decision is required on a given matter, the relevant information and factors can be represented as an information hierarchy. The decision makers decompose their decision problem into a hierarchical structure of more easily comprehended subproblems, each of which can then be independently analyzed. The key element of the AHP is that human judgments, not only the underlying information, are used to perform the evaluations. Human judgment is particularly critical in evaluating exceptions and instances that have no precedent or are significantly underrepresented in the data.

For example, Bana e Costa, Barroso, and Soares (2002) developed a qualitative credit scoring model for business loans based on concepts of the AHP.
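A minimal sketch of the AHP mechanics: an expert compares criteria pairwise on Saaty's 1-9 scale, and priority weights are read off the principal eigenvector of the comparison matrix. The three criteria and the judgments below are hypothetical examples, not VioScore™'s actual model.

```python
import numpy as np

# Pairwise judgments on Saaty's 1-9 scale for three hypothetical criteria:
# payment history, income stability, collateral. A[i, j] = how much more
# important criterion i is than criterion j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                             # normalized priority weights

# Consistency index: near 0 means the pairwise judgments are coherent.
ci = (eigvals[k].real - len(A)) / (len(A) - 1)
print(w, ci)
```

Here payment history dominates (weight around 0.65), and the small consistency index confirms the judgments are nearly transitive; in practice a CI well above 0.1 (relative to the random index) would send the expert back to revise the comparisons.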

What is the difference between a traditional credit score and the VioScore™ credit score+?

If you are interested drop us a line at info@mylifekit.io and we are happy to help.

Be safe.
