Standard Chartered has been developing new underwriting models using machine learning that could help it determine creditworthiness of borrowers with minimal credit histories.
The ultimate goal: to increase approval rates and eliminate bias by gender, race, age and other characteristics.
The international bank has been incorporating artificial intelligence and machine learning into its business since 2016 to better engage with customers and recommend suitable credit cards and loans. In 2019, it identified another need: making more inclusive and unbiased credit decisions, especially for its burgeoning population of young customers leaving university and entering the workforce, and ensuring its machine-learning models were transparent and explainable.
Standard Chartered, which is based in London, worked with its SC Ventures unit, which scouts emerging technology and startups, to find companies that could help. In August it announced its partnership with Truera, a model intelligence company in Redwood City, Calif., whose platform analyzes and improves machine-learning models.
Previously, Standard Chartered based all its lending decisions on credit bureau data and traditional modeling techniques such as logistic regression, a statistical method that estimates the probability of an outcome, such as a borrower defaulting.
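To make the traditional approach concrete, here is a minimal sketch of a logistic-regression default model of the kind the article describes. It uses synthetic data and hypothetical feature names (income, utilization, months of credit history) and is illustrative only, not Standard Chartered's actual scorecard:

```python
# Illustrative sketch only: a toy logistic-regression default model,
# not the bank's actual scorecard. All features and data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical bureau-style features: income, utilization, credit history.
X = np.column_stack([
    rng.normal(50_000, 15_000, n),   # annual income
    rng.uniform(0, 1, n),            # credit utilization
    rng.integers(0, 120, n),         # months of credit history
])
# Synthetic default labels loosely tied to utilization and history length.
p_default = 1 / (1 + np.exp(-(2.5 * X[:, 1] - 0.02 * X[:, 2] - 0.5)))
y = rng.binomial(1, p_default)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)

# The model outputs a probability of default; a cut-off turns it into a decision.
pd_scores = model.predict_proba(X_test)[:, 1]
approved = pd_scores < 0.20  # hypothetical approval threshold
print(f"approval rate: {approved.mean():.1%}")
```

The weakness for thin-file applicants is visible in the setup: when bureau features like history length are sparse or missing, a model trained this way has little to go on and tends toward rejection.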
The bank has been testing its new models in one country over the past 12 months. Sam Kumar, group head of analytics and data management at Standard Chartered, declined to share specific numbers but said default levels have been similar to what they were in the past and that the bank has begun approving some clients who historically would have been rejected.
“Broadly, we’ve seen that risk degradation is not substantially different to what our traditional approach has been,” Kumar said. “It shows the opportunity for a wider set of inclusions moving forward.”
The challenges Standard Chartered faces in its efforts to make more inclusive credit decisions are shared by financial institutions in the U.S.
According to the Federal Reserve’s Report on the Economic Well-Being of U.S. Households in 2019, 41% of adults applied for some type of credit. Of those, 24% were denied at least once in the year before the survey and 31% were either denied or offered less credit than they requested. Lower-income individuals were substantially more likely to experience adverse outcomes with their applications than those with higher incomes, as were Black and Hispanic individuals at all income levels.
In terms of young customers specifically, a Bankrate poll from November showed that 32% of millennials (ages 24 to 39) were rejected for credit in the U.S. this year, compared with 22% of Generation X consumers (ages 40 to 55) and 11% of baby boomers (ages 56 to 74).
Credit bureau data is much thinner for these customers compared to older, more established customers, and basing decisions on the same formulas would result in more rejections.
Kumar says that the bank does not take gender, marital status, religion or race into account when evaluating customers for credit, but other pieces of data, such as buying patterns, can still correlate with a specific life stage or gender. He says Standard Chartered has always tried to identify these proxy effects and trace them back to variables like gender and race.
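One common way to test for the proxy effects Kumar describes is to check whether an excluded protected attribute can be predicted from the features the model does use. The sketch below is a hedged illustration of that idea with synthetic data, not the bank's actual method; the feature names are hypothetical:

```python
# Hedged sketch of a standard proxy check, not the bank's actual method:
# even when a protected attribute is excluded from the model, test whether
# the remaining inputs can predict it. High accuracy signals proxy risk.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 4_000
gender = rng.integers(0, 2, n)                        # excluded from the credit model
spend_apparel = rng.normal(200 + 80 * gender, 50, n)  # hypothetical buying-pattern feature
months_employed = rng.integers(0, 240, n).astype(float)

X = np.column_stack([spend_apparel, months_employed])
probe = LogisticRegression(max_iter=1000)
acc = cross_val_score(probe, X, gender, cv=5).mean()
print(f"protected attribute recoverable with accuracy: {acc:.1%}")
# Accuracy well above 50% means the features leak gender and could
# reintroduce the very bias the exclusion was meant to prevent.
```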
Truera, which came out of stealth last August, says its software removes the “black box” surrounding machine learning, allowing data scientists, risk teams and other stakeholders to understand how models work and how inputs affect outputs.
Truera “allows us to dive deeper,” Kumar said. “It gives us transparency into the fact that we can explain which data items have a material contribution to a decision and ensure we’re not reintroducing an indirect or derived bias even though we excluded the data items typically considered for bias.”
Besides Standard Chartered, the company works with banks, insurance companies and fintechs in Asia-Pacific countries, the United Kingdom and the U.S. Its AI Explainability technology, which makes clear how model inputs affect model outputs, is based on research from Carnegie Mellon University.
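The article does not detail Truera's attribution method, but the spirit of explaining “which data items have a material contribution to a decision” can be shown with a much simpler scheme: for a linear model, each feature's coefficient times its deviation from the population average gives a reason-code-style breakdown per applicant. This is a toy stand-in, with hypothetical feature names, not Truera's technique:

```python
# A minimal per-applicant attribution sketch (toy stand-in, not Truera's
# method): for a linear model, coefficient * (value - population mean)
# gives a simple reason-code-style contribution for each feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 3_000
X = rng.normal(size=(n, 3))
y = rng.binomial(1, 1 / (1 + np.exp(-(1.5 * X[:, 0] - 1.0 * X[:, 1]))))

model = LogisticRegression().fit(X, y)
feature_names = ["utilization", "missed_payments", "tenure"]  # hypothetical

applicant = X[0]
contributions = model.coef_[0] * (applicant - X.mean(axis=0))
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>16}: {c:+.3f} (log-odds contribution)")
```

Ranking contributions this way surfaces which inputs pushed a given decision hardest, which is the precondition for checking whether any of them is acting as a proxy for an excluded attribute.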
For Standard Chartered, its work with Truera is one piece in a multipart strategy to lend to customers with minimal credit histories. (In parallel, the bank is also turning to nontraditional data when scoring customers.) In several countries where it does business, including India, Indonesia, Malaysia and Vietnam, there are large swaths of customers in their early 20s who are leaving university, entering the workforce and applying for their first credit cards or personal loans.
“It’s important for us to ensure we have the best way for financial inclusion while making the right decisions in terms of how we take on risk for the bank,” Kumar said. “How do we become better at predicting risk and improving financial inclusion for clients that would not be approved the old way?”
Standard Chartered considered three vendors and conducted proofs of concept over six to eight months. Kumar says the bank chose Truera because the two companies’ values on ensuring fair decisions aligned and because Truera understood regulations governing transparency in AI. The bank was also impressed by the academic research underlying Truera’s technology.
“We can compare how a model treats different people, calculate whether it’s treating people differently and explain why it might be doing that,” said Will Uppington, a co-founder of Truera and its CEO. “The model may accept women at a lower rate than men. If it does, you need to understand why. Then you can say, is that justified or not justified?”
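The comparison Uppington describes starts with a simple measurement: approval rates by group, and the ratio between them. The sketch below illustrates that first step on synthetic data; the numbers and the 0.8 rule-of-thumb threshold are illustrative, not Truera's implementation:

```python
# Hedged sketch of the group comparison Uppington describes: measure
# approval rates by group and flag a gap for investigation. Data and
# thresholds are illustrative only.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
group = rng.choice(["women", "men"], n)
approved = rng.binomial(1, np.where(group == "women", 0.58, 0.66))  # synthetic outcomes

rates = {g: approved[group == g].mean() for g in ("women", "men")}
ratio = rates["women"] / rates["men"]
print(rates, f"disparate-impact ratio: {ratio:.2f}")
# A ratio well below 1.0 (e.g., under the common four-fifths rule of
# thumb) prompts the follow-up question from the quote: which inputs
# drive the gap, and is the difference justified by legitimate risk?
```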
Correcting bias in credit decisions is not a one-time process, said Rayid Ghani, professor of machine learning at Carnegie Mellon University’s Heinz College.
“If you’ve never given credit to certain types of people before and build a machine learning system, it won’t tell you these people will pay you back,” he said. “Or if the machine-learning system you build is designed to get predictions for as many people as possible and your customer base is 90% male or 80% white people, then the system is being evaluated against how well it does with historical data.”
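Ghani's point can be made concrete with a small experiment: train a model on a customer base that is 90% one group, then score each group separately. The example below is a synthetic illustration of that failure mode, not an analysis of any real lender's data:

```python
# Illustrative sketch of Ghani's point: a model trained on a skewed
# customer base can look accurate overall while being effectively
# untested on the underrepresented group. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 5_000
majority = rng.random(n) < 0.9          # 90% of historical customers
X = rng.normal(size=(n, 2))
# The signal differs by group, but the minority barely influences training.
logits = np.where(majority, 2.0 * X[:, 0], -2.0 * X[:, 0])
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

model = LogisticRegression().fit(X, y)
for name, mask in [("majority", majority), ("minority", ~majority)]:
    acc = (model.predict(X[mask]) == y[mask]).mean()
    print(f"{name}: accuracy {acc:.1%}")
# Overall accuracy is dominated by the majority group; the minority
# score shows how little the historical data says about them.
```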
Source: American Banker