AI has a discrimination problem. In banking, that can be devastating

AI algorithms are increasingly being used in financial services, but they also pose some serious discrimination risks.

AMSTERDAM – Artificial intelligence has a problem with racial bias.

From biometric identification systems that disproportionately misidentify the faces of Black people and other minorities, to voice recognition software that fails to distinguish voices with distinct regional accents, AI still has a lot to prove when it comes to discrimination.

The problem of amplifying existing biases can be even more acute when it comes to banking and financial services.

Deloitte has pointed out that AI systems are ultimately only as good as the data they are given: incomplete or unrepresentative datasets can limit AI's objectivity, while biases in the development teams that train such systems can perpetuate that cycle of bias.

AI can be stupid

Nabil Manji, head of cryptocurrency and Web3 at FIS Worldpay, said the key thing to understand about AI products is that the strength of the technology depends heavily on the source material used to train it.

“There are two variables in how good an AI product is,” Manji told CNBC. “One is the data it has access to, and the other is how good the large language model is. That’s why on the data side, you’ve seen companies like Reddit and others come out and say publicly, we’re not going to allow companies to scrape our data for free; you’ve got to pay us for that.”

As for financial services, Manji said many of the back-end data systems are fragmented across different languages and formats.

“None of that has been consolidated or brought together,” he added. “That will make AI-driven products much less effective in financial services than in other verticals, or in other companies with uniform, more modern systems or better access to data.”

Manji suggested that blockchain, or distributed ledger technology, could serve as a way to get a clearer view of the disparate data tucked away in the cluttered systems of traditional banks.

However, banks, being heavily regulated and slow-moving institutions, are unlikely to adopt new AI tools at the same pace as their more nimble tech counterparts, he added.

“You’ve got Microsoft and Google, who over the last year or two have been seen as driving innovation, and they can’t keep up with that speed. Then you think about financial services. Banks are not known for being fast,” Manji said.

AI problems in banking

Rumman Chowdhury, former head of machine learning ethics, transparency and accountability at Twitter, said that lending is a prime example of how an AI system’s bias against marginalized communities can rear its head.

“Algorithmic discrimination is actually very visible in the lending space,” Chowdhury said at the Money20/20 conference in Amsterdam. “Chicago had a history of denying those (loans) to predominantly Black communities.”

In the 1930s, Chicago became known for the discriminatory practice of “redlining,” in which the creditworthiness of a property was largely determined by the racial demographics of a particular neighborhood.

“There would be a giant map on the wall of all of Chicago, and they would draw red lines around all of the predominantly African-American districts and not give them loans,” she added.

“Fast forward a few decades and you’re developing algorithms to determine risk across regions and individuals. While you might not include a data point on someone’s race, it’s implicitly captured.”
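Chowdhury’s point that race can be “implicitly captured” is what researchers call proxy discrimination. Below is a minimal, purely illustrative sketch in Python on synthetic data (the feature names are assumptions for the example, not any real lender’s schema): a model trained without a race variable still produces a racial gap in approvals, because a correlated input such as neighborhood carries the signal.

```python
# Illustrative sketch of proxy discrimination (synthetic data only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute; it is never shown to the model.
race = rng.integers(0, 2, n)

# A proxy: with segregated neighborhoods, zip code tracks race closely.
zip_group = np.where(rng.random(n) < 0.9, race, 1 - race)

# Historical approvals were biased by neighborhood (i.e., by race via zip).
income = rng.normal(50, 10, n)
approved = (income + 15 * (zip_group == 0) + rng.normal(0, 5, n)) > 55

# Train on income and zip only; race has been "excluded".
X = np.column_stack([income, zip_group])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# Approval rates still diverge by race, because zip carried the signal.
preds = model.predict(X)
print("approval rate, group 0:", preds[race == 0].mean())
print("approval rate, group 1:", preds[race == 1].mean())
```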

Indeed, Angle Bush, founder of Black Women in AI, an organization that aims to empower Black women in artificial intelligence, told CNBC that when AI systems are used exclusively for loan approval decisions, she sees a risk of replicating the biases present in the historical data used to train the algorithms.

“This could lead to the automatic denial of loans for individuals in marginalized communities, exacerbating racial or gender disparities,” Bush added.

“Banks must acknowledge that adopting AI as a solution may inadvertently perpetuate discrimination,” she said.

Frost Li, a developer who has worked on artificial intelligence and machine learning for more than a decade, told CNBC that the “personalization” dimension of AI integration can also be problematic.

“What’s interesting about AI is how we choose the ‘core features’ to train on,” said Li, who founded and runs Loup, a company that helps online retailers integrate AI into their platforms. “Sometimes, we select features that have nothing to do with the outcome we want to predict.”

When AI is applied to banking, Li said, it can be difficult to identify the “culprit” behind a biased outcome, because the underlying calculations are so complex.

“A good example is how many fintech start-ups specifically target foreigners, because a University of Tokyo graduate can’t get a credit card even if he works at Google, yet a person can easily get one from a community college credit union, because the bankers know the local schools better,” Li added.
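Li’s “culprit” problem, working out which input actually drives a model’s decisions, is often probed with feature-importance tools. The following is a minimal sketch on synthetic data using scikit-learn’s permutation importance, an assumed technique for illustration (not one Li names), to surface a dubious “core feature”:

```python
# Sketch of feature-importance auditing; feature names are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 5_000
features = {
    "income": rng.normal(50, 10, n),
    "years_at_job": rng.integers(0, 20, n),
    "alma_mater_rank": rng.integers(1, 100, n),  # a dubious "core feature"
}
X = np.column_stack(list(features.values()))

# Historical labels that quietly depended on the dubious feature.
y = (features["income"] - 0.2 * features["alma_mater_rank"]
     + rng.normal(0, 5, n)) > 40

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# A large drop in score when a column is shuffled flags the model's
# reliance on it; "alma_mater_rank" shows up despite being irrelevant
# to creditworthiness in principle.
for name, score in zip(features, result.importances_mean):
    print(f"{name:>16}: {score:.3f}")
```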

Generative AI, however, is not typically used to create credit scores or to risk-score consumers.

“That’s not what the tool was designed for,” said Niklas Guske, chief operating officer of Taktile, a startup that helps fintech companies automate decision-making.

Instead, Guske said, its most powerful application lies in preprocessing unstructured data, such as text files, for instance to classify transactions.

“These signals can then be fed into more traditional underwriting models,” Guske said. “Thus, generative AI will improve the quality of the data underlying such decisions, rather than replace common scoring processes.”
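Here is a hedged sketch of the two-stage flow Guske describes, with hypothetical function names (not Taktile’s API) and a simple keyword rule standing in for the generative model that would classify transaction text in practice:

```python
# Sketch: unstructured transaction text -> categories -> underwriting signals.

def classify_transaction(description: str) -> str:
    """Stand-in for an LLM call; a real system would prompt a generative
    model to map free text onto a fixed category taxonomy."""
    rules = {"casino": "gambling", "payroll": "income", "rent": "housing"}
    for keyword, category in rules.items():
        if keyword in description.lower():
            return category
    return "other"

def underwriting_signals(transactions: list[str]) -> dict[str, float]:
    """Aggregate classified transactions into features a conventional
    scorecard or logistic model could consume."""
    categories = [classify_transaction(t) for t in transactions]
    total = len(categories) or 1
    return {
        "share_income": categories.count("income") / total,
        "share_gambling": categories.count("gambling") / total,
    }

txns = ["ACME PAYROLL JUN", "LUCKY CASINO 0042", "RENT - UNIT 4B"]
print(underwriting_signals(txns))
# -> {'share_income': 0.333..., 'share_gambling': 0.333...}
```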

Regulating Bias in AI

Chowdhury said a global regulatory body, like the United Nations, is needed to address some of the risks surrounding AI.

Although artificial intelligence has proven to be an innovative tool, some technologists and ethicists have expressed doubts about the technology’s moral and ethical soundness. Among industry insiders’ top worries are misinformation; racial and gender bias embedded in AI algorithms; and “hallucinations” generated by ChatGPT-like tools.

“I’m quite concerned that, due to generative AI, we are entering a post-truth world where nothing we see online can be trusted: not any of the text, not any of the video, not any of the audio. But then how do we get our information? And how do we ensure that information has a high amount of integrity?” Chowdhury said.

The time has come for meaningful regulation of AI, but knowing how long regulatory proposals such as the European Union’s AI Act will take to come into effect, some fear it won’t happen fast enough.

“We call for transparency and accountability of algorithms and how they operate, a layman’s declaration that allows individuals who are not AI experts to judge for themselves, proof of testing and publication of results, an independent complaints process, periodic audits and reporting, and the involvement of racialized communities when technology is being designed and considered for deployment,” said Kim Smouter, director of the European Network Against Racism.

The AI Act is the first regulatory framework of its kind, incorporating fundamental rights and concepts such as redress, Smouter said, adding that the regulation is expected to come into effect in around two years.

“It would be great if this period could be shortened to make sure transparency and accountability are at the heart of innovation,” he said.
