The Consumer Federation of America (CFA) and Consumer Reports (CR) last week asked Consumer Financial Protection Bureau (CFPB) Director Rohit Chopra for clear, definitive regulatory guidance on financial institutions' obligation to actively search for and deploy less discriminatory algorithms (LDAs) in credit underwriting and pricing.
“In recent years, the adoption of algorithmic decision-making tools by financial institutions, particularly more complex machine learning (ML) models, has surged. While these advancements have the potential to enhance efficiency and advance financial inclusion, there is growing evidence that they can also perpetuate and exacerbate existing and historical biases, leading to discriminatory outcomes that adversely affect marginalized and underserved communities,” the consumer groups said in a letter.
"When financial institutions neglect to seek out and apply LDAs, they erode consumer trust and violate fundamental principles of fairness and equity that underpin our financial system and wider society," the letter further mentioned.
In certain cases, these failures may even constitute legal violations, pointed out Jennifer Chien, senior policy counsel for financial fairness at Consumer Reports, and Adam Rust, director of financial services at the Consumer Federation of America, the letter's authors.
They urged the CFPB to issue clear guidance on how lenders should seek out and adopt less discriminatory alternatives in algorithmic credit underwriting and pricing, emphasizing that such guidance is essential to complement the agency's supervisory and enforcement efforts.
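To make the idea concrete, an LDA search is commonly described as comparing candidate models that all satisfy a lender's legitimate business need and preferring the one with the smallest disparity in outcomes. The sketch below is only a minimal illustration of that pattern, not anything prescribed by the CFPB or by the groups' letter; the synthetic data, the accuracy floor, and the approval-rate-gap metric are all hypothetical assumptions chosen for brevity.

```python
# Minimal, hypothetical sketch of an LDA search: train several candidate
# underwriting models, keep those meeting an accuracy floor (a stand-in for
# "legitimate business need"), then choose the least discriminatory one.
# None of these thresholds or metrics come from the CFPB or the letter.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for loan application data; `group` marks a protected class.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
group = np.random.RandomState(0).randint(0, 2, size=len(y))
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

def approval_disparity(model, X, g):
    """Absolute gap in predicted approval rates between the two groups."""
    approved = model.predict(X)
    return abs(approved[g == 0].mean() - approved[g == 1].mean())

candidates = [
    LogisticRegression(max_iter=1000),
    GradientBoostingClassifier(random_state=0),
]
ACCURACY_FLOOR = 0.80  # hypothetical business-need threshold

viable = []
for model in candidates:
    model.fit(X_tr, y_tr)
    acc = model.score(X_te, y_te)
    if acc >= ACCURACY_FLOOR:
        viable.append((approval_disparity(model, X_te, g_te), acc, model))

# Among models that meet the business need, pick the least discriminatory.
# (Assumes at least one candidate clears the floor, which this toy data does.)
disparity, acc, chosen = min(viable, key=lambda t: t[0])
print(f"chosen={type(chosen).__name__} accuracy={acc:.3f} disparity={disparity:.3f}")
```

In practice the candidate set, the business-need standard, and the fairness metric would all be contested questions; the guidance the groups requested would presumably speak to exactly those choices.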
Chopra has previously spoken publicly about his skepticism toward AI. At an Axios event in Washington, D.C., last year, he voiced concerns about the increasing prevalence of generative AI technology.
He warned that AI could concentrate significant power in the hands of a few companies and their executives, and cautioned that the technology's ability to mimic human interaction could be exploited for fraud and other abuses.