The policy statement from the CFPB, FTC, Department of Justice and Equal Employment Opportunity Commission highlights enforcement efforts to protect consumers from bias in automated systems and AI.
04/27/2023 2:25 P.M.
Several federal agencies have released a policy statement on the use of artificial intelligence products under existing laws, particularly to ensure that consumers aren’t discriminated against by the algorithms companies use for loans or other financial products.
“Today, several federal agencies are coming together to make one clear point: there is no exemption in our nation’s civil rights laws for new technologies that engage in unlawful discrimination,” said CFPB Director Rohit Chopra in prepared remarks. “Companies must take responsibility for their use of these tools. The Interagency Statement we are releasing today seeks to take an important step forward to affirm existing law and rein in unlawful discriminatory practices perpetrated by those who deploy these technologies.”
In addition to the CFPB, the Federal Trade Commission, Department of Justice Civil Rights Division and Equal Employment Opportunity Commission participated in the interagency statement (PDF).
“The statement highlights the all-of-government approach to enforce existing laws and work collaboratively on ‘AI’ risks,” Chopra said.
According to a news release from the CFPB, the agencies “have previously expressed concerns about potentially harmful uses of automated systems and resolved to vigorously enforce their collective authorities and to monitor the development and use of automated systems.”
For example, Chopra cited research on the risks of companies' use of AI, including a study of automated mortgage application algorithms that "found that Black families were 80 percent more likely to be denied by an algorithm when compared to white families with similar financial and credit backgrounds. The response of mortgage companies has been that researchers do not have all the data that feeds into their algorithms or full knowledge of the algorithms."
In response, the agencies said they are focused on regulating digital redlining, “which is redlining caused through bias present in lending or home valuation algorithms and other technology marketed as artificial intelligence.”
Other protections the agencies are focused on include reducing bias in home valuations, including appraisals that use algorithms. The agencies have also issued guidance on advertising practices involving AI and on the use of complex algorithms in credit models.
In his remarks on the policy statement, Chopra said, “Generative AI, which can produce voices, images, and videos that are designed to simulate real-life human interactions are raising the question of whether we are ready to deal with the wide range of potential harms—from consumer fraud to privacy to fair competition.”
After reading the statement, Heath Morgan, partner at Martin Lyons Watts Morgan PLLC, noted the agencies’ focus on enforcement and consumer protections may be misdirected.
“Director Chopra first focuses on potential harms of ‘Generative AI’ in his statement, but then the rest of the statement is only focused on fair competition which, at best, should be the third concern of regulators,” Morgan said.
In fact, Morgan added, “the number one concern for consumers is and will be Generative AI that can be used to produce voices, images, and videos that are designed to simulate real-life human interactions and that could impersonate them without their consent to commit fraud, identity theft, and reputation theft.”
Regulators should focus more on protecting consumers’ personal information when it comes to AI, according to Morgan.
“Right now, our current laws do not provide sufficient consumer protection to address these concerns especially with the creation of AI algorithms,” Morgan said. “If federal regulators are going to step in and do anything to protect consumers, it should absolutely be to protect consumer personal information including their name and likeness being used to create deepfake identities and avatars leading to consumer financial and personal harm.”
ACA member firm Barron & Newburger also provided a response to the agencies’ policy statement: “Our takeaway is that these regulators will not accept the excuses for anti-discrimination noncompliance such as ‘the machine learning tools just did it’ or ‘it was an automated process’ or ‘the AI ate my homework.’ The use of technology is clearly encouraged, but it will not justify a lack of compliance and management oversight.”
The CFPB said it “will continue to monitor the development and use of automated systems, including AI-marketed technology, and work closely with the Civil Rights Division of the DOJ, FTC, and EEOC to enforce federal consumer financial protection laws and to protect the rights of American consumers, regardless of whether legal violations occur through traditional means or advanced technologies.”
This spring, the CFPB will issue a white paper providing an overview of the chatbot market, the limitations of the technology, its use by financial institutions and its impact on consumers' ability to interact with those institutions.
“Technology marketed as AI has spread to every corner of the economy, and regulators need to stay ahead of its growth to prevent discriminatory outcomes that threaten families’ financial stability,” Chopra said. “Today’s joint statement makes it clear that the CFPB will work with its partner enforcement agencies to root out discrimination caused by any tool or system that enables unlawful decision making.”
Related Content from ACA International:
Recording Available – ACA Huddle: Dark Patterns and Commercial Surveillance—What You Need to Know
CFPB Circular Outlines Enforcement of “Dark Patterns”
Dark Patterns: What You Need to Know