Using AI to make important decisions about individuals carries a risk of bias, especially for underwriting, credit, employment, and educational admission decisions. In this Debevoise Data Blog post, we discuss how a recent settlement by the Massachusetts Attorney General’s Office highlights the risks that can arise in AI-powered lending decisions and ways to reduce those risks.
AI Bias and Group Proxy Discrimination
There are several ways that the use of AI (as well as non-AI algorithms) can result in unlawful bias or discrimination. One common form of bias in AI-driven decisions is “proxy discrimination,” where variables that influence a decision are not themselves protected characteristics (e.g., race, gender, or ethnicity) but are highly correlated with them. For example, suppose, purely for purposes of illustration (this is not true), that Asian men smoke cigarettes and drink alcohol at rates significantly above the average for life insurance applicants, and that a life insurer therefore wants to screen Asian men out of an accelerated underwriting program that does not involve a medical exam.
It would clearly be discriminatory to deny coverage to Asian men outright. Suppose instead that the insurer denies coverage to applicants who log into its website using a gaming computer optimized for translating between Asian languages and English, on the assumption that those applicants are much more likely to be Asian males. Use of that kind of computer is not independently predictive of risk for those particular applicants because it has no causal relationship to the risk being underwritten. Rather, the variable’s predictive power comes from its strong correlation with a protected class that, as a group, has an elevated risk relative to the average applicant. It serves as a “proxy” for being an Asian male, and its use therefore likely amounts to proxy discrimination.
For an Asian man who does not smoke or drink, the use of this variable would be discriminatory: he would be denied coverage because of his association with a demographic group that has a higher-than-average risk profile (i.e., smoking and drinking), even though he himself does not share those risk factors.
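To make the mechanics concrete, here is a minimal, synthetic sketch (in Python, using NumPy and pandas) of how a proxy variable can appear predictive. The “translation_device” flag, the group, and every rate below are invented to track the hypothetical above and do not come from any real underwriting model; the point is that the flag predicts adverse events only because it tracks group membership, and its apparent predictive power disappears once the true risk factor is held fixed.

```python
# A minimal, synthetic sketch of proxy discrimination. All rates and the
# "translation_device" flag are invented for illustration; nothing here comes
# from the Settlement or any real underwriting model.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 100_000

# Protected class membership (hypothetical 10% of applicants).
asian_male = rng.random(n) < 0.10

# The true risk driver: smoking/drinking, assumed (counterfactually, per the
# hypothetical above) to be more common in the protected group.
risky_habits = rng.random(n) < np.where(asian_male, 0.40, 0.15)

# The proxy: a device flag that tracks group membership, not the habit itself.
translation_device = rng.random(n) < np.where(asian_male, 0.60, 0.02)

# Adverse outcomes depend only on the habit, never on the device.
adverse_event = rng.random(n) < np.where(risky_habits, 0.20, 0.05)

df = pd.DataFrame({
    "asian_male": asian_male,
    "risky_habits": risky_habits,
    "translation_device": translation_device,
    "adverse_event": adverse_event,
})

# In aggregate, the proxy looks predictive of adverse events...
print(df.groupby("translation_device")["adverse_event"].mean())

# ...but holding the true risk factor fixed, it adds nothing: non-smoking,
# non-drinking device users are no riskier than anyone else.
print(df.groupby(["risky_habits", "translation_device"])["adverse_event"].mean())
```

In a real bias assessment, the same idea (whether a variable retains predictive power once legitimate risk factors are controlled for) is typically tested with regression or other formal statistical methods.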
The Massachusetts AG Settlement
On July 10, 2025, the Massachusetts Attorney General’s Office announced a $2.5 million settlement (the “Settlement”) with loan provider Earnest Operations LLC to resolve allegations of unlawful discrimination in student loan decisions. The case involved several claims of bias (which Earnest continues to deny), including:
- Automatic denials of loans for non-citizen applicants who did not have a green card;
- Consideration of the applicant’s cohort default rate (i.e., the average default rate among borrowers from the applicant’s college) when refinancing a student loan;
- Failure to assess variables for bias or test the models for disparate impact; and
- Inadequate information provided to applicants as to why their loan applications were denied.
To better understand the concerns of insurance and financial regulators over AI bias, and ways to address those concerns, it is helpful to focus on the cohort default rate (“CDR”) claim.
The CDR Claim and Proxy Discrimination
CDR is a metric produced by the U.S. Department of Education that describes the average rate of loan defaults associated with a specific higher education institution. The Massachusetts AG alleged that Earnest’s use of CDR in its underwriting model resulted in a disparate impact in approval rates and loan terms, with Black and Hispanic applicants more likely to be penalized than White applicants. Although the Settlement does not set out the AG’s specific reasoning, presumably the argument goes as follows:
- Certain Historically Black Colleges and Universities (HBCUs) and other schools that enroll larger shares of low-income Black or Hispanic students have a higher-than-average CDR.
- Because the student loan refinancing model penalizes every applicant from high-CDR schools, it disproportionately negatively impacts Black and Hispanic borrowers—especially those with strong credit scores and high incomes.
- The result is discriminatory because a borrower with pristine credit but an alma mater with a high CDR is penalized for the average behavior of past students, which likely has no direct connection to that borrower’s own default risk. A simple comparison of approval rates across groups, like the sketch below, can surface this kind of disparity.
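One common, if simplified, way to quantify such a disparity is the “four-fifths” (or 80%) adverse impact ratio: the approval rate for each protected group divided by the approval rate for the most favored group. The sketch below applies that ratio to invented approval numbers; the Settlement does not disclose Earnest’s actual approval rates or the specific statistical test the AG’s Office applied.

```python
# A minimal sketch of a disparate impact check on approval outcomes, using the
# "four-fifths" adverse impact ratio as one common benchmark. The counts below
# are hypothetical and are not drawn from the Settlement.
import pandas as pd

# Hypothetical approval outcomes by group (1 = approved, 0 = denied).
outcomes = pd.DataFrame({
    "group":    ["White"] * 1000 + ["Black"] * 400 + ["Hispanic"] * 400,
    "approved": [1] * 720 + [0] * 280    # 72% White approval (hypothetical)
              + [1] * 200 + [0] * 200    # 50% Black approval (hypothetical)
              + [1] * 220 + [0] * 180,   # 55% Hispanic approval (hypothetical)
})

rates = outcomes.groupby("group")["approved"].mean()
reference = rates["White"]  # most favored group in this hypothetical

for group, rate in rates.items():
    ratio = rate / reference
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group:9s} approval={rate:.0%}  impact ratio={ratio:.2f}  {flag}")
```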
Practical Takeaways
- Proxies are often unintentional. The AG did not need to prove that the lender was intentionally trying to discriminate against Black or Hispanic applicants. The use of proxy variables without proper bias assessments is enough to draw scrutiny and result in legal and reputational harm. That is especially true where, as here, the variable is not something the applicant can change or improve: the college the applicant attended, and with which the loan is associated, is permanent (unlike credit score, income, or savings).
- Aggregate or group variables should be carefully examined. Inputs like CDR and zip code can be proxies for protected classes and may be unconnected to the risk posed by a particular applicant. They should therefore be assessed to confirm that they have a causal connection to the risk for each individual applicant (e.g., a zip code that places a property near a flood zone is causally relevant to property insurance risk). One simple screening approach is sketched below.
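As a starting point for that kind of assessment, candidate model inputs can be screened for how strongly they track a protected characteristic before they are used, with strongly associated variables escalated for the causal-connection review described above. The sketch below is a simplified illustration: the features, threshold, and data are hypothetical, and real fair-lending testing programs involve more rigorous statistical and legal analysis.

```python
# A minimal sketch of screening candidate model inputs for proxy risk:
# measure how strongly each input is associated with a protected
# characteristic and flag strong associations for causal-connection review.
# All features, data, and the threshold are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 20_000

# Hypothetical applicant data: protected class plus three candidate inputs.
protected = rng.random(n) < 0.20
applicants = pd.DataFrame({
    "protected_class": protected.astype(int),
    "credit_score": rng.normal(700, 50, n),        # individual-level input
    "income": rng.lognormal(11, 0.4, n),           # individual-level input
    "cohort_default_rate": np.where(               # group-level input that
        protected,                                 # tracks the protected class
        rng.normal(0.12, 0.02, n),                 # in this hypothetical
        rng.normal(0.06, 0.02, n)),
})

REVIEW_THRESHOLD = 0.30  # hypothetical cut-off for escalation

for feature in ["credit_score", "income", "cohort_default_rate"]:
    corr = abs(applicants[feature].corr(applicants["protected_class"]))
    action = "escalate for causal review" if corr > REVIEW_THRESHOLD else "ok"
    print(f"{feature:20s} |corr with protected class| = {corr:.2f} -> {action}")
```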
The authors would like to thank Debevoise Summer Law Clerk Teddy Leeds Armstrong for his work on this Debevoise Data Blog.
To subscribe to the Data Blog, please click here.
The Debevoise STAAR (Suite of Tools for Assessing AI Risk) is a monthly subscription service that provides Debevoise clients with an online suite of tools to help them fast track their AI adoption. Please contact us at STAARinfo@debevoise.com for more information.
The cover art used in this blog post was generated by ChatGPT 4o, and content was partially generated by o3.