September 01, 2022

Duckworth Leads Colleagues in Calling for a Federal Review of How AI Impacts Employment Opportunities for Americans with Disabilities

 

[WASHINGTON, DC] – Today, U.S. Senator Tammy Duckworth (D-IL) wrote to Federal Trade Commission (FTC) Chair Lina Khan, Equal Employment Opportunity Commission (EEOC) Chair Charlotte Burrows and Secretary of Labor Martin Walsh of the Department of Labor (DoL) to call for a thorough review of how artificial intelligence (AI), algorithms and algorithmic bias might play a role in discriminating against people with disabilities when it comes to employment opportunities and hiring practices. Joining Duckworth on this letter are U.S. Senators Elizabeth Warren (D-MA), Kirsten Gillibrand (D-NY), Michael Bennet (D-CO), Mazie K. Hirono (D-HI), Richard Blumenthal (D-CT) and Ed Markey (D-MA).

“We write to respectfully request that the Federal Trade Commission (FTC), the Equal Employment Opportunity Commission (EEOC) and the Department of Labor (DoL) undertake a thorough review of artificial intelligence (AI), algorithms and algorithmic bias, including but not limited to hiring and worker productivity practices conducted by both employers and employment agencies that involve AI, as well as how such practices may discriminate against people with disabilities,” the Senators wrote in the letter.

The Senators concluded: “We urge you to ensure disability is a fundamental component of any algorithmic bias review conducted, and that the FTC, EEOC and DoL, including the Office of Disability Employment Policy, work closely together, and with us, to develop and enforce policies that advance the employment and economic well-being of Americans with disabilities.”

A full copy of the letter is available here and below:

Dear Chair Khan, Chair Burrows and Secretary Walsh:

We write to respectfully request that the Federal Trade Commission (FTC), the Equal Employment Opportunity Commission (EEOC) and the Department of Labor (DoL) undertake a thorough review of artificial intelligence (AI), algorithms and algorithmic bias, including but not limited to hiring and worker productivity practices conducted by both employers and employment agencies that involve AI, as well as how such practices may discriminate against people with disabilities. It is essential that the administration establish a full public record on these issues and be prepared to advance full employment and equity for Americans with disabilities, as required under the Americans with Disabilities Act (ADA) and the Rehabilitation Act.

According to the Bureau of Labor Statistics, people with disabilities are much less likely to be employed and more likely to be underemployed than people without disabilities. In 2021, 19.1 percent of people with disabilities were employed, compared to 63.7 percent of people without a disability. Algorithmic bias, which often plays a role in exacerbating these disparities, results from unrepresentative or incomplete training data sets, flawed or inaccurate information, data that reflects historical inequities and existing prejudices, and dependence on algorithms that may not adequately measure a candidate’s fit with a job. If left unaddressed, this bias often leads to decisions that collectively and disparately affect certain populations. Both the intended and unintended consequences of algorithms and algorithmic bias are important to identify and resolve.

The evidence that algorithms, particularly those that rely on AI, risk replicating and even amplifying human biases is well documented and continues to grow. However, to date, much of the work examining bias in AI hiring systems has focused on race and gender, finding that AI bias generally harms women, people of color, gender minorities and those at the intersections of these identities. Disability has been largely omitted from the AI bias conversation, even though disabled people are affected by these issues in differing ways across axes of identity. According to the Center for Democracy and Technology, “AI-powered hiring tools often fail to include people with disabilities when generating their training data” – as a result, algorithms modeled on a company’s previous hires do not reflect the potential of candidates with disabilities.

EEOC Chair Burrows has stated, “As many as 83% of employers, and as many as 90% among Fortune 500 companies, are using some form of automated tools to screen or rank candidates for hiring.” Even when algorithms account for disabilities, they do not account for the diversity of experiences that individuals with disabilities bring to the table. We must address these harmful AI-based assessments, whose algorithmic bias against disability is rooted in institutional and systemic ableism and assumptions about employability.

We applaud EEOC Chair Burrows for the agency-wide initiative the EEOC launched in 2021 to ensure that the use of AI in hiring decisions complies with the civil rights laws the EEOC enforces, and for the recent joint release by the EEOC and the Department of Justice of a Q&A document on the impact of AI under the ADA. The guidance is an important step, and we urge the administration to take immediate steps to ensure this guidance is followed.

Your leadership and expertise at the intersection of technological advancement and civil rights are crucial to ensuring that the harmful effects of algorithmic bias are mitigated through regulation, oversight and review. We are glad that the FTC, EEOC and DoL are taking steps to increase transparency and create accountability with regard to algorithm-based decision-making, particularly when people’s rights are at risk. To better understand the administration’s framework and plan for addressing these important issues, we respectfully request a briefing from the FTC, EEOC and DoL to receive an update on your work and to discuss your answers to the following questions:

  1. To what extent does algorithmic bias discriminate against people with disabilities today, particularly in hiring practices and worker productivity tools that use AI?
  2. How many complaints has your agency received relating to the effect of AI-based, automated tools on people with disabilities? How does your agency ensure that it is capturing the full impact of automated tools on people with disabilities?
  3. Are there industries in which this type of discrimination would be particularly pervasive? What are these industries and what are the factors that create those conditions?
  4. What hurdles and/or issues exist in ensuring that algorithmic bias that discriminates against people with disabilities is mitigated?

We also respectfully request an answer from Chair Khan to the following question:

  1. What actions is the FTC taking to ensure that algorithmic bias does not discriminate against people with disabilities? Please comment on the FTC’s position regarding discrimination on the basis of protected characteristics, particularly with respect to the use of algorithms, and on whether and how such discrimination constitutes an unfair or deceptive act or practice.

We urge you to ensure disability is a fundamental component of any algorithmic bias review conducted, and that the FTC, EEOC and DoL, including the Office of Disability Employment Policy, work closely together, and with us, to develop and enforce policies that advance the employment and economic well-being of Americans with disabilities. Thank you in advance for your consideration of our request. We look forward to receiving your timely response.

Sincerely,

-30-