OCR Warns of Discrimination Risks Caused by Use of AI

Lozano Smith Client News Brief
January 2025
Number 3

The U.S. Department of Education’s Office for Civil Rights (OCR) published new guidance, “Avoiding the Discriminatory Use of Artificial Intelligence,” highlighting how schools’ use of artificial intelligence (AI) could lead to civil rights violations involving discrimination based on race, sex, and disability. The guidance provides examples of AI-related scenarios that might trigger OCR investigations under federal civil rights laws. While AI tools can enhance educational activities, from scheduling assistance to student assessment, they also carry risks of violating students’ civil rights. OCR warns that these technologies may inadvertently discriminate against students based on protected characteristics, particularly when the AI systems rely on historical data that reflects past discriminatory practices.

OCR AI Guidance

While the guidance released by OCR does not create new legal requirements, it explains how OCR might investigate complaints about AI use under existing civil rights laws, including Title VI of the Civil Rights Act of 1964 (discrimination based on race, color, and national origin), Title IX (sex discrimination), and Section 504 of the Rehabilitation Act of 1973 (disability discrimination). The guidance provides numerous hypothetical examples of AI uses that could be grounds for an OCR investigation. The examples are not exhaustive, and OCR does not suggest that the circumstances described would necessarily result in a finding of a violation. Rather, the examples help officials understand what types of AI uses might constitute civil rights violations and take precautionary steps to avoid problematic AI uses.

The following are some of the examples included in the guidance:

  • The first example provided by OCR addresses the increasingly common practice of teachers turning to AI detection software that claims to identify computer-generated text and plagiarism, and how such tools could result in discrimination. In this example, a teacher used an AI tool to check student assignments for plagiarism and unauthorized use of generative AI, but the tool disproportionately flagged assignments authored by non-native English speakers. The school principal ignored parents’ complaints that their children had neither plagiarized nor used AI. OCR stated that these circumstances could lead to a civil rights investigation.
  • In another example, a school district used facial recognition technology for safety reasons. The AI technology repeatedly misidentified Black individuals and flagged them as suspicious. As a result, those students were repeatedly questioned by security personnel, pulled away from class, and embarrassed in front of other students and teachers. OCR stated that these circumstances could lead to a civil rights investigation.
  • In another example, a school district used AI proctoring software to detect students cheating during exams. The software used facial recognition and eye movement tracking to identify patterns consistent with cheating, but it was unable to distinguish those movements from similar movements caused by a disability. As a result, a student with a visual impairment was accused of cheating and failed the exam. Again, these circumstances could prompt an OCR investigation. (A simple audit of a tool’s flag rates across student groups, sketched below, is one way a district might surface this kind of disparity.)
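Each of these examples involves a tool whose adverse flags fall disproportionately on one group of students. The following is a minimal sketch, in Python with hypothetical data and group labels, of how a district might audit a tool’s flag rates before relying on it; the four-fifths comparison borrowed from employment law is used here only as an illustrative screen, not a standard drawn from OCR’s guidance.

```python
from collections import defaultdict

# Hypothetical flag records from an AI detection tool: (student_group, was_flagged).
# The group labels and counts are illustrative, not drawn from OCR's guidance.
records = [
    ("native_english", True), ("native_english", False),
    ("native_english", False), ("native_english", False),
    ("non_native_english", True), ("non_native_english", True),
    ("non_native_english", True), ("non_native_english", False),
]

def flag_rates(records):
    """Return the share of each group's work that the tool flagged."""
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for group, flagged in records:
        counts[group][0] += int(flagged)
        counts[group][1] += 1
    return {group: flagged / total for group, (flagged, total) in counts.items()}

rates = flag_rates(records)
print(rates)  # {'native_english': 0.25, 'non_native_english': 0.75}

# Illustrative screen: if one group's flag rate is far below another's
# (here, a ratio under 0.8, echoing the employment-law "four-fifths" rule),
# hold the tool for human review rather than acting on its flags.
if min(rates.values()) / max(rates.values()) < 0.8:
    print("Warning: flag rates differ sharply across groups; review before use.")
```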
OCR provided several other examples of AI uses that could trigger civil rights investigations, including:

  • An AI scheduling system that placed only one female student in a computer science class of 35 students based on historical enrollment data showing male students were twice as likely to take the course (see the sketch after this list for a toy illustration of this mechanism).
  • A school’s use of AI software to generate student risk scores for discipline based on past disciplinary data, which resulted in harsher punishments for Black students due to historical disciplinary disparities.
  • A school’s use of AI software to write Section 504 plans for students with disabilities that were nearly identical and failed to address individual student needs.
  • An AI program that inappropriately flagged students who required frequent restroom use (including students who were menstruating, pregnant, or had medical conditions) for mandatory counseling.
  • Ineffective AI-based language tools, including an English development program that failed to improve student skills and unreliable translation services that prevented meaningful communication with limited English proficient parents.
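The scheduling and risk-score examples share a common mechanism: a model fit only to historical data reproduces whatever disparities that data contains. The toy sketch below, using entirely hypothetical enrollment counts, shows how a recommender that simply learns enrollment probabilities from past data carries a historical 2:1 gender gap straight into its output.

```python
# Hypothetical historical enrollment counts (not real data): male students
# enrolled in computer science at twice the rate of female students.
historical = {
    "male":   {"cs": 40, "other": 60},
    "female": {"cs": 20, "other": 80},
}

def p_enroll_cs(gender):
    """Naive model: the probability of recommending CS is simply the
    historical enrollment rate for that student's gender."""
    counts = historical[gender]
    return counts["cs"] / (counts["cs"] + counts["other"])

for gender in historical:
    print(gender, p_enroll_cs(gender))
# male 0.4, female 0.2 -- the historical gap becomes the model's output,
# even though it says nothing about any individual student's interest or ability.
```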
Takeaways

To avoid discriminatory use of AI technology, schools should keep in mind the following principles:

  1. Weighing AI benefits and flaws before purchasing. Schools should communicate with AI vendors regarding the technology’s accuracy, the types of data the technology uses for learning, and the safeguards available to keep its conclusions nondiscriminatory. Schools should consider avoiding tools whose discriminatory uses or outcomes cannot be avoided or mitigated by human intervention.
  2. Human in the loop. School employees should review and retain control over information generated by AI tools, and intervene if the information is inaccurate or its use would be discriminatory (see the sketch after this list).
  3. Proper investigation. Each complaint regarding the discriminatory impact of AI tools should be properly addressed and thoroughly investigated.
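To make the “human in the loop” principle concrete, here is a minimal sketch, with hypothetical names and types, of a review gate that holds every AI recommendation for staff review before any action is taken; nothing in it is drawn from a particular product or from OCR’s guidance.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AIRecommendation:
    student_id: str
    action: str        # e.g., "flag assignment for plagiarism review"
    confidence: float  # the tool's self-reported confidence

def human_in_the_loop(rec: AIRecommendation,
                      reviewer_approves: Callable[[AIRecommendation], bool]) -> str:
    """Route every AI recommendation through a staff reviewer before acting.

    `reviewer_approves` stands in for a real review step (a queue, a
    dashboard, a conversation with the student); the system never acts
    on the AI's output alone.
    """
    if reviewer_approves(rec):
        return f"Action taken for {rec.student_id}: {rec.action}"
    return f"Recommendation for {rec.student_id} set aside after human review"

# Example: a reviewer policy that declines to act on low-confidence flags.
print(human_in_the_loop(
    AIRecommendation("S-123", "flag assignment for plagiarism review", 0.55),
    reviewer_approves=lambda r: r.confidence >= 0.9,
))
```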
If you have any questions about OCR’s guidance on the use of artificial intelligence in schools, or about civil rights compliance in general, please contact one of the authors of this Client News Brief or any attorney at one of our eight offices located statewide. You can also subscribe to our podcasts, follow us on Facebook, X (formerly Twitter), and LinkedIn, or download our mobile app.
 

As the information contained herein is necessarily general, its application to a particular set of facts and circumstances may vary. For this reason, this News Brief does not constitute legal advice. We recommend that you consult with your counsel prior to acting on the information contained herein.