

USAID’s Equitable AI Challenge: Addressing Gender Inequity in Artificial Intelligence

Our work is helping to mitigate AI gender bias in one of Mexico's leading education pilots - and providing important insights for the wider development sector to enable more equitable, inclusive, and transparent programming.


Client: USAID and Digital Frontiers
Dates: September 2022 – April 2023
Countries: Mexico, India, and global

As the use of AI in development solutions becomes more prevalent, there is a risk that bias embedded in the design and end use of tools will generate inequitable outcomes across genders.

USAID’s challenge aimed to source innovative approaches to prevent and respond to gender inequity in AI. The project, run under Digital Frontiers, sought to help decision-makers (such as designers, buyers, sellers, regulators, and users of AI technology) identify and address actual and potential gender biases in AI tools.

Our role

Alongside Athena Infonomics, PIT Policy Lab, and Women in Digital Transformation, we worked with AI tools being used in Guanajuato, Mexico to improve student outcomes as part of a World Bank project.

We supported the Mexican Government to understand bias in these datasets and provided toolkits to address these challenges. Learnings from the process were also shared in India to better explore the use of these toolkits in different environments.

The state of Guanajuato, Mexico, has a population of over 6 million, of which 51% of inhabitants are women. Its growing economy has seen a successful transition from traditional agriculture-based industries to more complex industries such as information technology and electronics.

Under the current administration (2018-2024), the Ministry of Education in Guanajuato has been working to ensure valid, reliable and up-to-date information about the state of the local education system. To do this, they have been developing and harmonizing databases with information regarding student attainment and overall school performance.

To identify at-risk students in higher education, the Ministry partnered with the World Bank, within a pioneering initiative called Educational Trajectories, to create an AI-based early warning system against school dropout. This approach aims to provide at-risk students with support and improve retention and graduation rates.

During the second half of 2022, the Ministry began training the algorithm with government data to get preliminary findings on its performance.

Methods and approaches

Itad with consortium partners supported this project by creating awareness about the importance of mainstreaming a gender perspective in Educational Trajectories’ upcoming phases. This included specifically accounting for and mitigating potential gender-based bias in both the databases and interventions resulting from the development of the AI early alert system.
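To make this concrete, a basic check of the kind such bias-mitigation toolkits support is comparing how often an early-warning model flags students of different genders. The sketch below is illustrative only: the data, field names, and threshold are hypothetical, not drawn from the Guanajuato system.

```python
# Illustrative group-fairness check on model outputs. All data below is
# synthetic; "at_risk" stands in for a hypothetical model flag.

def selection_rate(records, group):
    """Share of students in `group` flagged as at-risk."""
    flagged = [r["at_risk"] for r in records if r["gender"] == group]
    return sum(flagged) / len(flagged)

def disparate_impact(records, protected="female", reference="male"):
    """Ratio of selection rates between two groups.

    Values far from 1.0 suggest the model treats the groups differently;
    the common "four-fifths rule" treats ratios below 0.8 as a warning sign.
    """
    return selection_rate(records, protected) / selection_rate(records, reference)

# Synthetic example: the model flags 20% of female students but 40% of
# male students, so female students would receive less support outreach.
students = (
    [{"gender": "female", "at_risk": 1} for _ in range(20)]
    + [{"gender": "female", "at_risk": 0} for _ in range(80)]
    + [{"gender": "male", "at_risk": 1} for _ in range(40)]
    + [{"gender": "male", "at_risk": 0} for _ in range(60)]
)

di = disparate_impact(students)
print(f"Disparate impact (female vs male): {di:.2f}")  # 0.20 / 0.40 = 0.50
```

A ratio of 0.50, well below the 0.8 guideline, would prompt a closer look at the training data and features, which is the kind of reflection the consortium's toolkits are designed to prompt. Libraries such as IBM's AI Fairness 360 provide this metric alongside many others.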

We used a three-pronged strategy to support the Educational Trajectories team.

Building on our engagement in Mexico, we shared the implications of our work with other government representatives in Latin America. We also presented critical findings to state government leaders from Uttar Pradesh, Andhra Pradesh, Telangana, and Tamil Nadu in India — encouraging replicability of AI approaches in other regions while sharing lessons learned from the Mexico experience.

Outcomes and impact

AI Ethics Guide and Checklist

Our AI Ethics Guide and Checklist was published in July 2023. Together these resources are helping to ensure policymakers in Guanajuato understand the potential for AI systems to reinforce existing biases, replicate privacy violations, or simply exclude populations.

The AI Ethics Guide presents a broad overview of what AI is, the ethical concerns it creates, and how they can be addressed at national, sub-national, and municipal levels. To illustrate ethics concerns, the guide presents several case studies and provocative questions that allow decision-makers to reflect on the responsible use of AI in government systems. To support knowledge building, the guide also includes a glossary of AI terminology derived from USAID learning studies and a comprehensive literature review with varied country approaches to using AI for public services.

The Checklist for AI Deployment is a separate yet interconnected tool for policymakers and technical teams preparing to deploy or already deploying AI systems. The document seeks to inform policymakers on starting points for building ethical AI systems as well as prompt technical experts to reflect on whether the right ethical guardrails are in place for an AI-based approach. Leading users through six phases, starting from regulatory foundations to the desired functionality of an AI system, the checklist contains questions on regulations, business processes, data collection and use, system design, and decision-making for ethical AI deployment in different situations and contexts.

Policy recommendations

Reflecting on the implementation of the early warning system against school dropout in Guanajuato, the consortium offered actionable policy recommendations for decision-makers looking to mitigate biases and incorporate gender perspectives into AI systems. These recommendations can be adopted by a diverse range of organisations looking to explore AI and data policies in sectors like education, health, and financial services. Read a summary of the recommendations.

Going forward

By addressing how AI tools can propagate gender bias and suggesting solutions to unfair outcomes that occur as a result of this challenge, we hope to contribute to improved student performance in Guanajuato, Mexico.

We also hope that our work, IBM’s AI Fairness 360 toolkit, the AI Ethics Guide, and the AI Checklist benefit other development and humanitarian programmes, enabling them to better mitigate bias in low- and middle-income country datasets while ensuring their AI projects become more equitable, inclusive, and transparent.
