Artificial intelligence (AI) reflects the world we live in: AI tools are only as equitable as the data they are trained on, including all of the societal biases embedded in that data. These tools can generate inequitable outcomes across genders through biases introduced during data collection, model design, or end-use application. From rejecting women business owners’ credit applications to leaving young girls out of school dropout predictions entirely, these systems can inadvertently harm those who already struggle to gain equal access to opportunities, and the risks will only grow as AI proliferates worldwide. Addressing these inequities requires creative solutions so that everyone has a chance to benefit from AI technology. To foster an equitable and inclusive digital ecosystem, more effort is needed to identify innovative and timely approaches that help decision-makers address the gender biases, harms, and inequitable outcomes resulting from AI technology.
Meet our Winners
In October 2022, USAID and Digital Frontiers awarded five grantees to implement their approaches in alignment with the challenge’s objectives. In this blog, learn more about the winners of the Equitable AI Challenge and their creative methods for addressing some of AI’s most urgent gender equity challenges.
A Due Diligence Tool for Investors to Examine Algorithms
Investors play an important role in digital ecosystems and in the trajectory of future innovations. Yet investors have little visibility into how companies use algorithms and AI in their financial tools. AI-based credit scoring intended to expand access to loans or target financial products, for example, may be well-intentioned but produce inequitable results. The Accion Center for Financial Inclusion (CFI) developed a due diligence tool to help investors make better, gender-informed decisions about where and how to invest, and to verify whether an algorithmic tool perpetuates women’s historical marginalization within the financial sector. The tool allows impact investors and donors to push product designers to build better, more equitable processes for algorithm development. Through this approach, CFI helps investors confirm that their funding for artificial intelligence products and services does not exacerbate women’s inequity.
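CFI’s tool is a qualitative due diligence framework rather than software, but one concrete piece of evidence an investor could request from a portfolio company is a gender-disaggregated approval-rate comparison. As a minimal sketch, with entirely hypothetical decision data, the example below computes an adverse impact ratio of the kind such a review might surface.

```python
import pandas as pd

# Hypothetical decision log from a lender's credit-scoring model:
# one row per applicant, with self-reported gender and the model's decision.
decisions = pd.DataFrame({
    "gender":   ["female", "male", "female", "male", "female", "male", "male", "female"],
    "approved": [0,        1,      1,        1,      0,        1,      0,      1],
})

# Approval rate per gender group.
rates = decisions.groupby("gender")["approved"].mean()

# Adverse impact ratio: the lowest group approval rate divided by the
# highest. A common rule of thumb flags ratios below 0.8 for review.
ratio = rates.min() / rates.max()
print(rates)
print(f"Adverse impact ratio: {ratio:.2f}")
```

A single ratio is no substitute for the full framework in the Guide, but it shows how little data an investor needs to start asking pointed questions.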
Dive into the Equitable AI for Inclusive Finance Guide and Brief.
Dive into our blog post about the Inclusive Finance Guide and Brief.
Gender Accessibility in Health Chatbots
Mobile chatbots are an increasingly common way for healthcare companies to reach more patients. They can reduce the burden on healthcare providers by automatically triaging patients based on their symptoms or by providing rapid advice for routine health concerns. These chatbots often rely on a type of AI, natural language processing (NLP), to engage patients in an automated way. However, if not designed with different genders in mind, they may fail to correctly register the symptoms women report or give advice poorly suited to female health conditions. Through this project, the University of Lagos and Nivi partnered to create a gender-aware auditing approach within Nivi’s existing health chatbot deployment in Nigeria.
The team’s first step was to address and mitigate inequities present in the data used to train the chatbot’s underlying AI model. The team then introduced human auditors to test and correct the chatbot’s responses, ensuring they were relevant and accurate. User feedback was also gathered and incorporated into the chatbot’s programming to better respond to user needs. With a tool more attuned to the needs of each local population, Nivi’s health chatbot and digital health services should reach more women, help them make better-informed health decisions, and reduce overall healthcare costs.
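Nivi and UNILAG’s auditing pipeline is not published in full, so the following is only a minimal sketch of the general idea: human auditors judge a sample of chatbot responses, and the judgments are disaggregated by user gender to surface accuracy gaps. The records below are hypothetical.

```python
from collections import defaultdict

# Hypothetical audit records: each pairs a chatbot exchange with a human
# auditor's judgment of whether the response was relevant and accurate,
# tagged by the user's gender.
audit_log = [
    {"gender": "female", "correct": True},
    {"gender": "female", "correct": False},
    {"gender": "male",   "correct": True},
    {"gender": "female", "correct": False},
    {"gender": "male",   "correct": True},
]

# Aggregate auditor judgments per gender.
totals, correct = defaultdict(int), defaultdict(int)
for record in audit_log:
    totals[record["gender"]] += 1
    correct[record["gender"]] += record["correct"]

for gender, count in totals.items():
    print(f"{gender}: {correct[gender] / count:.0%} of audited responses judged accurate")

# A large gap between groups suggests the NLP model or its training data
# under-serves one gender and needs targeted correction.
```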
Learn more about Nivi and UNILAG’s data collection and auditing process here.
Improving Access to Credit with Gender-differentiated Credit Scoring Algorithms
For applicants without formal financial records or a history with banks, AI is increasingly used to determine creditworthiness from alternative data, such as someone’s mobile phone usage or whether they capitalize the names of their contacts. While this approach may seem gender neutral, these alternative credit-scoring models tend to pool data from men and women, and because of historical gender biases, pooling can place women at a disadvantage when seeking access to credit. The University of California, Berkeley, Northwestern University, and Texas A&M University partnered with RappiCard Mexico to address this challenge. The partnership developed an AI model that scores creditworthiness separately for men and women, aiming to increase both fairness and efficiency in credit scoring. The research aims to inform policymakers and practitioners about whether (and how) gender-aware, rather than gender-neutral, algorithms are fairer and more effective for women seeking access to credit. The findings will be shared with RappiCard and other fintech partners so they can apply the algorithm in their digital credit products.
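The partnership’s actual model and data are not public. As a minimal sketch of the underlying idea, the example below trains one pooled logistic-regression scorer and separate per-gender scorers on synthetic alternative data in which the feature-outcome relationship is assumed to differ by gender, then compares group-level accuracy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic alternative data: two behavioral features per applicant,
# a gender flag, and a repayment outcome.
n = 1000
X = rng.normal(size=(n, 2))
gender = rng.integers(0, 2, size=n)  # 0 = male, 1 = female (illustrative)

# Assume the second feature predicts repayment differently by gender.
y = ((X[:, 0] + np.where(gender == 1, 1.5, -0.5) * X[:, 1]) > 0).astype(int)

# Pooled model: one scorer for everyone, as in gender-neutral scoring.
pooled = LogisticRegression().fit(X, y)

# Gender-differentiated scoring: a separate model per group, so each
# group's creditworthiness signal is learned on its own terms.
models = {g: LogisticRegression().fit(X[gender == g], y[gender == g])
          for g in (0, 1)}

for g, label in [(0, "male"), (1, "female")]:
    mask = gender == g
    print(f"{label}: pooled accuracy {pooled.score(X[mask], y[mask]):.2f}, "
          f"differentiated accuracy {models[g].score(X[mask], y[mask]):.2f}")
```

On data like these, the pooled model averages over two conflicting relationships and under-serves at least one group, which is exactly the failure mode gender-differentiated scoring targets.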
Preventing Gender Bias in AI-based Early Alert Systems in Higher Education
Governments around the world are increasingly turning to AI to automate service delivery, increase efficiency, and improve transparency. While there are meaningful examples of government agencies using AI to address pressing social challenges, such AI-based tools are vulnerable to exacerbating existing biases and producing discriminatory outcomes. Itad, in partnership with Women in Digital Transformation, PIT Policy Lab, and Athena Infonomics, worked with the Mexican state of Guanajuato’s Ministry of Education to review an AI-based early alert system, the Early Action System for School Permanence (SATPE), which aimed to improve school retention and graduation rates by identifying at-risk students and providing them with support under the Educational Trajectories initiative.
Leveraging IBM’s open-source AI Fairness 360 Toolkit, the consortium identified a critical gender bias that would have prevented the model from accurately identifying up to four percent of at-risk girls in jeopardy of interrupting their studies. In short, four out of 100 girls would have missed the help they needed to stay in school. Based on these findings, the consortium mitigated this bias in the Ministry’s databases before feeding the corrected data back through the AI-based alert system, and provided the AI Fairness 360 toolkit to the Ministry of Education so it can continue to detect bias in future deployments.
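The consortium’s exact analysis is not reproduced here, but AI Fairness 360 is open source, and the sketch below shows its typical workflow on a small, hypothetical retention dataset: measure disparate impact by gender in the training data, then apply the toolkit’s Reweighing pre-processor to mitigate it. All column names and values are illustrative.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical student records: retained = 1 means the student stayed in school.
df = pd.DataFrame({
    "gender":     [0, 1, 0, 1, 0, 1, 1, 0],  # 1 = female, 0 = male
    "attendance": [0.9, 0.6, 0.8, 0.5, 0.7, 0.9, 0.4, 0.95],
    "retained":   [1, 0, 1, 0, 1, 1, 0, 1],
})

dataset = BinaryLabelDataset(df=df, label_names=["retained"],
                             protected_attribute_names=["gender"])
privileged, unprivileged = [{"gender": 0}], [{"gender": 1}]

# Measure bias in the data before it reaches the model. A disparate impact
# well below 1.0 means favorable outcomes skew toward the privileged group.
metric = BinaryLabelDatasetMetric(dataset, privileged_groups=privileged,
                                  unprivileged_groups=unprivileged)
print("Disparate impact:", metric.disparate_impact())

# Reweighing assigns instance weights so that outcomes in the training data
# become statistically independent of gender.
repaired = Reweighing(unprivileged_groups=unprivileged,
                      privileged_groups=privileged).fit_transform(dataset)
```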
The potential for AI systems to reinforce existing biases, replicate privacy violations, or simply exclude populations propelled the consortium to develop an Ethical Guide and Checklist to ensure policymakers in Guanajuato understood the risks of AI. The AI Ethics Guide presents a broad overview of what AI is, the ethical concerns it creates, and how those concerns can be addressed at national, sub-national, and municipal levels. To illustrate these concerns, the guide presents several case studies and provocative questions that prompt decision-makers to reflect on the responsible use of AI in government systems.
The Checklist for AI Deployment is a separate yet interconnected tool for policymakers and technical teams preparing to deploy or already deploying AI systems. The document seeks to inform policymakers on starting points for building ethical AI systems as well as prompt technical experts to reflect on whether the right ethical guardrails are in place for an AI-based approach.
The learnings from this case study were presented in a workshop with stakeholders from the Ministry of Education in the state of Tamil Nadu, India, to explore how lessons learned from the Mexico experience could transfer to the Indian context.
Dive into the AI Ethics Guide and Checklist for decision-makers to ensure responsible and equitable deployment of AI systems.
As a result of this work, the Itad Consortium won the 2023 EQUALS in Tech Awards under the Research Category. Every year, the EQUALS Global Partnership presents the EQUALS in Tech Awards to initiatives, projects, movements, organizations, and institutions around the world working to bridge the gender digital divide.
Evaluating Gender Bias in AI Applications using Household Survey Data
Household survey data are increasingly used to build AI tools that estimate poverty around the world. However, if the underlying data are biased, any AI tools built from them will reflect those biases. William & Mary’s AidData, in partnership with the Ghana Center for Democratic Development (CDD-Ghana), evaluated the impact of gender bias on poverty estimates generated using AI and USAID’s Demographic and Health Surveys (DHS) data. This project aimed to inform AI developers, researchers, development organizations, and decision-makers who produce or use poverty estimates. Drawing on AidData’s expertise in AI, geospatial data, and household surveys, as well as CDD-Ghana’s knowledge of the local context, the project produced a novel public good that elevates equity discussions surrounding AI tools in poverty alleviation. Overall, this work encouraged deeper consideration of potential bias in the data and the AI models built from it, while also providing a practical roadmap for evaluating bias in other applications.
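AidData’s full methodology is beyond the scope of this post, but the core check is straightforward to sketch: compare a model’s estimation error across household-head genders. The example below uses synthetic data in which the model is assumed to systematically underestimate wealth for female-headed households.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Synthetic evaluation set: surveyed wealth versus an AI model's estimate,
# tagged by the gender of the household head, as in DHS-style records.
n = 500
df = pd.DataFrame({
    "head_gender": rng.choice(["female", "male"], size=n),
    "true_wealth": rng.normal(0, 1, size=n),
})

# Assume the model underestimates female-headed households by 0.3 units.
offset = np.where(df["head_gender"] == "female", -0.3, 0.0)
df["predicted"] = df["true_wealth"] + offset + rng.normal(0, 0.5, size=n)

# Disaggregate error by group: a gap in mean error indicates the poverty
# estimates are biased with respect to household-head gender.
df["error"] = df["predicted"] - df["true_wealth"]
print(df.groupby("head_gender")["error"].agg(["mean", "std"]))
```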
The Implementation Journey
Through these diverse concepts, spanning geographic regions and types of approaches (from improving AI fairness tools and data systems, to strengthening the evidence base for AI fairness in development contexts, to developing and testing more equitable algorithms), the winners of the Equitable AI Challenge helped USAID and its partners better address and prevent gender biases in AI systems in countries where USAID works.
These awardees worked with USAID and its partners to implement their approaches and generate new technical knowledge, lessons learned, and tested solutions for addressing gender bias in AI tools. Through this implementation phase, USAID sought to foster a diverse and more inclusive digital ecosystem where all communities can benefit from emerging technologies like AI and, most importantly, where no members of these communities are harmed by them. This effort will inform USAID and the development community; provide a greater understanding of AI fairness tools and approaches; better determine what these tools capture and what they leave out; and inform what tactics are needed to update, adapt, and socialize them for broader use.
Forming an Equitable AI Community of Practice
USAID, in partnership with DAI’s Digital Frontiers and the Atlantic Council's GeoTech Center, established the Equitable AI Community of Practice to cultivate dialogue, foster community, and drive actions among Equitable AI Challenge participants, government, the private sector, researchers, and the larger AI community interested in advancing gender equity in AI. Building the community through six virtual sessions focused on various topics related to equitable AI, the Community of Practice now includes five grantees, numerous partnerships, and a LinkedIn group supporting sharing and dialogue with over 400 Community of Practice members representing over 30 countries.
The value of building a Community of Practice around nurturing equitable AI is that it establishes a platform for leading experts to collaborate directly with practitioners across the public and private sectors. Bringing in MIT, the AI Now Institute, the Oxford Internet Institute, Women at the Table, and Research ICT Africa, among other illustrious institutions, the Equitable AI Community of Practice galvanized conversations among government, industry, civil society, and academia. Most importantly, the community sought to welcome new perspectives on humanitarian assistance, AI governance, and gender representation in AI to build upon the Challenge.
The Community of Practice stressed the need to explore the opportunities, limitations, and tensions presented by AI as a digital solution. AI experts, advocates, and enthusiasts highlighted the need to integrate new approaches, voices, and backgrounds into developing AI systems. At a pivotal time for AI, the Community created a space where AI practitioners could envision a world where technology systems promote equity and justice — inspiring others to take the lead toward a more equitable future.
To keep up with the Equitable AI Community of Practice, join USAID’s LinkedIn Group.
For past Equitable AI Community of Practice sessions, visit our YouTube page or check out our newsletter landing page.
About the Competition
Launched in 2021, the U.S. Agency for International Development (USAID) Equitable AI Challenge, implemented through DAI’s Digital Frontiers, invested in innovative approaches to help identify and address actual and potential gender biases within AI systems, in particular those relevant to global development. USAID supported approaches to increase the identification, transparency, monitoring, and accountability of AI systems so that their outputs do not produce gender-inequitable results. In the first iteration of the Challenge, Digital Frontiers disbursed five grants using over $570,000 in award funding.
USAID chose 28 diverse semi-finalists to attend a three-week virtual co-creation event, which took place from February 14 to March 1, 2022. The event brought together select technology firms, startups, small and medium enterprises, civil society organizations, and researchers from around the world. The co-creation focused on the need for close collaboration between the public and private sectors, which allows for diverse perspectives, local solutions, and partnerships to form among AI technology developers, investors, donors, and users. With a desire to address AI’s most critical issues, including bias and inequity within AI systems, participants were encouraged to collaborate on solutions, identify partnerships, and strengthen their proposals—all while forming a larger community of practice.