New York City, New York
[Remarks as Prepared]
DEPUTY ADMINISTRATOR ISOBEL COLEMAN: Thank you, Eric [Loeb]. It’s so wonderful to be here today to talk about the potential of AI in development, specifically in the Global South.
I’d like to start by sharing a story. Back in 2019, health officials in India urgently wanted to find new, better approaches to tuberculosis control – in part because the Indian government had committed to eliminating the disease by 2025. India accounts for a quarter of the world’s tuberculosis cases, and medical professionals were facing a challenge beyond the usual difficulties of treatment. New strains of TB were emerging – strains resistant to traditional treatment – making it harder for doctors to deliver effective care, and threatening to undo the government’s progress.
The government of India needed a solution, so they partnered with USAID, which brought decades of public health experience to the table, along with advanced technology that could help identify trends and patterns that might otherwise be missed. Together, USAID and the Indian government worked with the Wadhwani Institute for Artificial Intelligence on a project called TRACE-TB. Powered by AI, TRACE-TB uses demographic and clinical data from patients with tuberculosis to help medical professionals improve diagnosis and treatment. Using an app on a mobile phone, health workers asked patients with suspected cases of TB simply to cough, and captured the sound on their phones. AI-enabled analysis of those coughs then helped health workers detect TB earlier and more reliably than traditional methods. And AI models trained on diagnostic strips – which, like COVID tests, use a patient’s sample to detect the disease – helped doctors automatically classify a patient as positive or negative, while simultaneously indicating which strain of TB the patient had. All of this saved a lot of time – time that allowed doctors not only to adapt treatment regimens without delay, but also to see more patients.
Just as importantly, TRACE-TB helps keep patients on their treatment regimens. Because some tuberculosis protocols require treatment over the course of several months to a year – meaning more than one visit to the doctor – TRACE-TB uses data such as location, patient age, and gender to help identify patients who are more likely to fall off their treatment, so that doctors can target those patients to receive care until they are cured. In the last year, TRACE-TB has screened over 100,000 individuals through its cough detection app – identifying more than 15,000 TB cases and increasing diagnoses by 12 percent. And it has helped identify more than 26,000 patients who need more intensive care, resulting in a 16 percent decrease in negative outcomes like discontinued treatment plans, permanent lung damage, and even death.
This is just one remarkable example of how we can use the power of AI to help solve big challenges. And we are not short on big challenges. Populations are growing, meaning that every year, more people need help than ever before. Diseases are evolving, meaning that treatments are becoming more difficult. And the climate is changing, meaning that people around the world are dealing with hotter temperatures, stronger storms, and increased resource scarcity. And much of this change is happening in the Global South.
So, we know we need all hands on deck – all tools on deck – to meet the moment.
Artificial intelligence can help. It can quickly identify patterns that no human could find in a lifetime, and those analyses can then inform how we scale our responses. But the truth is that while AI is one of our best bets, there are huge gaps between the Global North and the Global South in capability and capacity for digital tools, specifically AI.
First, AI requires significant digital infrastructure, like reliable internet and functional data centers. But the Global South does not reliably have the digital infrastructure in place to sustainably support AI tools. For many countries in the Global South, accessing cloud computing – which is needed to build robust AI models – can be ten to thirty times as expensive as it is in countries in the Global North. And there are massive gaps in power and connectivity, meaning that even powering the computers needed to build a tool, process the data, and then actually use it is uneven and inconsistent at best.

Second, because AI learns from data, it mimics the story that the data tell. This especially affects the Global South. For example, the Global South is home to over 7,000 unique languages – with almost a third of those in Africa alone. But most AI models are trained in only a few languages, and built in even fewer. In fact, more than half the datasets used for AI performance analysis across almost 30,000 research papers came from just twelve elite institutions and technology companies located in the United States, Germany, and Hong Kong. Not only do AI tools often miss information in these underrepresented languages; people in the Global South who do not speak English – which is, for example, the language ChatGPT is primarily trained on – also do not get the same utility from AI tools.
So, while AI offers a watershed opportunity to help us mobilize solutions to growing problems, it also threatens to amplify the digital divide between the Global North and the Global South. USAID is no stranger to this reality. In fact, we have monitored AI in development for over a decade, and used artificial intelligence in our programming since 2018 – and much of this work has been done in developing countries. We’ve learned a lot during this time that informs what we do today. I can say that there are three broad principles guiding our work in AI.
First, we are committed to fostering responsible AI. Responsible AI means making sure that, at every step of the way, we and the tools we use respect human rights and provide equitable analyses, and that we comprehensively understand the benefits and risks AI presents – helping us to make informed decisions about the technology we use, and how we use it. At USAID, we take “responsible AI” very seriously. Let me give you one example from the education sector in Mexico.
In 2021, the Mexican state of Guanajuato began building an AI tool to reduce the number of students who drop out of school early. In partnership with international organizations, it analyzed years of data from Guanajuato’s Ministry of Education – data like a student’s neighborhood, age, and school performance – to predict which students were more likely than others to drop out of school. Teachers and educational professionals could then use these predictions to provide targeted interventions to help keep these ‘at risk’ students in the classroom. But it turned out the tool reflected a gender bias, and did not produce predictions in a representative way.
Working with various organizations, USAID helped uncover that for every 100 students in the analysis, the AI tool was missing four girls. In real terms, this means that for every 100 students teachers helped to keep in school, four girls were not receiving assistance, and remained more vulnerable to dropping out.
Now, with AI, it is important to understand that how an algorithm reaches its decisions is not always clear – a problem known in the field as the ‘black box.’ So using AI responsibly means that, in addition to the tool, we have to add in very human elements: making sure that a person is always in charge, and making sure that the teams recommending the tool, using the tool, and implementing changes are representative and equitable.
So while the team in Mexico wasn’t able to identify precisely which input was causing the discrepancy, they knew they could correct for it after the prediction, but before the intervention – because AI helps to inform, but does not decide.
When our USAID team and the nonprofit consortium they were working with dug deeper, they realized that no one on the AI team of Guanajuato’s Ministry of Education – not on the technical team nor the leadership team – was a woman. So USAID, working with the PIT Policy Lab, supported the Guanajuato government in incorporating AI into its work responsibly. They brought more women onto their teams, at both the technical and decision-making levels. They committed to approaching AI in their work through the lens of equity.
And the nonprofit consortium USAID had assembled went on to produce an AI Ethics Toolkit, a Responsible AI Self-Assessment Checklist, and a Policy Recommendations Brief, all of which they then shared with the government and other stakeholders who were using, or considering using, AI tools in their programming.
In the end, with USAID’s support, the Government of Guanajuato not only started using AI to address the challenges it faced – it also started double-checking AI, identifying gaps and correcting for them, thereby advancing a more equitable AI while also adjusting the tool to perform better.
The second broad principle that guides our work in AI is strengthening digital ecosystems in the countries where we operate – from investing in digital infrastructure like open datasets and computing hardware, to supporting a digitally skilled workforce. One key part of strengthening digital ecosystems is supporting the next generation of AI technologists. So we have partnered with the Mozilla Foundation to expand its Responsible Computing Challenge, a competitive grant that offers $25,000 to selected institutions of higher education to support the development of their AI technology curricula – specifically addressing the potential implications of advanced technologies in different cultural contexts.
So far, USAID has awarded eight grants to higher education institutions in Kenya and nine in India, with plans to launch additional grants in Ghana and South Africa later this year. But we well recognize that we can’t do this alone.
So, our third broad principle in working with AI is partnering with the right people and organizations around the world – people like you. This includes supporting resources that can help inform the ways all of us use AI responsibly. One example is the Global Index on Responsible AI. Driven by the Global Center on AI Governance and supported by USAID, the index analyzes 138 different countries and their use of AI – taking into account nineteen different indicators, including gender equality, cultural and linguistic diversity, and human oversight – to help users understand how AI is used in those countries. Though there are many indexes that look at AI, this is the first to look at responsible AI – at safeguarding human rights for real people in real communities around the world.
Ultimately – and I know many in this room have heard this before – AI is like a hammer. It can be used to build, but it can also be used to tear down. What we have learned is that it can do both at the same time, reflecting inequities that exist in the real world – and leaving certain places, like the Global South, more exposed to the risks of AI than to its benefits.
So as we listen to the panel today, I urge you all to start your AI discussions clear-eyed about the challenges that AI exacerbates, and the opportunities for inclusion it misses. Instead of asking, “Who is this tool helping?”, consider asking questions like: “Who is not being represented by this data?” and “Who is being harmed by this technology?” And begin your work by addressing those questions. Once we do that, once we are mindful of the risks and harms that AI can create, as well as the benefits and solutions, we can begin to apply AI in equitable, inclusive, and thoughtful ways – ways that support all regions of the world, and the people who need that support most.