USAID stewards public resources to promote sustainable development in countries around the world. Reflecting the intent of the authorizing legislation of the U.S. Agency for International Development (the Foreign Assistance Act of 1961, as amended), USAID pursues this goal through effective partnerships across the U.S. Government, with partner governments and civil society organizations, and with the broader community of donor and technical agencies. The Agency applies the Paris Declaration principles of ownership, alignment, harmonization, managing for results, and mutual accountability.
To fulfill its responsibilities, USAID bases policy and investment decisions on the best available empirical evidence, and uses the opportunities afforded by program implementation to generate new knowledge for the wider community. Moreover, USAID commits to measuring and documenting program achievements and shortcomings so that the Agency’s multiple stakeholders gain an understanding of the return on investment in development activities.
USAID recognizes that evaluation, defined in Box 1, is the means through which it can obtain systematic, meaningful feedback about the successes and shortcomings of its endeavors. Evaluation provides the information and analysis that prevent mistakes from being repeated and that increase the chance that future investments will yield even more benefits than past investments. While evaluation must be embedded within a context that permits evidence-based decision-making and rewards learning and candor more than superficial success stories, its practice is fundamental to the Agency’s future strength. This policy builds on the Agency’s long and innovative history of evaluation. Since the 2011 release of the Evaluation Policy, USAID has worked to improve both the quantity and the quality of its evaluations to inform development programming that ultimately achieves better results. The number of commissioned evaluations has rebounded from an annual average of about 130 in the five years prior to the 2011 Evaluation Policy to an annual average of 194 in recent years. USAID also continues to strengthen methodological rigor, improve evaluation quality, and increase the utilization of its evaluations. The Agency offers classroom training in evaluation, as well as a number of resources to improve methodological quality, objectivity, access to evaluation findings, and the use of evaluation conclusions in decision-making.
This policy responds to today’s needs. High expectations exist for respectful relationships among donors, partner governments, and beneficiaries. Many stakeholders are demanding greater transparency in decision-making and disclosure of information. Development activities encompass not only the traditional long-term investments in development through the creation of infrastructure, public sector capacity, and human capital, but also shorter-term interventions to support and reinforce stabilization in environments facing complex threats. All of these features of the current context inform a policy that establishes higher standards for evaluation practice, while recognizing the need for a diverse set of approaches.
This policy is intended to provide clarity to USAID staff, partners, and stakeholders about the purposes of evaluation, the types of evaluations that are required and recommended, and the approach for designing, conducting, disseminating, and using evaluations. While it primarily guides staff decisions regarding the practice of evaluation within programs managed by USAID, it also communicates USAID’s approach to evaluation to implementing partners and key stakeholders.
This policy draws in significant ways on the evaluation principles and guidance developed by the Organization for Economic Cooperation and Development (OECD) Development Assistance Committee (DAC) Evaluation Network. In addition, the policy is consistent with the Department of State Evaluation Policy, and USAID will work collaboratively with the Department of State Bureau of Resource Management to ensure that the two organizations’ guidelines and procedures with respect to evaluation are mutually reinforcing. USAID will also work closely with the Department of State’s Office of the Director of U.S. Foreign Assistance in its efforts to strengthen and support sound policies, procedures, standards, and practices for the evaluation of foreign assistance programs.
Finally, this policy helps to implement the Foreign Aid Transparency and Accountability Act of 2016 and the Foundations for Evidence-Based Policymaking Act of 2018 for USAID and works in concert with existing Agency policies, strategies, and operational guidance, including those regarding project and activity design, evaluation-related competencies of staff, performance monitoring, knowledge management, and research management. The policy is operationalized in USAID’s Automated Directives System (ADS) Chapter 201 Program Cycle Operational Policy.
BOX 1: CONCEPTS AND CONSISTENT TERMINOLOGY
To ensure consistency in the use of key concepts, the terms and classifications highlighted below will be used by USAID staff and those engaged in USAID evaluations.
Evaluation is the systematic collection and analysis of data and information about the characteristics and outcomes of one or more organizations, policies, programs, strategies, projects, and/or activities, conducted as a basis for judgments to understand and improve effectiveness and efficiency and timed to inform decisions about current and future programming. Evaluation is distinct from assessment, which is forward-looking, and from informal reviews of projects.
- Impact Evaluations measure changes in a development outcome that are attributable to a defined intervention, program, policy, or organization. Impact evaluations use models of cause and effect and require a credible and rigorously defined counterfactual to control for factors other than the intervention that might account for observed changes. Impact evaluations in which beneficiaries are randomly assigned to either a treatment or a control group provide the strongest evidence of a relationship between the intervention under study and the outcome measured (see the illustrative sketch following this list). Impact evaluations must use experimental or quasi-experimental designs, and all impact evaluations must include a cost analysis of the intervention or interventions being studied.
- Performance Evaluations encompass a broad range of evaluation methods. They often incorporate before-and-after comparisons but generally lack a rigorously defined counterfactual. Performance evaluations can address descriptive, normative, and/or cause-and-effect questions. They can focus on what a particular project or program has achieved (at any point during or after implementation); how it was implemented; how it was perceived and valued; and other questions that are pertinent to design, management, and operational decision-making. Performance evaluations include the following types: Developmental Evaluation, Formative Evaluation, Outcome Evaluation, and Process or Implementation Evaluation.
- Performance Monitoring is the ongoing and systematic collection of performance indicator data and other quantitative or qualitative information to oversee implementation and understand progress toward measurable results. Performance monitoring includes monitoring the quantity, quality, and timeliness of activity outputs within the control of USAID or its implementers, as well as the monitoring of activity, project, and strategic outcomes expected to result from the combination of these outputs and other factors.
- Performance Indicators measure the expected outputs and outcomes of strategies, projects, or activities based on a Mission’s Results Framework or a project or activity logic model. Performance indicators help show the extent to which a Mission or Washington Operating Unit is progressing toward its objective(s), but alone they cannot tell a Mission or Washington Operating Unit why such progress is or is not being made.
- Performance Management is the systematic process of planning and defining a theory of change and associated results through strategic planning and program design, and of collecting, analyzing, and using information and data from monitoring, evaluations, and other learning activities to address learning priorities, understand progress toward results, influence decision-making and adaptive management, and ultimately improve development outcomes.
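To make the counterfactual logic in the impact evaluation definition above concrete, the minimal sketch below is illustrative only and not part of the policy; the synthetic data, the assumed effect size, and all variable names are assumptions chosen for demonstration. It simulates random assignment of beneficiaries to treatment and control groups and estimates the average treatment effect as the difference in group means, using Python with NumPy.

```python
# Illustrative sketch only, not USAID guidance. The data are synthetic and
# the +5.0 "true effect" is an assumption chosen for demonstration.
import numpy as np

rng = np.random.default_rng(seed=42)

n = 1000                                # beneficiaries enrolled in the study
treated = rng.integers(0, 2, size=n)    # random assignment: 1 = treatment, 0 = control

# Simulate an outcome (e.g., household income) with a true effect of +5.0 units.
baseline = rng.normal(loc=100.0, scale=15.0, size=n)
outcome = baseline + 5.0 * treated + rng.normal(scale=10.0, size=n)

# Because assignment was random, the control group approximates the
# counterfactual: what treated beneficiaries would have experienced without
# the intervention. The difference in group means estimates the average
# treatment effect.
t, c = outcome[treated == 1], outcome[treated == 0]
effect = t.mean() - c.mean()

# Standard error for a difference in independent means.
se = np.sqrt(t.var(ddof=1) / t.size + c.var(ddof=1) / c.size)

print(f"Estimated average treatment effect: {effect:.2f} (standard error {se:.2f})")
```

When random assignment is not feasible, a quasi-experimental design would instead construct a comparison group through techniques such as matching or difference-in-differences; such designs can still support causal claims, though generally with weaker evidence than randomization provides.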