Partnerships between academic researchers and implementing organizations create opportunities for both parties to advance shared interests, deepen their own areas of expertise, and add to the growing network of individuals and institutions dedicated to using rigorous research to inform program design. For researchers who conduct field experiments, research partnerships can enable randomized experiments in real-world settings and can thus speed scientific learning while enhancing public welfare. For implementing organizations, such partnerships can speed learning about organizational objectives, not only because they bring the latest research methodology and academic literature to bear on a topic and add a new perspective to a team, but also because academic researchers' incentives involve peer-reviewed publication and fairly rigorous transparency practices. All of this adds up to evaluations and research designs that, in principle, should provide more actionable information to an NGO or government.

There is no set way to identify, enter into, and conduct a research partnership. There are, however, lessons learned from previous partnerships. These lessons have led some organizations to standardize the process through which they engage academic researchers as partners. The following summary is part of an ongoing series on Academic/Practitioner Partnership Models. In each entry, we lay out the structure of one organization’s model for engaging in research partnerships, discuss the choices made in devising that model alongside the reasons for doing so, and outline the lessons learned from the implementation of the model.

If your institution or partner organization uses a different model for research partnerships, or wishes to add to the conversation, please contact us at admin@egap.org so we can learn more about that model and add it to the series.

Model: USAID Center of Excellence for Democracy, Human Rights, and Governance

In 2012, USAID transformed its Democracy and Governance Office into a Center of Excellence for Democracy, Human Rights, and Governance. This new Center aimed to add to the evidence base for what works and what does not in Democracy, Human Rights, and Governance (DRG) development programming via rigorous testing of USAID DRG programs. For reasons of accountability, but also to provide a public good to the global DRG community as the largest donor of Democracy and Governance assistance, USAID embarked on a program of designing, fielding, and publicly reporting randomized controlled trials (RCTs) that tested some of the theories of change (causal mechanisms) around which the development community had the most uncertainty.

From 2012 to 2019, more than 25 RCTs were designed with USAID programs around the world (in addition to several other forms of research, such as longitudinal studies and process evaluations). The RCTs generated knowledge about core DRG programming implemented around the world. Research questions ranged from the effects of local market services on willingness to comply with taxation, to the efficacy of radio docu-dramas in reducing the propensity to violence in fragile settings, to the effects of various forms of electoral messaging on political behavior, among many others.

This brief outlines the process the DRG Center employed to define the questions of interest, carry out this research, and use RCTs to improve programming.

Clinics

A key part of the outset of each impact evaluation (IE) was a week-long workshop that we called an IE Clinic (Ed: see here for a summary of one of the Clinics). These Clinics brought together USAID field offices (Missions) that were in the early stages of program design with academics whose research focused on the type of DRG program USAID was designing (e.g., increasing political participation of women and youth, increasing electoral accountability of elected officials). During each Clinic, mornings were spent discussing the existing evidence, the program design antecedents, and methods/basic statistics for causal inquiry. Afternoons were spent in working groups made up of the USAID field officers, the academic identified to work with them, and a DRG Center Learning Officer. In these workshopping sessions, a research question of interest to both the academic and the USAID Mission was identified, and (usually) a sub-component of the (larger) program was designed to test the identified causal mechanism via a randomized controlled trial (quasi-experimental designs were discussed but, in the end, none were chosen as a final research design).

The working group sessions focused on answering questions to guide the research design and on grappling ahead of time with some of the trade-offs inherent to applied field experiments. In these sessions, USAID staff and academics discussed questions such as: what is the population of interest; how can we randomize ethically (without unduly denying the program to vulnerable populations); and what information do we have to estimate likely effect sizes. This process greatly advanced the design and headed off some of the more vexing trade-offs. (Ed: for more information on the up-front conversations to have with your research partners, see the MIT GOV/LAB's guide to difficult conversations.) It also resulted in programs designed with the latest evidence in mind: what we know about how public information campaigns reach people, about the best way to encourage young people to run for office, or about why legislators show up to vote.
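
One recurring piece of those working-group conversations was translating sample-size constraints into realistic expectations about detectable effects. Below is a minimal sketch of that arithmetic for a simple two-arm comparison; the sample size, significance level, and power target are illustrative assumptions rather than parameters from any actual USAID design, and the calculation uses the statsmodels Python package.

```python
# A minimal, hypothetical sketch of the effect-size arithmetic discussed in a
# Clinic working group. The sample size, alpha, and power target below are
# illustrative assumptions, not figures from any actual USAID design.
from statsmodels.stats.power import NormalIndPower

solver = NormalIndPower()

n_per_arm = 1000  # hypothetical: 1,000 respondents in treatment, 1,000 in control

# Solve for the minimum detectable effect (in standard-deviation units) at
# 80% power and a two-sided alpha of 0.05, with equal allocation across arms.
mde = solver.solve_power(
    effect_size=None,
    nobs1=n_per_arm,
    alpha=0.05,
    power=0.80,
    ratio=1.0,
    alternative="two-sided",
)

print(f"Minimum detectable effect: {mde:.3f} standard deviations")
```

A number like this, set against effect sizes reported in the relevant literature, is the kind of input that keeps expectations grounded rather than wishful.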

Procurement

Having built an intervention arm, or factorial treatment arms, into the larger USAID program and having determined an appropriate population and identification strategy, the USAID officers returned to post to continue the process of overall program design and Mission approval. Throughout the process, they stayed in touch (weekly or bi-weekly) with the academic and the DRG Center Learning Officer to continue working out the design of the intervention, and also to work with the USAID Mission on the procurement of the program itself, that is, the contracting process by which USAID hires a firm or NGO to carry out the intervention to be tested. Involving the academic in this process helped the USAID Missions by carrying some of the burden of program design, by helping ensure the design's technical parameters would stay intact to answer the question identified, and by infusing the program with an evidence base from the academic literature.
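
For readers less familiar with the term, the sketch below illustrates what factorial treatment arms can look like in practice: a hypothetical 2x2 design that crosses two program components and randomly assigns units to the four resulting cells. The component names, unit names, and sample size are illustrative assumptions, not details of any actual USAID program.

```python
# Hypothetical 2x2 factorial assignment: crossing two illustrative program
# components (a civic-education campaign and an SMS meeting reminder) and
# assigning villages to the four resulting cells with equal probability.
import random

random.seed(2024)  # fix the seed so the assignment is reproducible

villages = [f"village_{i:03d}" for i in range(1, 201)]
random.shuffle(villages)

arms = [
    ("civic_ed", "sms"),
    ("civic_ed", "no_sms"),
    ("no_civic_ed", "sms"),
    ("no_civic_ed", "no_sms"),  # pure control
]

# Equal allocation: every fourth shuffled village lands in the same cell.
assignment = {v: arms[i % len(arms)] for i, v in enumerate(villages)}

for arm in arms:
    n_assigned = sum(1 for a in assignment.values() if a == arm)
    print(arm, n_assigned)
```

A design like this lets a single program test two causal mechanisms at once, including whether the two components reinforce each other.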

Implementation

Once the USAID Officer finished program design, an award was made to a firm or NGO to implement the program, with the RCT design embedded within it. As the program/intervention was launched, we found it helpful to hold a workshop with the awardee as early as possible to explain to those implementing the program what an RCT is and why USAID was interested in pursuing this learning through that particular project, and to incorporate information from the implementer and hear any concerns early on. Because of the relative infrequency of RCTs in large donor DRG programs, we often faced some trepidation about why USAID was interested in conducting these tests, and sometimes uncertainty over how the trial would affect the implementer's ability to reach their goals with the project. In our experience, these questions could always be answered, but it was an important conversation to have early on (and was usually revisited, especially if new members joined the project team).

As the intervention was carried out, the academic(s), DRG Learning team officer, USAID field office, and program implementer stayed in close touch to ensure the program was indeed following the research design protocol and to make adjustments as unexpected issues arose. In addition, this frequent communication was critical for understanding how the intervention unfolded on the ground, for gathering any ancillary data (outside of the research team’s own data collection) from the implementing organization, and for planning and tracking the timing of the research team’s rounds of data collection.

Lessons Learned

Overall, we learned that while this process took some time at the outset of a program, the attention paid to careful program design and realistic expectations was itself a major win for USAID. Moreover, by working continuously with their academic partner(s), USAID field Missions were able to make smart programming choices as unexpected events occurred mid-course.

While this process was new to many and did require extra time on the part of the USAID field officers, those involved came away with a great sense of pride in the research itself, and in the knowledge that they had designed and implemented a program that was based on the best available evidence, that went in with eyes wide open to probable effect sizes (as opposed to wishful thinking), and that contributed to the broader evidence base. Many have taken this model to other parts of USAID, and findings have begun to influence programming beyond the individual programs tested. The methods we pioneered for creating, nurturing, and harvesting the fruits of collaboration continue to develop and are now in practice elsewhere in the US government and the global development community.

Going Forward

The idea of capitalizing on development interventions to learn what truly works in development is not unique: NGOs such as Mercy Corps, Equal Access, Search for Common Ground, and others have taken it upon themselves to use RCTs to test their theories of change. Bilateral donors, however, given the scale and frequency with which they implement programs, have a unique role in evidence generation, and since they are publicly funded it can be argued that they have a duty to add to the evidence base.

In awarding the 2019 Nobel prize in economic sciences, the Royal Swedish Academy of Sciences underscored that it chose scholars who have catalyzed this experimental approach to obtaining concrete answers because that research, and those answers, dramatically improve our ability to fight poverty in practice:1

This year’s Laureates have introduced a new approach to obtaining reliable answers about the best ways to fight global poverty. In brief, it involves dividing this issue into smaller, more manageable, questions – for example, the most effective interventions for improving educational outcomes or child health. They have shown that these smaller, more precise, questions are often best answered via carefully designed experiments among the people who are most affected.

By including these evidence-generating studies in even a small subset of total programming, a bilateral donor is in a position to build up a critical mass of knowledge about what works, and how it works best, in development programming. The model described above is an example of how a bilateral donor can work with academia to embed extant evidence into, and create future research with, development interventions at very little additional cost. By leveraging their substantial programmatic weight, bilateral donors can add to the evidence base about what works in international development and improve the efficacy of their own programming at the same time.


  1. https://www.nobelprize.org/prizes/economic-sciences/2019/press-release/