Partnerships between academic researchers and implementing organizations create opportunities for both parties to advance shared interests, deepen their own areas of expertise, and add to the growing network of individuals and institutions dedicated to using rigorous research to inform program design. For researchers who conduct field experiments, research partnerships make randomized experiments possible in real-world settings, speeding scientific learning while enhancing public welfare. For implementing organizations, such partnerships can accelerate learning about organizational objectives: they bring the latest research methodology and academic literature on a topic, add a new perspective to the team, and draw on academic researchers’ incentives for peer-reviewed publication and rigorous transparency practices. Together, these elements yield evaluations and research designs that, in principle, provide more actionable information to an NGO or government.
There is no set way to identify, enter into, and conduct a research partnership. There are, however, lessons learned from previous partnerships. These lessons have led some organizations to standardize the process through which they engage academic researchers as partners. The following summary is part of an ongoing series on Academic/Practitioner Partnership Models. In each entry, we lay out the structure of one organization’s model for engaging in research partnerships, discuss the choices made in devising that model alongside the reasons for doing so, and outline the lessons learned from the implementation of the model.
If your institution or partner organization uses a different model for research partnerships, or wishes to add to the conversation, please contact us at admin@egap.org so we can learn more about that model and add it to the series.
The Massachusetts Institute of Technology Governance Lab (MIT GOV/LAB) is a research group and innovation incubator that aims to change practices around corruption, government accountability, and citizen voice. One part of our mission is to produce and promote engaged scholarship, which we define as rigorous research that is co-created by practitioner organizations and grounded in the field, thus increasing the probability that practitioners will make use of the insights and evidence arising from the collaboration. Because the rigor of research alone is not enough to ensure uptake of findings, our engaged scholarship model is based on values of equitable exchange and respect between practitioners and academics.
We focus on sustained, multi-year, and multi-project partnerships between academics and practitioners working iteratively to identify questions that are important to answer and solutions that are important to test. Our approach produces research and evidence that help practitioner organizations make operational decisions as well as provide general knowledge to the larger field. We are committed to fieldwork, and draw on theories and methodologies from a variety of academic disciplines and subfields. See below for a working illustration of our model.
Figure: MIT GOV/LAB Engaged Scholarship Model
The key element in our model is the Engaged Scholarship loop. The Engaged Scholarship loop is based on a classic engineering loop of ideation, prototyping, testing, and redesign. In this loop we introduce both rigor and equity: rigor in the range and relevance of the methodologies applied, and equity in the continuous, structured dialogue with practitioner partners on the choices made as we travel through the loop.
Systematic fieldwork and research to identify and rule out possible intervention designs. The goal in this step is to scope out the parameters of the project. These parameters can include 1) the target population, 2) the modality of delivering the intervention, 3) the time constraints of the intervention, 4) the political and social constraints of the intervention, and other relevant issues. Collaboration at this point might include systematic field research to collect facts, stories, and descriptive data, as well as analysis of existing administrative and survey data, in order to understand the context, the target population, and the political economy of the problem. It may also include reviewing the existing evidence base for relevant solutions and learning from history and other countries. Using existing theories and drawing from multiple disciplines to reason about which ideas are likely to work and which are likely to fail, given the context and political economy, can also be useful here. Collaborative research between practitioners and academics, especially in-depth fieldwork to understand the context and the perspectives of people on the ground, is crucial at this point, although it is often overlooked and underappreciated in quantitative research. We recommend making this step as collaborative as possible between the academic and practitioner sides, with the practitioner side providing direction on the scope of the project.
Design and implement pilot intervention. In this step, the goal is to design and implement a pilot intervention, developed between practitioners and academics, that takes into consideration research findings from step (a). We recommend letting the practitioner side take the lead in designing the intervention itself, with advice from the academic side, because the intervention has to be consonant with their mission and capacity (see our guide below on how to navigate these conversations). There are important opportunities for academics to contribute to the design by working with practitioners to articulate, clarify, and refine their theory of change for the selected intervention. Academics can also help practitioners learn by co-designing short feedback loops to allow for reflection and mid-course corrections.
Test and analyze the pilot intervention. In this step, the goal is to evaluate the pilot intervention developed in step (b). Results from the evaluation should feed into the design of the intervention and the full study design (for example, by determining when and how data are collected or who participates in the intervention).
Collaborative research at this stage can help assess whether the pilot intervention had the short- and medium-term impacts that its theory of change posited, why it may have failed or succeeded, whether it affected different groups differently, and whether there were unanticipated consequences. The results of research at this step determine the next step in the process: should the solution be implemented and evaluated at scale, or should it be redesigned and improved? After agreeing on the goals and scope of the research, we recommend letting the academic side take the lead in conducting the evaluation, with insight provided by the practitioner side.
Redesign and improve the solution. In this step, the team works together to review steps (a-c) and make changes or tweaks to develop a stronger intervention. Repeating the loop at this point provides a focal point for building productively on what practitioners and academics have learned from the failures and successes of the previous iteration. Continuing with another iteration strengthens the relationship and builds capacity for the synthesis of lessons learned and the coherent accumulation of knowledge and evidence.
Below is a selection of engaged scholarship projects with practitioner partners to demonstrate the model’s outputs. In addition to developing applied outputs in the form of research briefs, we also aim to write learning cases to document challenges and outcomes of the partnership itself, including how we traversed the engaged scholarship loop in the model.
Rapid Survey to Inform Covid-19 Response in Sierra Leone. In this project, designed at the outset of the pandemic, we partnered with a local research and policy group as well as government agencies to ensure that the research was useful and that results would be taken up by relevant decision-makers. Results from our collaborative research helped inform the design of the second lockdown in Sierra Leone. (See our research brief, trust outcomes, and lessons learned.)
Examining the Impact of Civic Leadership Training in the Philippines. Through qualitative fieldwork in the Philippines, we adapted the research based on pilots our partners had done previously. The research focused on community-level training of citizens and officials together and separately, and evaluated the effects of cooptation of community leaders by local officials. (See our research brief and learning case, and longer research report.)
Testing Access to Information in East Africa with Mystery Shoppers. We conducted fieldwork to identify the target populations and the best modality for delivering the intervention, guided by the question: how can ordinary citizens in Kenya request information about local service provision and government performance? With our partners, we field-tested several iterations of a mystery shopper experiment to better understand ordinary citizens’ experience of requesting information from their local government. (See our research briefs on results from Kenya and Tanzania, and our learning case on the partnership model.)
A Novel Approach to Civic Pedagogy: Training Grassroots Organizers on WhatsApp. We collaborated with Grassroot, a civic technology organization based in South Africa, to pilot and evaluate a course on leadership development for community organizers taught over WhatsApp, one of the world’s most popular messaging apps. This project went through many iterations of the engaged scholarship loop, especially in refining the research question and adapting the intervention through focus groups, on- and offline simulations, and observed behavior. (See our research brief, learning case, and guide to teaching on WhatsApp.)
Our model has a number of important strengths:
Challenges of the model include:
In addition to using engaged scholarship in our work, we are also interested in producing public goods (guides, tools, and case studies) to support others exploring engaged scholarship in their own contexts and relationships.
How to have difficult conversations. As referenced above, we developed a practical guide for engaged scholarship with recommendations on how to strengthen and improve academic-practitioner research collaborations (see also EGAP’s methods guide on 10 Conversations that Implementers and Evaluators Need to Have).
Risk and Equity Matrix. An exercise for practitioner-academic research teams to systematically consider potential impacts on the range of actors involved in the research process.
As with all models, there is room to improve. We continue to experiment with a range of possible improvements, including group and collaborative learning in addition to strong bilateral partnerships. Feedback is welcome at mitgovlab@mit.edu.