This is one of a series of working papers from “RISE”—the large-scale education systems research programme supported by the UK’s Department for International Development (DFID), Australia’s Department of Foreign Affairs and Trade (DFAT), and the Bill & Melinda Gates Foundation.
Support for gender equality has risen globally. Analyses of this trend typically focus on individual- and/or country-level characteristics, overlooking sub-national variation. City-dwellers are more likely to support gender equality in education, employment, leadership, and leisure. Why is this? This paper investigates the causes of rural-urban differences through comparative, qualitative research. It centres on Cambodia, where the growth of rural garment factories makes it possible to test whether female employment fosters support for gender equality, potentially narrowing rural-urban differences, or whether other important aspects of city living accelerate that support. Drawing on rural and urban fieldwork, the paper suggests why social change is faster in Cambodian cities. First, cities raise the opportunity costs of gender divisions of labour, given higher living costs and more economic opportunities for women. Second, cities increase exposure to alternatives: people living in more interconnected, heterogeneous, densely populated areas are more exposed to women demonstrating their equal competence in socially valued, masculine domains. Third, cities offer more avenues to collectively contest established practices. Association and exposure reinforce growing flexibility in gender divisions of labour. By investigating the causes of sub-national variation, this paper advances a new theory of growing support for gender equality.
Evaluations of development projects are conducted to assess their net effectiveness and, by extension, to guide decisions regarding the merits of scaling up successful projects and/or replicating them elsewhere. The key characteristics of ‘complex’ interventions – numerous face-to-face interactions, high discretion, imposed obligations, pervasive unknowns – rarely fit neatly into standard evaluation protocols, requiring the deployment of a wider array of research methods, tools and theory. The careful use of such ‘mixed methods’ approaches is especially important for discerning the conditions under which ‘successful’ projects of all kinds might be expanded or adopted elsewhere. These claims, and the practical implications to which they give rise, draw on an array of recent evaluations across different sectors of development.
Rising standards for accurately inferring the impact of development projects have not been matched by equivalently rigorous procedures for guiding decisions about whether and how similar results might be expected elsewhere. These 'external validity' concerns are especially pressing for 'complex' development interventions, in which the explicit purpose is often to adapt projects to local contextual realities and where high-quality implementation is paramount to success. A basic analytical framework is provided for assessing the external validity of complex development interventions. It argues for deploying case studies to better identify the conditions under which diverse outcomes are observed, focusing in particular on the salience of contextual idiosyncrasies, implementation capabilities and trajectories of change. If a truly rigorous basis for generalizing claims about likely impact across time, groups, contexts and scales of operation is to be discerned for different kinds of development interventions, the canonical methodological principle that questions should guide methods, not vice versa, must be upheld.
There is an inherent tension between implementing organizations—which have specific objectives and narrow missions and mandates—and executive organizations—which provide resources to multiple implementing organizations. Ministries of finance/planning/budgeting allocate across ministries and across projects/programmes within ministries; development organizations allocate across sectors (and countries); foundations or philanthropies allocate across programmes/grantees. Implementing organizations typically try to do the best they can with the funds they have and to attract more resources, while executive organizations have to decide what and whom to fund. Monitoring and Evaluation (M&E) has always been an element of the accountability of implementing organizations to their funders. There has been a recent trend towards much greater rigor in evaluations to isolate the causal impacts of projects and programmes, and towards more ‘evidence-based’ approaches to accountability and budget allocations. Here we extend the basic idea of rigorous impact evaluation—the use of a valid counterfactual to make judgments about causality—to emphasize that the techniques of impact evaluation can be directly useful to implementing organizations (as opposed to impact evaluation being seen by implementing organizations as only an external threat to their funding). We introduce structured experiential learning (the ‘e’ we add to M&E to get MeE), which allows implementing agencies to actively and rigorously search across alternative project designs, using monitoring data that provide real-time performance information with direct feedback into the decision loops of project design and implementation. Our argument is that within-project variations in design can serve as their own counterfactual; this dramatically reduces the incremental cost of evaluation and increases the direct usefulness of evaluation to implementing agencies.
The right combination of M, e, and E provides the right space for innovation and organizational capability building while at the same time providing accountability and an evidence base for funding agencies.
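The core comparison behind MeE can be illustrated with a minimal sketch. The data, effect sizes, and function names below are entirely hypothetical and not from the paper; the sketch only shows how monitoring data from two design arms run in parallel within one project let each arm serve as the other's counterfactual when estimating which design performs better.

```python
# Illustrative sketch (not from the paper): within-project design variation
# as its own counterfactual. Two hypothetical design arms run in parallel;
# the difference in mean monitored outcomes estimates the design effect.
import random

random.seed(0)

def simulate_outcomes(effect, n):
    """Hypothetical monitoring data: outcome = baseline + design effect + noise."""
    return [10.0 + effect + random.gauss(0, 2) for _ in range(n)]

arm_a = simulate_outcomes(effect=0.0, n=200)   # status-quo project design
arm_b = simulate_outcomes(effect=1.5, n=200)   # alternative project design

mean_a = sum(arm_a) / len(arm_a)
mean_b = sum(arm_b) / len(arm_b)

# Arm A plays the role of arm B's counterfactual, and vice versa:
# no separate control population outside the project is needed.
estimated_effect = mean_b - mean_a
print(f"Estimated design effect: {estimated_effect:.2f}")
```

Because both arms sit inside the same project and draw on monitoring data the implementer already collects, the incremental cost of this comparison is low, which is the point of the MeE argument above.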