Building State Capability

The Building State Capability (BSC) program at the Center for International Development (CID) at Harvard University researches strategies and tactics to build the capability of public organizations to implement policies and programs.


Core Principles

The BSC team uses the Problem Driven Iterative Adaptation (PDIA) approach, which rests on four core principles:

local solutions for local problems
pushing problem driven positive deviance
try, learn, iterate, adapt
scale through diffusion

Podcasts


What is PDIA?

Recent Publications

Getting Real about Unknowns in Complex Policy Work

As with all public policy work, education policies are demanding. Policy workers need to ‘know’ a lot—about the problems they are addressing, the people who need to be engaged, the promises they can make in response, the context they are working in, and the processes they will follow to implement. Most policy workers answer questions about such issues within the structures of plan and control processes used to devise budgets and projects. These structures limit their knowledge gathering, organization, and sense-making to up-front planning, and even though sophisticated tools like Theories of Change suggest planners ‘know’ all that is needed for policy success, they often do not. Policies are often fraught with ‘unknowns’ that cannot be captured in passive planning processes and thus repeatedly undermine even the best laid plans. Through a novel strategy that asks how much one knows about the answers to 25 essential policy questions, and an application to recent education policy interventions in Mozambique, this paper shows that it is possible to get real about unknowns in policy work. Just recognizing that these unknowns exist—and understanding why they do and what kind of challenge they pose to policy workers—can help promote a more modest and realistic approach to doing complex policy work.
Read more

Can Africa Compete in World Soccer?

Andrews, Matt. 2022. “Can Africa Compete in World Soccer?”
In March 2021, the Confederation of African Football’s President, Patrice Motsepe, insisted that “An African team must win the World Cup in the near future.” This visionary statement is infused with hope—not just for an African World Cup victory but for a fuller future in which African men’s soccer competes with world soccer’s elite. This paper asks if there is any chance of this happening. It suggests a simple method to assess how a country competes as both a ‘participant’ and a ‘rival’ and uses this method to examine how Africa’s top countries compete in world soccer. This analysis points to a gap between such countries and the world’s best, which has grown in recent decades—even though some African countries do compete more over time. The paper concludes by suggesting that Africa’s hope of winning the World Cup is not impossible but demands more active work, focused particularly on ensuring top African countries compete with more high-quality competition more often. The conclusion also suggests that the research approach might be relevant beyond a study of African soccer. It could particularly help shed light on how well African countries compete (as participants and rivals) in the world economy.
Read more

Successful Failure in Public Policy Work

It matters if public policies succeed in solving societal problems, but a dominant narrative holds that policies fail ‘often’. A large-sample study discussed in this paper suggests that this is not accurate, however. The most common policy result in this study is more ambiguous—what I call ‘successful failure’. Such a result is achieved when a policy delivers enough low-level, short-term product to promise success, but ultimately (and repeatedly) fails to contribute to sustained high-level, long-term impact (addressing the problems citizens care about). Such ‘successful failure’ is endemic to public policy work, and a more pernicious result than outright failure: it allows policy design and implementation actors to associate with incomplete near-run success but insulate themselves from future failure (which they blame on factors and actors beyond their control), while simultaneously enjoying repeated demand for their work (because problems are never really solved).
Read more

Let’s Take the Con Out of Randomized Control Trials in Development: The Puzzles and Paradoxes of External Validity, Empirically Illustrated

The enthusiasm for the potential of RCTs in development rests in part on the assumption that the use of the rigorous evidence that emerges from an RCT (or from a small set of studies identified as rigorous in a “systematic” review) leads to the adoption of more effective policies, programs or projects. However, the supposed benefits of using rigorous evidence for “evidence based” policy making depend critically on the extent to which there is external validity. If estimates of causal impact or treatment effects that have internal validity (are unbiased) in one context (where the relevant “context” could be country, region, implementing organization, complementary policies, initial conditions, etc.) cannot be applied to another context, then applying evidence that is rigorous in one context may actually reduce predictive accuracy in other contexts relative to simple evidence from that context—even if that evidence is biased (Pritchett and Sandefur 2015). Using empirical estimates from a large number of developing countries of the difference in student learning in public and private schools (just as one potential policy application), I show that commonly made assumptions about external validity are, in the face of the actual observed heterogeneity across contexts, both logically incoherent and empirically unhelpful. Logically incoherent, in that it is impossible to reconcile general claims about the external validity of rigorous estimates of causal impact with the heterogeneity of the raw facts about differentials. Empirically unhelpful, in that applying a single rigorous estimate (or a small set of estimates) to all other contexts actually leads to a larger root mean square error of prediction of the “true” causal impact across contexts than just using the estimates from non-experimental data from each country.
In the data about private and public schools, under plausible assumptions, an exclusive reliance on the rigorous evidence has RMSE three times worse than using the biased OLS result from each context. In making policy decisions one needs to rely on an understanding of the relevant phenomena that encompasses all of the available evidence.
Read more
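The core empirical claim in the abstract above—that when treatment effects are heterogeneous across contexts, a single internally valid estimate applied everywhere can predict worse than biased context-specific estimates—can be illustrated with a minimal simulation. This is a hypothetical sketch, not the paper's actual data or method: the effect sizes, bias magnitude, and noise levels below are invented for illustration.

```python
import math
import random

random.seed(0)

# Hypothetical setup: 50 "contexts" (e.g. countries) whose true treatment
# effects vary substantially (heterogeneity sd = 0.3).
true_effects = [random.gauss(0.4, 0.3) for _ in range(50)]

# Context-specific non-experimental (OLS-style) estimates: biased upward by
# a constant 0.1 plus sampling noise, but anchored to each context's own effect.
ols_estimates = [mu + 0.1 + random.gauss(0, 0.05) for mu in true_effects]

# One internally valid "rigorous" RCT estimate from a single context,
# applied to every other context as if externally valid.
rct_estimate = true_effects[0] + random.gauss(0, 0.02)

def rmse(predictions, truths):
    """Root mean square error of predictions against true effects."""
    n = len(truths)
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(predictions, truths)) / n)

rmse_ols = rmse(ols_estimates, true_effects)
rmse_rct = rmse([rct_estimate] * len(true_effects), true_effects)

# When cross-context heterogeneity (sd 0.3) dwarfs the OLS bias (0.1),
# the biased local estimates predict each context's effect better than
# the single rigorous estimate transported across contexts.
print(f"RMSE using biased context-specific estimates: {rmse_ols:.3f}")
print(f"RMSE using one rigorous estimate everywhere:  {rmse_rct:.3f}")
```

The comparison flips, of course, if heterogeneity is small relative to the bias—which is exactly the external-validity assumption the paper argues is empirically unwarranted in the schooling data.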
More Publications

PDIA Toolkit



IPP Blogs