There has been further discussion about my appraisal comments regarding the Cambodia rural development program discussed in my last blog. The following is drawn from my email response, seeking to clarify the issues from my perspective…
The issues that you have raised in your feedback are a fractal of a broader debate taking place within the field of Design, Monitoring and Evaluation. I wholly agree with everything you have articulated in your last email, including the limitations of linear causality and the rigid ‘blueprint’ (Fowler) approach to the design of social change interventions (for which the logframe is one prominent tool). I agree with much of the critique of the logframe, including that of Chambers and others (e.g. Gasper, Smillie, Smutylo, Roche, Kaplan, den Heyer, Lavergne). However, while agreeing completely with your line of argument, my conclusion is slightly different.
The heart of the issue we are debating is complexity…the complexity of social change. More specifically, this is a debate about how we conceive of/represent social complexity (i.e. ontology) and how we understand/interpret social complexity (i.e. epistemology).
The majority of commentators in the international aid space now acknowledge the pragmatic reality that social change is complex; and hence recognise the need for more sophisticated approaches to respond to this complexity. The majority of people also believe that an iterative approach (e.g. action-learning) has integrity as a way of grappling with this complexity.
So, we may agree that the predominant view at this time rejects an ontology of simple-linear social change (as represented in the logframe), and embraces an epistemology of emergent knowledge rather than defined-in-advance, structured knowledge (as implied in a blueprint).
I believe we are singing from the same hymn book up to here (?).
Where I think we diverge is on the matter of scale. In other words, where in the scale of social complexity should the emergent/iterative/action-learning approach be most effectively applied? You compellingly argue that it should be applied all the way to the ‘coal face’ (i.e. within the ‘project level’). I argue that action at ‘the coal face’ should take place within a broader iterative framework (i.e. the ‘program level’) but the ‘theory of change’ of any single project should itself (to the extent possible) be defined. (N.B. ‘defined’ is importantly different to ‘small’…w.r.t metaphorically “taking smaller mouthfuls of the problem”).
While the academic in me warms to the former idea, the pragmatic (perhaps jaded?) practitioner in me has concerns, hence the queries I raised in my appraisal note. My concerns arise from two sources of ambiguity:
- ambiguity about operational management
- ambiguity about how lessons will actually be captured and applied (in practice)
The ‘operational-management-ambiguity’ arises from the pressure imposed on the implementation team to make the emergent/iterative approach work on the ground. Without knowing the specifics of the team’s capacity, my concern was based on a general observation that humans tend to need a plan and a limited focus in order to get things done. When an NGO worker gets up in the morning, rides to the office on his/her motorbike and asks him/herself “what am I going to do today?”, there needs to be a clear answer. Having a clear answer to that question is about the project plan. Without this clarity, the management and logistics required for day-to-day operations become even more difficult than they already are in the development context. Applying the emergent/iterative approach at the ‘coal face’ effectively delegates the work of the design team to the implementation team, thereby increasing their workload and capacity requirements.
The ‘practical-application-of-lessons-ambiguity’ recognises that the whole raison d’être for an iterative/emergent approach to development is to learn in the face of complexity. As you so succinctly articulated, “action-learning and participatory approaches are ways to help through the complexity of the real life development context”. But beyond the acknowledged ideal of being a so-called ‘learning organisation’, our persistent challenge is to identify practical mechanisms to actually capture and apply lessons. I like the practical definition of ‘learning’ by Gharajedaghi (who studies chaos and complexity…arguably a highly relevant field to this discussion :))…
“Learning results from being surprised: detecting a mismatch between what was expected to happen and what actually did happen. If one understands why the mismatch occurred (diagnosis) and is able to do things in a way that avoids a mismatch in the future (prescription), one has learned.”
I like this definition because it nails the nexus between emergent/iterative process and structured/practical action. As implied in my appraisal, I believe that adopting an emergent/iterative approach at the program level is an essential means to learning and effectiveness…personally, I think this should be the overarching program philosophy that guides all that any development agency does. Individual project designs then become a way for us to articulate “what we expect to happen” and M&E becomes a way for us to “detect a mismatch”.
If action at the ‘coal face’ is completely emergent, how do we hope to decipher lessons? At what frequency will the iterations of learning occur? Will these iterations be long enough (within the life of a single project) to actually manifest clear trends and lessons (given the seasonal nature of rural/agricultural life and the pace of social transformation)? What is there to safeguard the project from whimsically (aka ‘responsively’) being influenced by local power and politics (competing agendas) within beneficiary communities? Is there an assumption that there is consensus within the beneficiary community on what ‘the problems’ to be tackled are? Are there sufficient resources allocated to allow the implementation team to pull on the ‘hand-brake’, rigorously analyse the context/findings, and genuinely implement the critical reflection demanded by action-learning cycles?
In the context of the proposed Cambodian program, it may be all well and good for an appraisal committee to endorse an emergent/iterative/learning approach to field work, but at the end of the day, people wearing the NGO’s t-shirts have to extract meaningful information about the success/failure of their collective actions…they have to learn. If this cannot be achieved, the emergent/iterative approach to field work is likely to become a fumbling ad hoc journey into ineffectiveness.
This thinking has been much better articulated by several respected commentators. Most notably, Dennis Rondinelli wrote a pivotal book called “Development Projects as Policy Experiments”. The essence of his thesis is that we need to disinvest from the belief that any single project is an end in itself…it is unlikely to substantively ‘heal the world’. Rather, the best we can do is treat each individual project as a means…as a social experiment…so that we (at the program level) can iteratively learn our way towards effectiveness, by extracting from each intervention context precisely what the causes of failure and drivers of success actually are. Each project simply becomes one piece in a bigger puzzle of learning about social complexity. Several key thinkers within the world’s prominent INGOs are moving in this direction. For example, World Vision’s recently deployed ‘LEAP Framework’ clearly articulates an iterative program approach; as does ActionAid’s ‘ALPS Framework’.
I offer all of this humbly recognising that I am not on the ground in Cambodia. My distance may offer some objectivity, but equally, it may cloud my judgement. If there is a solid rationale for the design, you can count me as a supporter. My ‘black hat’ discourse is merely in the interests of pursuing effectiveness. Whatever approach is adopted, the challenge is to critically assess and learn from the outcomes that transpire. In the language of scientists, even a ‘null hypothesis’ offers useful learning.