Katherine Hay
Community of Evaluators, Working Paper Series, No-3, 2011
Abstract
It would be naïve and simplistic to assume that strengthening the supply of, and
the demand for, evaluation would make decision making transparent,
technocratic, rational, and linear. As Boyle, Lemaire, and Rist (1999) note,
evaluation findings may be 'drowned out' by other aspects of the political
context, and 'often for good reason' (pg. 5). The idea of evaluation field
building developed in this paper encompasses both the need to strengthen
the quality and practice of evaluation and the need to broaden the spaces and
platforms for using evaluation knowledge in decision making. It calls for a more
deeply contextualized understanding of what quality, rigor, and use should entail.
In doing so it embraces, rather than ignores, the complexity of decision making
and implementation systems.
Making evaluation matter in the cycle of policy, programming, research, and
evaluation that must constitute sound, evidence-based, aspirational
development is not simple. 'Aspirational' is used here intentionally, to reflect a
view of evaluation that goes beyond measuring performance or management-
oriented functions; evaluation, here, embodies a recognition that
development is not adequately meeting the needs of most citizens. This
recognition brings with it a need for shifts, critiques, and democratization
processes that fundamentally change the systems and institutions of decision
making, and the ways both connect with citizens. Evaluation can be used
to reinforce existing and dominant development systems, discourses, and
approaches. It can also be used to challenge them. Evaluation, here, is
conceptualized as part of dynamic, critical, and change-oriented
processes.
Taking the case of South Asia, this paper explores and develops a framework for
evaluation field building. The paper suggests the elements a robust evaluation
field should include and maps these against the current scenario in South Asia.
The paper then proposes strategies for field building to support and strengthen
this evolution (indeed, revolution) in evaluation practice and use.
The 'field' of evaluation
What is the 'field of evaluation' and why does it matter? Before attempting to
answer those questions, first some definitions.
Program and policy evaluation is the systematic application of research
methods to assess program or policy design, implementation, and
effectiveness, and the processes to share and use the findings of these
assessments. Evaluation practice is the 'doing' of evaluation. Evaluation
capacity is the ability to do evaluation, and evaluation use is the application of
evaluation findings in decision making.
The example illustrates that each part of the system creates pull on other parts
of the system. While the example highlights a scenario of positive pull
factors, elements of the system can also create drag on, or weaken, other
elements. The parts of the system co-evolve within broader contexts that are
also co-evolving. Take, for example, the case of university programs in evaluation.
Demand for, and supply of, such programs is happening within a broader set of
changes in university environments, employment opportunities for evaluators,
pulls from other sectors and disciplines for faculty and students, etc., all of
which create incentives and disincentives for this field-building activity.
Building the field of evaluation entails understanding the connections between,
and co-evolution of, the key elements of a field of practice. This paper proposes
a framework of evaluation field building that includes four elements: people,
knowledge, spaces, and institutions.
Figure 2: Elements of the evaluation field
A field has spaces or forums for building the knowledge, skills, and
credentials of members. A field has organizations that facilitate
coordinated action, spaces for information exchange and analysis, and
systems, guidelines, and norms.
Who does evaluation matters. Calling this the real evaluation gap, Carden (2007)
argues: 'It is critical that the citizens, development researchers and professionals
of southern nations lead the way in building the field of evaluation research and
practice in their region.' Despite the efforts of evaluators and researchers all over
the region, there is no identifiable mass of South Asian evaluation experts
based in, and/or supporting, the evaluation work of academia, government, policy
research organizations, and development organizations. At the implementation or
grassroots level, and among resource groups supporting grassroots development,
there are thousands of people and groups engaged in, and learning from, innovative
and contextualized development work. Such work is often evaluative in nature.
It is difficult to know the breadth and depth of this work, because the work, and
the evaluators who do it, are dispersed and unconnected. This is a gap, but one
that also presents untapped promise. The grounded experience of such practitioners
can help situate evaluation theory, methods, and application within a framework of
use and practice.
A challenge is the sheer number of skilled evaluators needed, given the size of
programs and populations in the region. Shiva Kumar (2010) comments on the
paradoxical situation of evaluation capacity in India, noting:
At a macro level, India has a reasonable (even impressive) capacity to undertake
evaluations. Indeed, many well-established universities, policy think-tanks,
social science research institutions and colleges both within and outside
government have a pool of experienced evaluators. However, on closer
examination, we find that there simply aren't enough institutions with the
capacity to conduct evaluations for a country of India's size and diversity. Also,
evaluation capacity is unevenly spread across the country.... Very few
countries have evolved a strategy to develop adequate capacity to carry out
evaluations at the state, district and village levels (pg. 239).
many other methodologies originating from the north were tested in, and are
now commonly used in, development evaluation in South Asia (Chambers,
1994; Ashford and Patkar, 2001; Davies and Dart, 2003; Earl et al., 2001). Most of
these methodologies emerged from northern roots or have been articulated
from the north. If some of these approaches have been owned, co-developed,
or modified in South Asia, how did this happen? Are South Asian evaluators
implementers and managers, or are they also engaging in evaluation research
and theory? Clearly both are important. The point is not to suggest a false
dichotomy; the connection between theory and practice leads to innovation in
both. However, a lack of conceptual work on evaluation in South Asia limits the
advancement of the evaluation field both in South Asia and beyond. The field of
evaluation needs continued and deepened theory and practice that is rooted in
contexts, needs, and cultures.
In addition to evaluators and evaluation researchers, a strong evaluation field
needs leaders. Who are the lead institutes on development evaluation in the
region? Who are the leading writers and thinkers on development evaluation
and evaluation research in South Asia? Organizations with a mandate for, and
expertise in, rigorous multi-method evaluation are few or non-existent in some
countries in South Asia. The work of South Asian thought leaders is largely
unseen in existing evaluation forums.
If we cannot identify them, is it because they don't exist or because this
leadership is nascent? Or are they there, but we don't know who they are?
Limited numbers and invisibility are both problems, though different in nature
and demanding different responses. How can local and global evaluation
communities and proponents, including state and non-state actors and
organizations, identify and support emerging leaders to take on increasing
leadership? What would this support look like?
The limited evaluation leadership in South Asia today does not only reflect an
absence of evaluation expertise. It also reflects a lack of spaces to share
expertise, be identified as a leader, guide others, and support and inform
evaluation field building. Being a leader, by definition, entails being recognized
as such by others, and in general that requires spaces or forums where
leadership can emerge and be articulated. Such spaces and structures are
explored in the next section.
stopped with one issue. Even if we take non-evaluation-specific journals, where
key issues of public policy are raised and discussed, and examine the extent to
which evaluation results are shared, this absence remains.
Evaluation knowledge
A field has a knowledge base, or credible evidence of results, derived from
research and practice.
2. See www.communityofevaluators.org
For example, the changing role of movements and the judiciary in pushing for
evidence-based policy making in South Asia provides new opportunities for
institutionalization efforts. Both suggest a need to expand thinking about who
demands evaluation, and to consider and target civil society players, the
judiciary, and social movements as potential evaluation users. Evaluation
needs to serve governments, donors, local decision makers, and citizens,
particularly those citizens most in need of the gains of development. A
movement in this direction could, for example, connect evaluation to right-to-
information campaigns, accountability and anti-corruption movements, and
gender and rights-based movements; such shifts could expand the questions
evaluation is exploring and deepen demand for making evaluations more
publicly accessible and available.
Evaluation field building strategies that work with multiple platforms and
players probably make sense everywhere, but particularly so in contexts
where governance is weak, fluctuating, or corrupt. Weiss's (2009)
understanding of policy windows is helpful here. Some windows of use are
closing just as others may be opening. For example, there may be demand from
government for particular types of data to address management questions
while windows for evaluation as a space for critique of policies and programs,
or as a tool for democratic dialogue, may be closing. Many policy makers
complain about the quality of evaluation but also shield politically important
programs from the lens of evaluation, or are resistant to learning from evaluation.
Work in and on the institutional setting should move towards increasingly open
and supportive policies and cultures of evaluation. Such shifts are by nature
negotiated, involve power, and require strategy and responsiveness to 'open
windows' by policy entrepreneurs and advocates. This work is arguably one of
the functions that leaders in evaluation, as they emerge, can contribute to.
Development is not neutral or technocratic, and nor is evaluation field
building. Field building will require astute strategizing, alliance building, and
entrepreneurship.
How does a field get built? Strategies for field building and evaluating field
building
Recognizing that there is no 'best practice' blueprint to follow, from north or
south, on evaluation field building, the interesting question that follows is:
What unique mix of evaluation field building could lead to evaluation
leadership, quality, and innovation in South Asia? The elements of the
evaluation field relate to each other in different ways, and these relationships
evaluation researchers, and the spaces and structures to support their work, is
an important starting point to which other elements of the evaluation field
can connect. Strengthening the people involved in evaluation could include
capacity building programs, graduate curricula, and executive training. This
work could encompass prospecting for, identifying, engaging with, and creating
incentives to encourage and reward leadership, including connecting with
researchers and social scientists from various fields to draw them into the field
of evaluation. There are many evaluation craftspersons (to use Toulemonde's
1995 language) flourishing in the region. However, they are dispersed,
unconnected, and often unlinked to broader debates, systems, and theory
building. To learn from their work more systematically, and to connect the
experience and knowledge of these practitioners into the field of evaluation in
South Asia, there is a need for networking, associations, and communities of practice.
In the context of evaluation field building, the idea of spaces for learning goes
beyond an instrumentalist approach that sees the overall goal as improving
evaluation for donors; it extends to supporting researchers and evaluation
practitioners to build evaluation communities, culture, theory, and practice in
support of local, national, and regional development strategies and programmes.
Field building entails experimentation and indigenous innovation, building on the
best ideas available but creating something better. Evaluation field building
should include support for open communities of practice and experimentation
with ways to accelerate learning across, and from, bottom-up processes. It
could also include supporting writing, exchange of ideas at events, meetings,
networks, and conferences, and fostering structures to support information
exchange and problem solving within and across groups, such as
meetings, listservs, or virtual spaces.
Work on these foundations of field building will help build other aspects of the
field. For example, strengthening the capacity of Southern evaluators through
their involvement in evaluations will, by extension, also contribute new
knowledge to evaluation.
Support and strategies should recognize and reflect the multiple timelines
involved in development and development research. There are immediate
problems and challenges where quality evaluation can help distinguish what is
working from what is not and bring new evidence to bear on pressing policy and
programming questions. Evaluators and researchers must bring the tools,
skills, and practice of evaluation to bear on these questions for field building
work to be relevant. There are also complex problems of understanding change
processes, or how systems shift; work on building new knowledge about ways to
understand these can be applied to a range of persistent challenges in the medium
term. Finally, building and supporting leadership and structures of evaluation
practice, in parallel with (and connected to) the short- and medium-term
strategies, builds over the long term the future ability of evaluators to respond to